danwinslow

Dragoncart driver

One thing that comes to mind is that the deferred VBI is executed with interrupts enabled. If your deferred VBI routine occasionally takes so long that it recurses into itself, the result could be trashed zero page variables.


Hmm. During normal operation the deferred VBI does occasionally take longer than the VB interval, although in these testing scenarios that won't happen as I'm not actually processing messages. The normal code path prevents recursion with a gate: if it re-enters before the previous pass is finished, it immediately RTIs again. But you two raise some interesting points. I'm not manipulating SEI/CLI at all, and I probably should be. I am not sure what happens if I SEI inside a VBI... does it prevent the next one from firing, or does it do nothing?

 

The really odd thing about this, though, is that it does not happen unless I am strobing those $d500 variables...unless I am a victim of a really implausibly unlucky set of testing sessions.

Edited by danwinslow


If you set the interrupt disable flag, the stage 1 VBL will fire (since it's an NMI), but the stage 2 (deferred) VBI will not. If you do an SEI from INSIDE the stage 1 VBL, I imagine the deferred one would be skipped then too.


SEI within stage 1 won't do anything since the I flag is already set. CLI inside stage 1 would have a chance of breaking IRQ handlers.

 

You could also try assembling the code at other addresses, in case it's a quirky issue with code/data location.


SEI/CLI only work outside interrupts.
Place them around any code that relies on data a maskable interrupt could change: updating a pointer that might also be updated within an interrupt, for example.

There's not much you can do about an NMI other than some sort of software semaphore around code you need to protect.
Set a flag before the critical code, clear it afterwards.
Then you have to decide what the interrupt will do if a semaphore flag is set and it can't complete whatever it was supposed to do.

Non-maskable interrupts are a bad idea for most hardware designs unless it's to detect some sort of critical failure.
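The flag scheme described above can be sketched in C. All the names here are my own illustration, not the driver's code; on the real machine the "NMI handler" would be the 6502 VBI routine, and the two-byte store would be two separate writes.

```c
#include <assert.h>

/* Illustrative software-semaphore sketch: the main code raises `busy`
 * around a multi-byte update, and the (simulated) NMI handler defers
 * its work when it finds the flag set. Names are assumptions. */

static volatile unsigned char busy = 0;     /* semaphore flag           */
static volatile unsigned char pending = 0;  /* work the NMI deferred    */
static unsigned int shared_ptr = 0;         /* value an NMI must never
                                               see half-updated         */

static void nmi_handler(void)               /* stand-in for a real NMI  */
{
    if (busy) {            /* critical section in progress: don't touch */
        pending = 1;       /* remember we skipped, finish later         */
        return;
    }
    shared_ptr += 1;       /* normal NMI-side update                    */
}

static void update_pointer(unsigned int v)
{
    busy = 1;              /* set flag BEFORE touching shared state     */
    shared_ptr = v;        /* on a 6502 this is two separate stores     */
    busy = 0;              /* clear flag afterwards                     */
    if (pending) {         /* service any work the NMI deferred         */
        pending = 0;
        shared_ptr += 1;
    }
}
```

The key point is that the flag goes up before the first byte of the update and comes down only after the last, so the handler can always tell whether the data is mid-update.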


SEI within stage 1 won't do anything since the I flag is already set. CLI inside stage 1 would have a chance of breaking IRQ handlers.

Yes: I got that arse about front. For some reason I remembered seeing CLI in the OS stage 1 VBL, but I'm obviously mistaken there too.

 

Not much you can do about a NMI other than some sort of software semaphore around code you need to protect.

I have NMI code which pulls the flags off the stack to see if the I flag was set on entry. It's not the most elegant solution, though.


Very interesting thread. Much of this is beyond my programming knowledge; however, on the hardware side of things I can tell you that the XE machines sometimes have interrupt timing issues due to unsteady phi2 signals. I wonder if the same thing would happen on an XL.


Hi folks, just an update.

After having proven out the whole relocatable device driver framework, I'm now going through a rewrite of the core. Up until now it's been a modified version of IP65, but I am really taking it apart and putting it back together. The goal is to make it smaller and, especially, much faster.

  • Stripping out all multi-computer references and old code segments for Apple & Commodore.
  • Centralizing error numbers and strings, and converting error numbers to Atari 'DOS' style.
  • Changing the packet input routines to defer reading the entire packet until the header has been looked at. Getting just the headers instead of the whole message allows me to skip the data copy when I'm not interested in that message. Previously the entire message would be read before any decision was made about whether it was wanted. Of course, the packet filter on the chip blocks anything I'm REALLY not interested in, but there's still quite a bit of broadcast and other traffic that gets past it on a busy network that I would waste a bunch of time copying.
  • Dropping support for internal application layers such as Telnet, TFTP, etc. IP65 was a mix of a core and a couple of small applications. Stripping all that out reduces the size and also places application code outside of the driver, where it should be.
  • Changing the send routines so that UDP and ICMP packet sends are always driven by the using application and not the driver. There are a couple of small things like ARP and ICMP responses that are sends driven by the core, but any application will always be using its own time (i.e., not VBI time) to drive the sending of its packets. I'm not sure this will work the same for TCP, as that will have a queue of packets, but we'll see. I'm really trying to minimize VBI time usage.
  • Pushing the concept of a 'socket', a structure that supports a connection, all the way into the core. I had wrapped a kind of socket-y layer around the old core, but it needs to be integrated in. Right now I'm planning on 4 sockets usable by an application, in either UDP, TCP, or ICMP. This also allows me to drop a bunch of packet data copies that were being done between the layers, by changing a fixed internal send/recv buffer to an externalized (i.e., non-banked) location. The goal is that all message data is received into and sent from the user buffer 'in place', with no copies being made. This does mean that socket data buffers have to reside in non-banked memory, and not between $4000-$8000 in main memory. Kind of a pain, but the speed gain is too big not to take.
  • Also doing code refactoring, reorganizing, and tidying up.
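The header-first idea in the third bullet might look roughly like this in C. The offsets assume a plain Ethernet/IPv4/UDP frame with no IP options, and the function and variable names are my illustration, not the driver's actual code:

```c
#include <string.h>

/* Header-first filtering sketch: peek at the headers, and only copy the
 * payload out of the NIC buffer if the packet is for a port we care
 * about. Uninteresting packets cost us no data copy at all. */

#define ETH_HDR   14
#define IP_HDR    20                        /* assumes no IP options    */
#define UDP_DPORT (ETH_HDR + IP_HDR + 2)    /* dest port, big-endian    */
#define PAYLOAD   (ETH_HDR + IP_HDR + 8)    /* past the 8-byte UDP hdr  */

static unsigned int wanted_port = 45000;    /* illustrative port number */

/* Return 1 and copy the payload if we want this packet, else 0. */
static int accept_packet(const unsigned char *frame, int frame_len,
                         unsigned char *out, int *out_len)
{
    unsigned int dport;
    if (frame_len < PAYLOAD)
        return 0;                           /* runt frame               */
    dport = (frame[UDP_DPORT] << 8) | frame[UDP_DPORT + 1];
    if (dport != wanted_port)
        return 0;                           /* skip the data copy       */
    *out_len = frame_len - PAYLOAD;
    memcpy(out, frame + PAYLOAD, *out_len); /* copy only wanted data    */
    return 1;
}
```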

Yes, I am really working on that issue. The central problem is that for speed's sake, the packet buffers have to be in non-banked address ranges, that is, outside of $4000-$8000, where a user program can get directly at them without any copying going on. So, for the UDP case, I'm looking so far at about 3K of non-banked space being used for the driver code and packet buffers. It may be that I'll offer a location option, i.e., a driver that embeds the packet buffers for speed within itself, and one that lets the banked code hold the buffers but is slightly slower due to copying. Going under the OS doesn't really help in terms of access speeds, as copies are still required to provide user-space access, and it introduces some other issues of its own.

 

The speed issue really only pertains to UDP, since what I am mostly trying to do is provide a gaming layer fast enough to directly transport joystick/keyboard events and game status data for multiplayer games. The copy speed really isn't that bad for normal stuff; I mean, at most you'd be copying a couple of pages each time you wanted to get/send a packet, but for gaming access I want to maximize speed. TCP will have to be kept in banked memory and copied out; there's no way I can buffer a TCP stream without burning up a ton of non-banked space.

 

The rewrite is going well, I'm up to the ICMP processor now. UDP probably will start this weekend.


I think it all depends on your gaming code. When you are gaming and sending only the remote movement over TCP, that would be just a few bytes, which could fit in a single packet. Sending/receiving a single packet would take less than 3KB, I think.

 

Are you building your new code modularly? I hope so, because the full stack isn't always needed, so for some games parts could simply be left out to save memory. But a full stack would be useful for programmers who want to write apps that make use of this stack.


Yes, although gaming packets will be sent via UDP, which skips a lot of the overhead of TCP. My target for gaming is to be able to automatically transmit and receive a 256-byte page every 60th of a second. UDP can lose packets, so there will be a sequencing and acknowledgement protocol 'under the hood', but it still winds up producing about half the overhead of a normal TCP connection, regardless of the number of bytes being sent/received. The idea is that the game sets up the communication and then simply reads joystick movements and other data directly from the packet buffer in the game loop. That should be fast enough for most gaming applications. Expansions to more than 2 systems being paired, and how a server/client system might run, still need to be explored. It may sound like UDP is a bad choice, but actually this is how nearly all modern game clients that communicate with each other or with a server work.
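One minimal way to sketch the 'under the hood' sequencing idea, assuming a one-byte sequence number and a last-state-wins receive policy. The naming is mine; the actual wire format isn't described in this thread:

```c
/* Last-state-wins receive filter for a UDP gaming layer: each packet
 * carries a sequence number, and anything not newer than what we've
 * already seen is dropped, since the newest state supersedes it. */

static unsigned char last_seq = 0;

/* Accept only packets newer than the last one seen, treating the
 * one-byte sequence space as circular (signed 8-bit difference). */
static int accept_seq(unsigned char seq)
{
    signed char ahead = (signed char)(unsigned char)(seq - last_seq);
    if (ahead <= 0)
        return 0;             /* duplicate or stale: drop it */
    last_seq = seq;
    return 1;
}
```

The circular comparison means the counter can wrap at 256 (e.g. one tick per VBI) without ever confusing old packets for new ones, as long as packets don't arrive more than 127 frames late.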

 

As far as modularity goes, I was thinking of 3 options:

1. The bare minimum required to support the gaming interchange as described above. This will probably involve much less than 3K of real memory.

2. A UDP/ICMP stack that supports up to 4 sockets. I'm not sure how much this will take, but it looks like 2-3K at the moment. I'd like to see someone use this, for instance, to develop a way to mount and share disks over the net :)

3. A UDP/ICMP and TCP stack, with 4 sockets possible for each protocol. The TCP buffers will definitely be hosted in extended memory, as a chain of buffers is needed, with possibly many packets waiting on the completion of previous packets' operations, retransmit requests, etc. So there will be a copy delay in accessing TCP data, but of course it depends on the size; for small data sizes it's typically barely noticeable.
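The buffer chain mentioned in option 3 could be sketched as a simple linked list of unacknowledged segments, freed as the peer's cumulative ACK passes them. This is purely illustrative; the real driver's extended-memory bookkeeping may differ:

```c
#include <stdlib.h>

/* Sketch of a TCP retransmit chain: unacknowledged segments stay
 * queued so they can be retransmitted, and are freed once the peer's
 * cumulative ACK number covers them. */

struct seg {
    unsigned int  seq;     /* sequence number of first payload byte */
    unsigned int  len;     /* payload length                        */
    struct seg   *next;    /* next pending segment in the chain     */
};

static struct seg *chain = NULL;

static void queue_segment(unsigned int seq, unsigned int len)
{
    struct seg *s = malloc(sizeof *s), **p = &chain;
    s->seq = seq;
    s->len = len;
    s->next = NULL;
    while (*p)
        p = &(*p)->next;   /* append at the tail of the chain */
    *p = s;
}

/* Drop every segment fully covered by the cumulative ACK. */
static void ack_received(unsigned int ack)
{
    while (chain && chain->seq + chain->len <= ack) {
        struct seg *done = chain;
        chain = chain->next;
        free(done);
    }
}
```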


UDP is not a bad choice. I had recently been testing a remote gaming client that would allow multiple users on the Internet to play a video game hosted on a server (arcade and console games from before the internet that were never meant to support networks). The first swing at it used TCP, and it was a dog. In spite of megabit speeds on the modern internet, the TCP protocol overhead made what looked like a simple coding problem into a stuttering mess. A modern stack automatically waits for missing packets, retransmits, and does its best to keep packets in order. That plays havoc with a gaming client that has to send a lot of tiny packets quickly.

 

When it was switched to UDP, it was an order of magnitude faster and throughput was more consistent. Yes, it can lose packets. You design the game server not to care. The missing data is important only at that single moment in time; the next packet that does arrive makes the missing packet irrelevant.


TCP is awful for real-time gaming. Drop a packet and you're into two-minute retry delays and the like. UDP is the only way if you want something playable under poor connectivity conditions. Or good ones, for that matter.


Alrighty. Doing some throughput and stress testing. One of the things I've always been most concerned about is throughput. So right now, I'm bombing it with ping packets using nping.exe while a basic loop is running:

 

10 print A

20 a=a+1

30 goto 10

 

My ping packet size is 228, which is close to the 'standard' packet of 256 that I wanted to support for gaming. Two packets per VBI (a send or receive plus a short ack/nak) is my goal, and right now it's 'failing', meaning it's overrunning into the next VBI after about 100 packets at 17ms intervals (one per VBI). For ICMP echo I'm not doing any copying of the packet at all (except out from the card), so I'm not sure what's going on. Interestingly, the BASIC program carries on pretty well... you can see it slow down a bit under stress, but not nearly as much as I'd feared.

 

However, without BASIC running the stack handles one per VBI very well. Not sure how BASIC is stealing time from the VBI. Will keep this thread posted with developments.


Er, belay all that. I had some internals controlling how many passes per VBI it made set wrongly. Set correctly, it seems to process fine at two 228-byte packets per VBI, that's one echo request coming in and the generation and sending of a corresponding echo reply. BASIC seems to cruise on fine while that's going on.


OK, so I've been struggling to get this compatible with BASIC. I think I've gotten it pretty solid at this point, although there's way more ZP save/restore than I wanted to spend time on. I think towards the end of development I'll make a pass where I combine/reuse as many ZP locations as I can, and that should help. Anyway, for a final test of BASIC compatibility, and of whether there would be any CPU left over for a game to actually run in, I loaded a simple BASIC version of Breakout! called Bounce. It's a decent BASIC version of Breakout, kinda slow, but you'd expect that. So I loaded up the driver and played through a game, and I couldn't tell any difference between playing with the driver running and without it. That's good. Then I started up nping on my PC and started bombing the Atari with 60 200-byte ping echo packets per second. I could definitely tell the game was slower, but it was still playable... AND it didn't crash or behave oddly in any way, and it echoed 100% of the ping requests. So all in all, I'm very happy with that outcome.

 

My original goal was a UDP driver that could service at least 60 256-byte packets per second, 'service' meaning a receive and a send, with some possibility of a game running at the same time. I feel much more confident now that it's possible. BASIC games? Yes, if the packet is small and the rate low. Action or quick or assembler games? Absolutely. The driver footprint is 2K of main memory and one extended bank. It does steal a lot of VBI time, though, so I'm not sure yet how a heavily VBI- or DLI-dependent game would run.
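As a rough sanity check of that goal (back-of-envelope numbers, not measurements; the 1789773 Hz figure is the NTSC 6502 clock):

```c
/* One 256-byte receive plus one 256-byte send per 60 Hz frame. */
static unsigned long bytes_per_second(unsigned long pkt_bytes,
                                      unsigned long pkts_per_frame,
                                      unsigned long fps)
{
    return pkt_bytes * pkts_per_frame * fps;    /* raw payload rate */
}

/* CPU cycles available between VBIs on an NTSC machine. */
static unsigned long cycles_per_frame(void)
{
    return 1789773UL / 60;    /* roughly 29,829 cycles per frame */
}
```

So the target works out to about 30 KB/s of payload, with under 30,000 CPU cycles per frame to move 512 bytes, run the stack, and still leave time for the game, which is why the zero-copy buffers matter so much.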

 

*update*

I saw I had a copy of Yoomp! handy, so I tried running that with the driver. Yoomp! was fine, but the driver stopped responding... I expect Yoomp! took over the VBI more or less completely.


Ok, making progress. Here is a UDP send from BASIC. It's crude, but it works. Very encouraging. I may write a small USR() handler to make things a bit easier for BASIC.

10 OPEN #2,0,1,"I:"
20 XIO 20,#2,1,1,"":REM open channel 1 as UDP
30 I=(832)+2*16:REM calc to IOCB
40 C=(PEEK(I+13)*256)+PEEK(I+12):REM get channel address
50 PRINT "CHANNEL IS AT ";C
55 REM poke 192.168.2.2 into dest address for channel to get MAC
60 POKE C+22,192:POKE C+23,168:POKE C+24,2:POKE C+25,2
65 REM poke command 8 - 'ARP RESOLVE' into channel
66 POKE C,8
67 TRAP 2000:REM trap for 'mac not found'
70 XIO 19,#2,1,0,"":REM 19=process the channel instruction
80 REM if we get here, we have destination MAC
90 DIM A$(32):A$="HI THERE FROM ATARI"
95 REM GET OUTMSG BUFFER ADDRESS
100 OUTBUFF=(PEEK(C+9)*256)+PEEK(C+8)
105 REM POKE STRING INTO IT
110 FOR F=1 TO LEN(A$)
120 POKE OUTBUFF+F-1,ASC(A$(F,F))
130 NEXT F
140 REM POKE BUFFER LENGTH
150 POKE C+11,0:POKE C+12,LEN(A$)
160 REM POKE DEST PORT 45000
170 POKE C+5,INT(45000/256):POKE C+4,45000-(PEEK(C+5)*256)
180 REM POKE SOURCE PORT 44000
190 POKE C+3,INT(44000/256):POKE C+2,44000-(PEEK(C+3)*256)
200 REM SET COMMAND 4 (SEND_UDP)
210 POKE C,4
220 REM CALL THE PROCESSOR TO SEND MSG
230 XIO 19,#2,1,0,""
240 REM MESSAGE SENT!
250 CLOSE #2
260 END
1990 REM if its not found (err 131), keep trying
1995 REM this gives the arp packet time to get back
2000 ERR=PEEK(195):L=PEEK(186)+PEEK(187)*256:REM get error info
2010 PRINT ERR;" AT LINE ";L
2020 IF ERR<>131 THEN END:REM some other kind of error
2030 FOR N=1 TO 10:NEXT N:REM wait a bit for arp packet to get back
2040 GOTO L:REM go retry
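The POKE pairs for the ports and the buffer length above are just 16-bit values split into high and low bytes, and the PEEK pairs reassemble one. The same arithmetic in C:

```c
/* 16-bit split/join, matching the INT(v/256) and v-(hi*256)
 * arithmetic used by the BASIC POKE/PEEK pairs. */

static unsigned char hi_byte(unsigned int v) { return v / 256; }
static unsigned char lo_byte(unsigned int v) { return v % 256; }

static unsigned int join_bytes(unsigned char hi, unsigned char lo)
{
    return hi * 256u + lo;
}
```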


Making what I consider to be good progress. The refactored driver is back to where it was capability-wise when I decided to refactor; the organization is much better and it uses much less RAM. The BASIC interface capabilities are much more stable. Here's an example of a DNS resolve in BASIC:

5 REM DNS Resolve in BASIC
10 CHANNEL=1
20 GOSUB 1000 : REM open the handler and a channel
30 DIM DNS$(100):DNS$="WWW.CISCO.COM"
40 GOSUB 4000
998 END
999 REM ******************************
1000 REM open handler and CHANNEL
1010 OPEN #2,0,1,"I:"
1020 XIO 20,#2,CHANNEL,1,""
1030 I=(832)+2*16:REM calc to IOCB
1040 C=(PEEK(I+13)*256)+PEEK(I+12)
1050 U=(PEEK(I+15)*256)+PEEK(I+14)
1060 REM PRINT "CHANNEL AT ";C;", USR AT ";U
1070 RETURN
1999 REM *****************************
2000 REM CLOSE CHANNEL IN 'CHANNEL'
2010 RETURN:REM NOT SUPPORTED YET
2999 REM *****************************
3000 REM PROCESS CHANNEL 'CHANNEL'
3010 XIO 19,#2,CHANNEL,0,""
3020 RETURN
3999 REM *****************************
4000 REM SET UP FOR DNS
4010 A=ADR(DNS$)
4020 BO=C+32:REM Buffer offset
4030 POKE BO+1,INT(A/256):POKE BO,A-(PEEK(BO+1)*256)
4040 BL=C+34:REM BUFFER LENGTH OFFSET
4050 A=LEN(DNS$)
4060 POKE BL+1,INT(A/256):POKE BL,A-(PEEK(BL+1)*256)
4070 CC=C+0:REM CHANNEL COMMAND OFFSET
4080 POKE CC,7:REM COMMAND_RESOLVE
4090 GOSUB 3000:REM PROCESS CHANNEL
4100 CO=C+22:REM DESTINATION ADDR OFFSET
4200 PRINT DNS$;" IS AT IP ";PEEK(CO);".";PEEK(CO+1);".";PEEK(CO+2);".";PEEK(CO+3)
4998 RETURN

And, because someone mentioned C, here's the same thing in C:

#include <conio.h>
#include <channels.h>
#include <errors.h>
void main(int argc,char *argv[])
{
  unsigned char ret;
  struct aip_struct *channel;
  open_ethernet(2);
  init_channel(1,PROTO_ANY,&channel);
  cprintf("resolving %s...\r\n",argv[1]);
  channel_resolve(channel,argv[1]);
  if ( channel->errnum == 1)
  {
    cprintf("  ip: %i.%i.%i.%i\r\n",channel->daddr[0],
                                    channel->daddr[1],
                                    channel->daddr[2],
                                    channel->daddr[3]);
  }else{
    cprintf("%s\r\n",aip_errmsg(channel->errnum));
  }
  close_channel(channel);
  close_ethernet();
}

C has all the XIO bits hidden behind the 'channels.h' wrappers just to make things tidier, but it's doing the exact same thing that the BASIC is.

 

So, I'm pretty close to releasing an 'alpha' driver for messing around with. Among many other amazing things, Altirra has very good support for the Dragoncart thanks to Phaeron being a frikkin genius. I will release a small ATR with the driver and some example stuff on it to mess around with, plus a description of how to set up Altirra for the Dragoncart. If you actually HAVE a Dragoncart, all the better.

 

I am currently working on finalizing the UDP support, and I'll release the alpha test after that.


Just a small update-

Things are going very well with the TCP part of the driver. For instance, I am able to use BASIC to do a DAYTIME (port 13) request to a NIST server, carry on the conversation, and get back the date and time from the server. Development is continuing now on the 'listen' side of things, to accept incoming connections. It seems that my original goal of getting the entire stack into one 16K extended bank is going to work. For buffers it would probably be handy to use a second bank, so I'm allowing for that as well, although it makes the buffer copies a lot slower. Best performance comes from using your own buffers in main memory outside of the banking window; you are able to specify the size and location of your own buffers if you wish. So, anyway, it looks like the goal of a driver usable from any language with CIO, with a full TCP stack, is feasible. I expect to wrap up version 1.0 around the turn of the year, and will release the code at that time too.


Making progress. Moving the CIO functions to a more encapsulated version, where you don't need to do much outside of normal CIO calls. For example, the call to open a TCP socket:

 

OPEN #2,12,0,"I1:TCP:44000:13:time-a.nist.gov"

 

Opens a TCP socket as channel 1 and connects to the given host, using 44000 as the local port and 13 as the remote port. It will block until a successful connect is made or a timeout happens. Right now the timeout is hardcoded, but I plan to use the auxiliary byte (3rd parameter) as the timeout/nonblocking flag value in the future. The direction byte (12) doesn't really do much, but CIO wants it set to support later calls.
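From C (or anything that can build a string), the same device spec can be composed with a plain sprintf. The "I&lt;channel&gt;:&lt;proto&gt;:&lt;local&gt;:&lt;remote&gt;:&lt;host&gt;" layout follows the example above; the helper name and buffer size are my own:

```c
#include <stdio.h>

/* Build a driver device-spec string like the one passed to OPEN,
 * e.g. "I1:TCP:44000:13:time-a.nist.gov". */
static void make_spec(char *out, int chan, const char *proto,
                      unsigned local_port, unsigned remote_port,
                      const char *host)
{
    sprintf(out, "I%d:%s:%u:%u:%s",
            chan, proto, local_port, remote_port, host);
}
```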

 

As it happens, the above remote port (13) is the DAYTIME protocol port, and the host will spit a time string back at you, something like "Tuesday, February 22, 1982 18:45:59-PST". The TCP driver will translate it to ATASCII and store the data in a ring buffer (for channel 1, in this case), which can then be read by:

 

INPUT #2,A$

 

For a TCP connection, INPUT would give you everything up to a line termination, whereas for a UDP/ICMP connection it would give you the whole data packet. I'm not sure what GET/PUT should do... I guess GET would give you one character at a time, and PUT would send one character, which seems kind of dumb, but whatever. Maybe PUT could be used to build up a buffer without regard to line termination and then be triggered to send the whole thing somehow. Of course, BASIC is the only one with an issue here, due to the formatting of its GET/PUT commands. CIO calls from other languages won't have these restrictions, and GET/PUT would be used to send/receive data blocks without regard to line termination.
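A generic sketch of the per-channel ring buffer this implies, with an INPUT-style read that stops at the ATASCII EOL ($9B) and a GET-style read that drains a fixed count. Sizes and names are illustrative, not the driver's:

```c
/* Per-channel ring buffer sketch: the driver deposits received bytes
 * at the head; reads drain from the tail. */

#define RB_SIZE 256
static unsigned char rb[RB_SIZE];
static unsigned int rb_head = 0, rb_tail = 0;  /* head: write, tail: read */

static void rb_put(unsigned char c)            /* driver side */
{
    rb[rb_head % RB_SIZE] = c;
    rb_head++;
}

/* INPUT-style read: copy bytes up to and including ATASCII EOL (0x9B). */
static int rb_readline(unsigned char *out, int max)
{
    int n = 0;
    while (rb_tail != rb_head && n < max) {
        unsigned char c = rb[rb_tail % RB_SIZE];
        rb_tail++;
        out[n++] = c;
        if (c == 0x9B)
            break;                             /* stop at end of line */
    }
    return n;
}

/* GET-style read: copy up to `count` bytes, EOL or not. */
static int rb_get(unsigned char *out, int count)
{
    int n = 0;
    while (rb_tail != rb_head && n < count) {
        out[n++] = rb[rb_tail % RB_SIZE];
        rb_tail++;
    }
    return n;
}
```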

 

Of course, PRINT #2, "HI THERE" would send an appropriate packet out.

 

There are some XIO calls to do things like get/set the packet buffers, port and IP information, get a channel structure pointer, etc. I'm not super sure how to do the TCP listen/accept thing in CIO at this point; it might have to be implemented via XIO.


Really making nice progress now. I have normal CIO calls integrated into the driver. BASIC shows a FRE(0) of 30074 with the driver running. Below is the BASIC code to have a conversation with another computer; in this case it was a small C# program running on my PC.

 

 

10 OPEN #2,12,0,"I1:T:1234:13000:192.168.2.2"
20 DIM A$(255)
30 INPUT #2,A$
40 PRINT A$
50 INPUT A$
60 PRINT #2,A$
70 GOTO 30

 

That's all it takes. Next, I'm moving on to doing TCP listens and accepts, so the other end could be an Atari as well.


Well, I got sidetracked into putting together a small telnet program. Here it is in C. It works well enough to do basic connections; through a Linux telnet session you can run commands, etc. No support for colors or cursor control or other fanciness, but it shows what the driver is capable of in a short program. I am going to convert this to BASIC, and that will be the final proof of functionality for the TCP client side of things.

 

 

#include <channels.h>
#include <errors.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <conio.h>
#include "includes\command.h"
#include "includes\xio.h"
//command defines
#define DO 0xfd
#define WONT 0xfc
#define WILL 0xfb
#define DONT 0xfe
#define CMD 0xff
#define CMD_ECHO 1
#define CMD_WINDOW_SIZE 31
extern struct iocb_struct * reg;
unsigned char ret;
char iac[32];
int len;
unsigned char c;
unsigned char done=0;
unsigned char logged_on=0;
unsigned char index=0;
unsigned char naws[12] = {255, 251, 31, 255, 250, 31, 0, 40, 0, 24, 255, 240}; /* IAC WILL NAWS, IAC SB NAWS 0 40 0 24 IAC SE: report a 40x24 window */
void read(int count);
#define MODE_CHAR 0
#define MODE_IAC  1
#define MODE_SUB  2
unsigned char *scr;
unsigned char mode=MODE_CHAR;
unsigned char cr=0x0d;
unsigned char echo=0;
void showtick(char col)
{
  unsigned char sb;
  scr=(unsigned char *)(*((unsigned int *)88));
  sb=*(scr+col);
  sb++;
  *(scr+col)=sb;
}
void print_ip(char *message, unsigned char ip[4])
{
  cprintf("%s %i.%i.%i.%i\r\n",message, ip[0],ip[1],ip[2],ip[3]);
}
void open(char *ip, char *port)
{
  sprintf(iac,"I1:T:0:%s:%s",port,ip);
  XIO_CLOSE(2);
  printf("connecting to %s...\n",iac);
  ret=XIO_OPEN(2,12,0,iac);
  if ( ret != 1)
  {
    printf("open failed with error: %i\n%s\n",ret,aip_errmsg(ret));
    XIO_CLOSE(2);
    exit(1);
  }
}
void _do()
{
  if (iac[2] == CMD_WINDOW_SIZE)
  {
    XIO_PUT(2,&naws[0],12);
    return;
  }
  iac[1]=WONT;
  XIO_PUT(2,&iac[0],3);
}
void _dont()
{
}
void _will()
{
  iac[1]=DONT;
  XIO_PUT(2,&iac[0],3);
}
void _wont()
{
}
void process_IAC()
{
  switch(iac[1])
  {
    case DO:
    {
      _do();
      break;
    }
    case DONT:
      {
        _dont();
        break;
      }
    case WILL:
      {
        _will();
        break;
      }
    case WONT:
      {
        _wont();
        break;
      }
    default:
      {
        printf("unhandled : %x %x %x\n",iac[0],iac[1],iac[2]);
        break;
      }
  }
}
char intranslate(char ch)
{
  if (ch == 0x0A)
  {
    return 0x9B;
  }
  if (ch==0x7E)
  {
    return 0x20;
  }
  //if ( (ch < 32 ) || (ch > 122))
  //{
  //  return '.';
  //}
  return ch;
}
char outtranslate(char ch)
{
  if ( ch == 0x9B )
  {
    return 0x0A;
  }
  return ch;
}

void send(char ch)
{
  if (ch==0x0a)
  {
    XIO_PUT(2,&cr,1);
  }
  XIO_PUT(2,&ch,1);
}
void emit(char ch)
{
  printf("%c",ch);
}
void process()
{
  while ( !done )
  {
    //wait for a character
    ret=xio(AIP_DATA_AVAILABLE,2,12,0,(void *)&len,0);
    if ( ret != 1) break;
    len=*((int *)&reg->ICAX5);
    if ( len > 0 )
    {
      //read 1 char
      ret=XIO_GET(2,&c,1);
      if ( ret != 1) break;
      switch (mode)
      {
        case MODE_IAC:
        {
          iac[index++]=c;
          if ( index==3 )
          {
            process_IAC();
            mode=MODE_CHAR;
          }
          break;
        }
        case MODE_SUB:
          {
            break;
          }
        case MODE_CHAR:
          {
            if (c==255)
            {
              mode=MODE_IAC;
              index=0;
              iac[index++]=c;
            }else{
              emit(intranslate(c));
            }
            break;
          }
      }
    }
    if (kbhit())
    {
      c=cgetc();
      send(outtranslate(c));
      if ( echo )
      {
        emit(c);
      }
    }
  }
}
int main(int argc,char *argv[])
{
  struct aip_struct *channel=0;
  unsigned int timeout=100;
  reg=get_iocb(2);
  if ( argc < 3 )
  {
    xio_print(0,"call telnet with: telnet <ip> <port> <echo|noecho>\n",255);
    return 1;
  }
  if ( argc ==4 )
  {
    if ((argv[3][0]=='e') || (argv[3][0]=='E'))
    {
      echo=1;
    }
    if ((argv[3][0]=='n') || (argv[3][0]=='N'))
    {
      echo=0;
    }
  }
  open(argv[1],argv[2]);
  //open("192.168.2.21","13000");
  memset(iac,0,32);
  process();
  XIO_CLOSE(2);
  return 0;
}

