|
Post by Robert on Jan 14, 2024 10:04:41 GMT -5
I used a simpler macro, and tested it on itself (on the test.macro file) with BNDS MAX on the status bar:
' test.macro
halt ("Get_Rbound", Get_Rbound)
When I run this, I get a message of "Get_Rbound 31", and the right ")" on the HALT statement is in column 31.
As Stefan noted, Help for this function says, "If the right bounds is MAX, 0 (zero) is returned" but that's not what happens.
R
|
|
|
Post by Robert on Jan 13, 2024 23:02:24 GMT -5
George, as you know all too well, much of your time is spent debugging - in particular, debugging edit scenarios we encounter as users. Oftentimes, we describe only generally the steps that we took to encounter a particular problem. But sometimes we remember the steps inaccurately. It can be hard to meticulously test something one step at a time and write down somewhere (on paper, in a Notepad window, etc.) what we did at each and every step. Those inevitable inaccuracies only make your job harder.
The idea here is to implement a "state" in which the editor runs in "command log mode". Here is the basic outline:
1. A new keyboard primitive of (CmdLog) would be made.
2. (CmdLog) acts to toggle logging mode on or off. By default, command logging is off. When logging mode is on, all commands issued by the user are written to a logging file. A possible naming convention might be:
SpfCmdLog.yyyymmdd.hhmm.txt
3. When logging mode is off, and (CmdLog) is issued, a new command log file is created, in the main SPFLite home directory, with a name that has a date/time stamp as part of the name.
4. Every command issued by the user, whether primary, line or keyboard primitive, would be written to the log file, when logging is on.
5. A possible logging format might be:
hhmmss nnnnn t command
where:
hhmmss is the time the command was issued
nnnnn is a sequence number, starting at 00001
t is a code indicating the command type: P/L/K for primary, line or keyboard primitive
command is the command that was issued
6. When command logging is in effect, every effort is made to ensure that the logging file survives a crash. To do that:
(a) every write to the logging file is "flushed" to ensure it is written to disk every time.
(b) the logging line is written before the command is actually issued.
(c) to provide an indication if the command succeeded, the logging line would be written without a CRLF first. Then, when the command finished, a "closure" mark would be written. For laughs, let's say the closure mark was "[ok]" + $CRLF. That closure mark would be (separately) written and flushed to the log file. So there would be evidence whether the last command logged to the file actually finished.
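The outline above (steps 2 through 6) could be sketched roughly as follows. This is Python purely for illustration - SPFLite is not written in Python - and the file name pattern, the "[ok]" closure mark, and the method names are taken from, or assumed to match, the proposal above:

```python
import os
import time

CLOSURE_MARK = "[ok]\r\n"  # the hypothetical closure mark from item 6(c)

class CmdLog:
    """Toggleable command log that tries to survive a crash (items 2-6)."""

    def __init__(self, home_dir="."):
        self.home_dir = home_dir
        self.f = None          # None means logging is off (the default)
        self.seq = 0

    def toggle(self):
        if self.f:                      # logging on -> turn it off
            self.f.close()
            self.f = None
        else:                           # logging off -> start a new file
            name = time.strftime("SpfCmdLog.%Y%m%d.%H%M.txt")
            # newline="" so the explicit \r\n in CLOSURE_MARK is kept as-is
            self.f = open(os.path.join(self.home_dir, name), "w", newline="")
            self.seq = 0

    def before(self, cmd_type, command):
        """Write the log line WITHOUT a line ending, then force it to disk."""
        self.seq += 1
        line = "%s %05d %s %s" % (time.strftime("%H%M%S"), self.seq,
                                  cmd_type, command)
        self.f.write(line)
        self.f.flush()
        os.fsync(self.f.fileno())       # push past the OS cache (item 6(a))

    def after(self):
        """Command finished: append the closure mark and flush again."""
        self.f.write(CLOSURE_MARK)
        self.f.flush()
        os.fsync(self.f.fileno())
```

The calling sequence would be before() / execute the command / after(), so a log whose last line has no "[ok]" terminator is evidence the editor died mid-command.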
In the event of a crash, a user would send you the command log file, to help you reproduce the steps that led up to the crash event.
Why do this, when there is a command trace in the crash file? Because the crash file has a limited number of entries. Unless the cause of the crash involved only a few steps, important steps leading up to the crash would fall off the trace record and be lost.
Obviously, the hardest part of this idea is finding convenient points to capture each command, just before and after it is executed. The better a job you can do of logging things, the more useful the log would be. Ideally, if you can find really good places to insert the logging calls, it would minimize the code impact of this feature. And perhaps the logging doesn't have to perfectly detect absolutely everything, just what is important.
As this is simply an idea, I don't claim that every possible nuance has been considered here. That would come if/when you choose to implement it.
Comments invited.
R
|
|
|
Post by Robert on Jan 13, 2024 22:03:48 GMT -5
Just tried beta 24.012. Again got UNDO file names invalid message. "Bad Fn" displays as null string.
Sigh.
===> More information. I continued editing the Clip file, ignoring the error message. I inserted a blank line and deleted it, just to force the error message again. This time, the "Bad Fn" was NOT blank. Instead, it showed as "BYELORUSSIAN-UKRAINIAN I". This is the name of a Cyrillic letter, and appears on the first screen of the edit display. It thus appears that the main data is corrupting the UNDO file name area. If you are interested in this example, let me know and I can try to ship you the data and an exact editing sequence to cause this.
P.S. I saved the Clip session, closed it and then reopened it. I no longer got the error messages.
R
|
|
|
Post by Robert on Jan 11, 2024 13:20:59 GMT -5
George, re. my wild guess of 3 days ago: "Mueh, you may have provided the missing link, which is the number of lines. Perhaps George allocated a fixed number of undo lines for this, assuming no one would need more than (say) 9999 of them, and then that fixed array was overrun? At least it's now a possible line of inquiry on it."
Oh boy, I'd say, time to reconsider any other "initial allocations" ...
R
|
|
|
Post by Robert on Jan 11, 2024 11:45:38 GMT -5
George, I think Mueh still has the old FCLIP macro that has (EraseEOF) instead of (EraseEOL). Not sure if that matters.
R
|
|
|
Post by Robert on Jan 10, 2024 18:29:53 GMT -5
You know, this issue about the UNDO file name being corrupted HAS been there, and we HAVE seen memory corruption errors, attributable or not, for some time. But, it's been so sporadic, you've never been able to pin it down.
I sure hope this GET$$ thing is the answer. That would be a huge relief.
R
|
|
|
Post by Robert on Jan 10, 2024 18:27:32 GMT -5
It seems like the SETUNDO level of the *COPIED* file has no bearing on the file you are copying it INTO.
So, after the copy, does an UNDO snapshot for the CLIP Edit session get taken? Yes, it should.
R
|
|
|
Post by Robert on Jan 10, 2024 16:27:32 GMT -5
George, you might benefit from doing a scan for GET$$ in your code, if you haven't done so already.
As for CLIP vs. a profile, if a COPY command is issued in a Clip edit session, do you apply the Profile SOURCE when deciding how to copy the data? Seems like you'd have to, because if the data was UTF-8 or UTF-16, you'd have to translate it to ANSI before the data was copied in.
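The translation described here could be sketched like this - a Python illustration only, where the use of cp1252 as "ANSI" and '?' replacement for unmappable characters are my assumptions, not SPFLite's actual behavior:

```python
def to_ansi(raw: bytes, source: str) -> bytes:
    """Decode file bytes per the profile SOURCE, then re-encode as ANSI.

    Characters with no ANSI equivalent become '?' here; a real editor
    would need its own policy for them.
    """
    if source == "UTF-8":
        text = raw.decode("utf-8")
    elif source == "UTF-16":
        text = raw.decode("utf-16")   # honors the BOM if one is present
    else:                             # already ANSI, nothing to do
        return raw
    return text.encode("cp1252", errors="replace")

# Example: UTF-16 bytes for "ABC" (8 bytes with BOM) become 3 ANSI bytes
ansi = to_ansi("ABC".encode("utf-16"), "UTF-16")
```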
R
|
|
|
Post by Robert on Jan 10, 2024 13:31:36 GMT -5
WOW, that sure looks like you found it. LOF always returns a byte count, but GET$ takes an ANSI char count and GET$$ takes a WIDE char count. I suspect the first half had the real data and the second half had FFFF because there was no more data to assign to the bufw array. It has to put SOMETHING in it, and PB chose that, because FFFF is not a meaningful Unicode character.
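The mismatch is easy to demonstrate with a toy example (Python used here purely to show the arithmetic; in PB itself, GET$$ takes a count of 16-bit characters while LOF reports bytes):

```python
# A UTF-16 "file": each character occupies 2 bytes.
# Using the LE form without a BOM keeps the arithmetic simple.
data = "HELLO".encode("utf-16-le")

byte_count = len(data)          # what LOF would report: 10
wide_chars = byte_count // 2    # what a GET$$-style call actually wants: 5

# Passing the byte count as a wide-char count asks for 10 wide chars,
# i.e. 20 bytes -- twice the real data. The first half is the file's
# content; the second half is filler, which PB reportedly pads with
# &HFFFF (a value that is not a meaningful Unicode character).
```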
I am puzzled as to why this data is Unicode in the first place. What is going on here? I thought internally all data was ANSI. Could you explain this?
I am also wondering about the PARSE call. Parsing thousands of lines ending in CRLF and allocating many, many strings could be very time consuming. Aren't there 'string builder' calls that would run faster?
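On the allocation point: repeatedly concatenating immutable strings copies the growing result each time, roughly O(n^2), while collecting pieces and joining once is O(n). A Python illustration of the same idea (whether PB offers an equivalent "string builder" facility, I can't confirm):

```python
# Naive approach: each += may copy the whole partial result.
def build_slow(lines):
    out = ""
    for ln in lines:
        out += ln + "\r\n"
    return out

# Builder approach: collect pieces, copy each byte once at the end.
def build_fast(lines):
    return "".join(ln + "\r\n" for ln in lines)

sample = ["line %d" % i for i in range(1000)]
```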
R
|
|
|
Post by Robert on Jan 9, 2024 16:39:03 GMT -5
Perhaps file size is only indirectly related. Could it be that the larger a file is, the more time it takes, and so it's really the size of the workload and how long it takes to finish? And not some arbitrary line count? Maybe MINLEN acts to make lines longer, so that makes the I/O time longer? I don't use MINLEN, but I still have gotten issues.
I believe the underlying issue is time, not size. It's probably system dependent, too, so a fast machine with only SSD's has fewer of these hiccups. Just kicking ideas around ...
R
|
|
|
Post by Robert on Jan 8, 2024 19:17:48 GMT -5
Right, I know that DELETE ALL XXX in a large file is slow, but what it's doing now is odd, where the command disappears first, then a delay, then the command finishes. I think in some ways, what you had originally, with some kind of semaphore, might have actually been (mostly) correct. Maybe what's needed is a test to make sure the subtask's work is actually done before simply assuming it is. Perhaps some kind of completion flag.
Idea:
COMPLETED = FALSE
START_TASK()
LOOP
  WAIT_TASK(250_MS)
  IF COMPLETED THEN EXIT_LOOP
END LOOP
Is this possible? Would it help?
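For what it's worth, the pseudocode maps onto a standard wait-on-event pattern; a Python sketch (all names here are mine, and note the caller still blocks until the subtask finishes, which is the weakness the follow-up note points out):

```python
import threading

def run_with_completion_flag(work):
    """Start `work` on a subtask, then wait on an explicit completion
    event instead of assuming the subtask is done after a fixed delay."""
    done = threading.Event()            # plays the role of COMPLETED

    def task():
        try:
            work()
        finally:
            done.set()                  # COMPLETED = TRUE, even on error

    threading.Thread(target=task).start()

    # The LOOP from the pseudocode: wake every 250 ms, exit when done.
    while not done.wait(timeout=0.25):
        pass                            # a real editor might pump messages here
```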
R
==> Oops, I see it wouldn't help. If you were in a wait loop, the main process still would be waiting on the subtask to finish. My bad, this won't fix it. Sigh.
|
|
|
Post by Robert on Jan 8, 2024 13:56:46 GMT -5
George, I just tested the 24.008 beta. I did my FCLIP macro against a file of 14,000 lines. Observations:
1. No crash (so far)
2. All primary commands run much slower. If I issue a command like DELETE "ABC" ALL, the command first disappears, then there is about a 2 second wait, then the delete is performed. This is new behavior.
3. I issued a Power Type command on this file (it's a CLIP session), and when I was done with it, I got the message again: "The Files name(s) in the UNDO control area do not appear to be valid. Bad Fn: <nothing>. Current UNDO SAVE function will be skipped."
R
|
|
|
Post by Robert on Jan 8, 2024 12:08:35 GMT -5
Mueh, you may have provided the missing link, which is the number of lines. Perhaps George allocated a fixed number of undo lines for this, assuming no one would need more than (say) 9999 of them, and then that fixed array was overrun? At least it's now a possible line of inquiry on it.
R
|
|
|
Post by Robert on Jan 7, 2024 19:08:42 GMT -5
Benjamin, if you have not defined and enabled tabs, then BackTab has no other "stopping point" other than the beginning of the previous line. That is normal behavior.
R
|
|
|
Post by Robert on Jan 7, 2024 13:30:18 GMT -5
Oh yeah, you guys in Europe have the 220/230V power, in the US it's 110/115V. I wonder why we are 1/2 what you are. But even here, 55V would be a really bad brownout. Check your household appliances. Brownouts cause circuits to draw more current to make up for the low voltage (due to electrical theory and Ohm's law), and stuff like motors can burn out when they draw too much current.
Be careful with that kitesurfing. That's a young man's sport.
R
|
|