Had a quick look. Seems like an interesting tool. I know there are many companies in Europe that, for regulatory and compliance reasons, cannot let their data leave Europe. As far as I can tell from the documentation, the data is stored in the US. It would be nice to have an option to store the data in Europe, or to not upload any data at all and instead have the option to export an artifact that can be processed later. But perhaps that defeats the business model.
I am building a version of this that lets you store the data on your computer; you can export it and send it to your team (manually), so you can use any data hosting you want. Let me know if you're interested!
I'd also be interested, at least for internal bug reports and such. From what I can see, jam.dev collects what I already collect for dev teams anyhow, just faster and more comprehensively.
But this would possibly move customer data in requests and screenshots out of our control, and the whole hassle of adding subcontractors and managing data processing agreements has, sadly, taken quite a few good monitoring and debugging tools out of discussions.
Yeah, I've been using `docker/setup-qemu-action` as well to run on non-x86-64 architectures for Linux [1]. But since it's Docker-based it's Linux only and doesn't support other operating systems.
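For reference, a minimal sketch of that setup. The job name, image, and platform are just examples, not anything from the action's docs:

```yaml
# Sketch of a workflow that uses QEMU to run an arm64 container
# on GitHub's x86-64 Linux runners.
jobs:
  test-arm64:
    runs-on: ubuntu-latest
    steps:
      # Registers QEMU binfmt handlers so foreign-arch binaries run transparently
      - uses: docker/setup-qemu-action@v3
      - name: Run a command under arm64 emulation
        run: docker run --rm --platform linux/arm64 alpine uname -m
```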
GitHub has Apple Silicon runners on the roadmap, but IIRC not until the end of this year. They already support Apple Silicon for self-hosted runners [2].
You can use this GitHub action [1] to get SSH access to a GitHub runner. If you need access to the GUI, you should be able to use ngrok, perhaps with this GitHub action [2]. I've tried tmate with both Linux and macOS runners, but ngrok only with Linux runners.
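As an example, assuming the tmate action in question is `mxschmitt/action-tmate` (a hedged guess on my part), a debug setup can be as simple as:

```yaml
# Sketch: drop into an SSH session on the runner when a step fails.
jobs:
  build:
    runs-on: macos-latest
    steps:
      - run: make test
      - name: Open SSH session for debugging
        if: failure()          # only when the previous step failed
        uses: mxschmitt/action-tmate@v3
```

The action prints an `ssh` connection string into the job log, and the job stays alive while you're connected.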
CircleCI supports macOS runners and has native support for rerunning failed builds with SSH access, so just set up a job that always fails.
The system linker strips debug info from the executable. Instead, the debug info stays in the object files, and references to those object files are recorded in the executable; the debugger follows the references and reads the debug info from the object files. I have no experience with Rust, but in my opinion this should be the default behavior for debug builds: there's no need to generate a dSYM file. A dSYM file is for shipping debug info with your final release build to customers; it isn't needed during regular development workflows.
It's possible to get the linker to keep the debug info by tweaking the section attributes. By default, all debug-info-related sections have the `S_ATTR_DEBUG` flag; if that is replaced with the `S_REGULAR` flag, the linker will keep those sections. The DMD D compiler does this for the `__debug_line` section [1][2][3], which allows uncaught exceptions to print a stack trace with filenames and line numbers. Of course, DMD uses a custom backend, so this change was easy; Rust, which relies on LLVM, would probably need a fork.
> There's no need for dSYM during regular development workflows.
I'm not sure that's true. I wasn't getting line numbers in backtraces in a Rust program I developed because I had copied the executable to another directory. I had to add a symlink to the dSYM directory to make it work.
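A minimal sketch of that workaround; the paths (`target/debug/myapp`, `bin/`) are hypothetical, substitute your own build and install directories:

```shell
# The debugger looks for the dSYM bundle next to the executable,
# so after copying the binary elsewhere, link the bundle back to
# the build directory's copy.
mkdir -p bin
cp target/debug/myapp bin/myapp
ln -s "$PWD/target/debug/myapp.dSYM" bin/myapp.dSYM
```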
If you want crazy fast build times, then D is your best bet [1]. D's compile times are faster than most other languages' (I'm not counting Vox; it's too experimental).
This is very common in D, and D doesn't have any macro system; it uses regular syntax (more or less). D was doing this way before Zig existed and before C++ had constexpr. A simple example of platform-specific members:
struct Socket
{
    version (Posix)
        int handle;
    else version (Windows)
        SOCKET handle;
    else
        static assert(false, "Unsupported platform");
}
Another example is the checked numeric type [1] in the D standard library. It takes two type parameters: the underlying type and a "hook" type. The hook type lets the user decide what should happen in various error conditions, like overflow, division by zero and so on. The implementation is written in a Design by Introspection style: it inspects the hook type and adapts its implementation depending on which hooks the hook type provides. If the hook type implements the hook for overflow, that hook will be called; otherwise it falls back to some default behavior.
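A minimal sketch of that pattern, not the actual standard library implementation; `MyChecked` and `Saturate` are made-up names, and only `core.checkedint.adds` is a real druntime function:

```d
import core.checkedint : adds; // overflow-checked add from druntime

// Simplified Design by Introspection: the wrapper inspects Hook at
// compile time and adapts its behavior to what Hook provides.
struct MyChecked(T, Hook)
{
    T value;

    MyChecked opBinary(string op : "+")(MyChecked rhs)
    {
        bool overflowed;
        T result = adds(value, rhs.value, overflowed);
        if (overflowed)
        {
            // If Hook has an onOverflow member, call it;
            // otherwise fall back to a default (abort here).
            static if (__traits(hasMember, Hook, "onOverflow"))
                result = Hook.onOverflow(value, rhs.value);
            else
                assert(false, "overflow");
        }
        return MyChecked(result);
    }
}

// A hook that saturates instead of wrapping or aborting.
struct Saturate
{
    static int onOverflow(int a, int b) { return int.max; }
}

void main()
{
    auto a = MyChecked!(int, Saturate)(int.max);
    auto b = MyChecked!(int, Saturate)(1);
    assert((a + b).value == int.max); // saturated, no wraparound
}
```

Swapping `Saturate` for a hook without `onOverflow` switches the wrapper to the default abort behavior, with no changes to `MyChecked` itself.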
In D, all semantic analysis is performed eagerly (I think), except for templates. D also supports CTFE (Compile-Time Function Evaluation), `static if`, `static assert`, and a few other language constructs that are evaluated at compile time.
I experimented a bit with how D's documentation generator behaves with these language constructs. Here's a snippet of some D code:
/// some struct description
struct Foo(T) // template
{
    /// some alias description
    alias Result = int;

    /// some method description
    Result foo()
    {
        string a = 3; // this does not normally compile
    }

    static if (is(T == int)) // evaluated at compile time
    {
        /// some method description 2
        void bar() {}
    }

    version (Windows)
    {
        /// some method description for Windows
        void bazWindows() {}
    }
    else version (Posix)
    {
        /// some method description for Posix
        void bazPosix() {}
    }
    else
        static assert(false, "Unsupported platform"); // evaluated at compile time
}
When generating documentation for the above code, if `Foo` has not been instantiated, the generated docs will include `Foo`, `Result`, `foo`, `bar`, and `bazWindows`, regardless of platform. The return type of `foo` will be `Result` and not `int`. This clearly shows that the D compiler doesn't perform semantic analysis when generating documentation. When doing a regular compilation where `Foo` is instantiated, `bar` will only be included if `T` is an `int`, `bazWindows` will only be compiled on Windows, and `bazPosix` will only be compiled on Posix platforms.
Looking at the implementation, the compiler generates the docs after semantic analysis and only if there are no errors. But if `Foo` is never instantiated, no errors have occurred, so it will continue to generate the docs.
On the other hand, if `Foo` is instantiated (and compiles), the compiler generates docs from the AST after semantic analysis has been performed, so `bazWindows` will only be included if the docs were generated on Windows and `bazPosix` only on Posix platforms. What's weird, though, is that `bar` seems to be included regardless of what type `T` is.