
  • hx-delaunator

    A Haxe port of Delaunator, an incredibly fast JavaScript library for
    Delaunay triangulation of 2D points.
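
    For readers unfamiliar with the problem, a Delaunay triangulation connects 2D points into triangles so that no point falls inside any triangle's circumcircle. Below is a small Python/SciPy illustration of the point distributions named in the benchmarks that follow (purely conceptual; it uses SciPy, not hx-delaunator or Delaunator):

    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(42)
    n = 100_000

    # Point sets mirroring the benchmark categories below.
    uniform = rng.uniform(0.0, 1000.0, size=(n, 2))   # "uniform"
    gauss = rng.normal(0.0, 100.0, size=(n, 2))       # "gauss"
    side = int(n ** 0.5)
    gx, gy = np.meshgrid(np.arange(side), np.arange(side))
    grid = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)
    # Tiny jitter avoids exact cocircular degeneracies on the grid.
    grid += rng.normal(0.0, 1e-9, size=grid.shape)

    for name, pts in (("uniform", uniform), ("gauss", gauss), ("grid", grid)):
        tri = Delaunay(pts)
        print(f"{name}: {len(tri.simplices)} triangles")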

    Performance Table

    Benchmark results of the Haxe cross-compilation targets against the original Delaunator JS library (commit 4e6ecd4); 1M = 1 million points.

    Platform      uniform 100k  gauss 100k  grid 100k  degen 100k  uniform 1M  gauss 1M  grid 1M  degen 1M
    Original Lib  79ms          74ms        80ms       33ms        1.16s       1.17s     1.02s    0.34s
    Haxe JS       69ms          65ms        67ms       37ms        1.06s       1.05s     0.93s    0.68s
    Haxe C++      72ms          73ms        139ms      94ms        1.10s       1.06s     0.40s    0.62s
    Haxe C#       80ms          79ms        71ms       56ms        1.16s       1.15s     0.95s    0.83s
    Haxe Java     118ms         76ms        66ms       42ms        1.55s       1.41s     1.15s    0.94s
    HashLink C    94ms          95ms        86ms       69ms        1.38s       1.32s     1.16s    1.15s
    HashLink JIT  203ms         197ms       207ms      146ms       2.63s       2.74s     2.52s    2.63s

    Performance Comparison Chart

    (Charts comparing the 100k and 1 million point results are available in the original repository.)

    Keep in mind

    All of these benchmark results depend on the hardware and the phase of the moon.
    This comparison was done just for fun; all of these Haxe targets, including the original library, are more than fast enough for real-world use.

    Visit original content creator repository
    https://github.com/posxposy/hx-delaunator

  • progpjs

    ProgpJS v2

    Introduction

    What is ProgpJS?

    ProgpJS is a fast JavaScript engine for the Go language, using V8 as its backend.

    ProgpJS is “fast and fast”: fast to execute and fast to develop with, thanks to a code generator that handles
    the technical details for you. You write a plain Go function, say under what name it must be exposed to JavaScript,
    and that’s all!
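
    For illustration only, here is a hypothetical sketch of that workflow. The registration calls below are invented names, not the real ProgpJS API; see the documentation at https://progpjs.dev for actual usage:

    package main

    // Hypothetical sketch: the commented calls are NOT the real ProgpJS API.
    // The point is the workflow: write a plain Go function, declare the name
    // it should be exposed under, and let the code generator do the rest.

    // Add is an ordinary Go function with no engine-specific types.
    func Add(a int, b int) int {
        return a + b
    }

    func main() {
        // Something along these lines (invented names):
        //   engine.RegisterFunction("add", Add)
        //   engine.ExecuteScript(`console.log(add(1, 2))`)
    }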

    Benchmarks show that it’s way faster than Node.js and on par with BunJS and DenoJS. But what is great about ProgpJS
    isn’t only its speed, but its capacity to easily mix Go code, C++ code and JavaScript code, while being very simple to use.
    With ProgpJS there is no technical complexity to deal with: ProgpJS takes charge of all the difficulties for you!

    ProgpJS comes with a Node.js compatibility layer for projects that need it. It is at an early stage, but it can be
    useful for those who need it. The goal of ProgpJS isn’t to be compatible with Node.js, mainly because ProgpJS’s goal
    is to make JavaScript and Go interact; it’s not a Node.js replacement. With Node.js your project is 100% JavaScript,
    while with ProgpJS you code your tools in Go (for its high speed) and you use your components through JavaScript.

    More about ProgpJS

    You can visit the ProgpJS website to learn more about the project:
    https://progpjs.dev

    Visit original content creator repository
    https://github.com/progpjs/progpjs

  • prodos-drivers

    ProDOS Drivers

    Build with ca65

    What are ProDOS “drivers”?

    The ProDOS operating system for the Apple II executes the first .SYSTEM file found in the boot directory on startup. A common pattern is to have the boot directory contain several “driver” files that customize ProDOS by installing drivers for hardware or by modifying specific parts of the operating system. These include:

    • Real-time Clock drivers (e.g. No-Slot Clock, The Cricket!, AE DClock, etc)
      • In ProDOS 1.x, 2.0 and 2.4 the Thunderclock driver is built-in.
    • RAM Disk drivers (e.g. RamWorks)
      • In ProDOS 1.x, 2.0 and 2.4 only a 64K driver for /RAM is built-in.
    • Quit dispatcher/selector (BYE routines)
      • In ProDOS 1.0 and later, a 40-column-friendly selector prompts for a prefix and then a path: ENTER PREFIX (PRESS "RETURN" TO ACCEPT)
      • In ProDOS 1.9 and 2.0.x, on 80-column systems, a menu-driven selector is installed instead.
      • In ProDOS 2.4.x Bitsy Bye is built-in.

    Early versions of these drivers would often invoke a specific file on completion, sometimes user-configurable. The best versions of these drivers simply execute the following .SYSTEM file, although this is non-trivial code and often did not work with network drives.

    This repository collects several drivers and uses common code to chain to the next .SYSTEM file, supporting network drives.

    What is present here?

    This repo includes the following drivers/modifications:

    • Real-time Clock drivers
      • No-Slot Clock
      • The Cricket!
      • Applied Engineering DClock
      • ROMX Real-Time Clock
      • FujiNet Clock
      • A “jumbo” driver that includes all of the above (just called CLOCK.SYSTEM)
    • Accelerators
      • ZIP CHIP configuration (slow on speaker access, make slots 1-4 fast)
    • RAM Disk drivers
      • RamWorks Driver by Glen E. Bredon
    • Quit dispatcher/selector (BYE routines)
      • 40-column Selector (from ProDOS)
      • 80-column menu-driven Selector (from ProDOS 1.9 and 2.x)
      • Bird’s Better Bye (a 40-column menu-driven selector)
      • Buh-Bye (an enhanced version of the ProDOS 80-column, menu-driven selector)
    • Text color themes
      • These set the IIgs (or VidHD) text/background/border colors

    In addition, QUIT.SYSTEM is present, which isn’t a driver but immediately invokes the QUIT handler (a.k.a. the program selector). This will happen automatically if the last driver can’t find another .SYSTEM file, but QUIT.SYSTEM can be used to stop the chain if you have other .SYSTEM files in your root directory.

    If you don’t have a real-time clock, NOCLOCK.SYSTEM will prompt you for the date/time on boot and set the ProDOS date/time, which will be used to record file creation/modification times.

    There’s also PAUSE.SYSTEM, which just waits for a fraction of a second before invoking the next driver file, in case the log messages from the other installers go by too fast for your taste, and HOME.SYSTEM, in case you want the log messages to start off with a blank screen.

    Non-drivers that are included:

    • The DATE binary file can be BRUN (or just -DATE) to show the current ProDOS Date/Time, to verify that the clock driver is working.
    • Some utilities for The Cricket! clock are also included.

    How do you use these?

    The intent is that you use a tool like Copy II Plus or Apple II DeskTop to copy and arrange the SYSTEM files on your boot disk as you see fit. A boot disk image catalog that is used on multiple different hardware configurations might include:

    • PRODOS – the operating system, e.g. ProDOS 2.4
    • HOME.SYSTEM – start off with a blank screen
    • NS.CLOCK.SYSTEM – install No-Slot clock driver, if present
    • ROMXRTC.SYSTEM – install ROMX clock driver, if present
    • FN.CLOCK.SYSTEM – install FujiNet clock driver, if present
    • DCLOCK.SYSTEM – install DClock clock driver, if present
    • CRICKET.SYSTEM – install The Cricket! clock driver, if present
    • ZIPCHIP.SYSTEM – slow the ZIP CHIP on speaker access, if present
    • RAM.DRV.SYSTEM – install RamWorks RAM disk driver, if present
    • BUHBYE.SYSTEM – install a customized Quit handler to replace the built-in one
    • PAUSE.SYSTEM – pause for a moment, so that you can inspect the output of the above
    • QUIT.SYSTEM – invoke the Quit handler immediately, as a program selector
    • BASIC.SYSTEM – which will not be automatically invoked, but is available to invoke manually

    Alternatively, you might want to install some drivers and then immediately launch into BASIC. In that case, put BASIC.SYSTEM after the drivers, in place of QUIT.SYSTEM.

    Alternate Approach

    If you want to keep your volume directory tidier, consider using SETUP.SYSTEM instead.

    Building

    Fetch, build, and install cc65:

    git clone https://github.com/cc65/cc65
    make -C cc65 && make -C cc65 avail
    

    Fetch and build this repo:

    git clone https://github.com/a2stuff/prodos-drivers
    cd prodos-drivers
    make
    

    To make a disk image, fetch, build and install Cadius:

    git clone https://github.com/mach-kernel/cadius
    make -C cadius && make -C cadius install
    

    Then you can:

    cd prodos-drivers
    make && make package
    

    This will produce prodos-drivers.po, a disk image for use with emulators or tools like ADTPro.

    Notes:

    • Specify LOG_SUCCESS=0 and/or LOG_FAILURE=0 (e.g. make LOG_SUCCESS=0) to build with driver success and/or error logging suppressed.
    Visit original content creator repository
    https://github.com/a2stuff/prodos-drivers
  • appveyor-ci

    appveyor-ci: Tools for Using Conda in AppVeyor CI

    This collection of scripts for AppVeyor CI can be used with the following appveyor.yml file:

    platform:
      - x86
      - x64
    
    environment:
      matrix:
        # Add environment variables here to control the AppVeyor CI build
    
    install:
      - git clone https://github.com/StatisKit/appveyor-ci.git appveyor-ci
      - cd appveyor-ci
      - call install.bat
    
    before_build:
      - call before_build.bat
    
    build_script:
      - call build_script.bat
    
    after_build:
      - call after_build.bat
    
    deploy:
      provider: Script
      on:
        branch: master
    
    before_deploy:
      - call before_deploy.bat
    
    deploy_script:
      - call deploy_script.bat
    
    after_deploy:
      - call after_deploy.bat
    
    on_success:
      - call on_success.bat
    
    on_failure:
      - call on_failure.bat
    
    on_finish:
      - call on_finish.bat

    In the matrix section of the environment section, you can use the following environment variables to control the AppVeyor CI build:

    • CONDA_VERSION equal to 2 (default) or 3.
      Controls the Conda version used for the build.

    If you want to:

    • Build a Conda recipe, you should define these environment variables:
      • CONDA_RECIPE.
        The path to the Conda recipe to build.
        This path must be relative to the repository root.
      • ANACONDA_LOGIN (optional).
        The username used to connect to Anaconda Cloud in order to upload the built Conda recipe.
      • ANACONDA_PASSWORD (optional).
        The username’s password used to connect to Anaconda Cloud in order to upload the built Conda recipe.
      • ANACONDA_OWNER (optional).
        The channel used to upload the built Conda recipe.
        If not given, it is set to the ANACONDA_LOGIN value.
      • ANACONDA_DEPLOY (optional).
        Controls deployment to Anaconda Cloud.
        If set to True (default if ANACONDA_LOGIN is provided), the built Conda recipe will be deployed to Anaconda Cloud.
        If set to False (default if ANACONDA_LOGIN is not provided), the built Conda recipe will not be deployed.
      • ANACONDA_LABEL equal to main by default.
        The label to associate with the Conda recipe deployed to Anaconda Cloud.
    • Run a Jupyter notebook, you should define these environment variables:
      • JUPYTER_NOTEBOOK.
        The path to the Jupyter notebook to run.
        This path must be relative to the repository root.
      • CONDA_ENVIRONMENT.
        The path to the Conda environment to use when running the Jupyter notebook.

    Note

    It is recommended to define the environment variables ANACONDA_LOGIN (resp. DOCKER_LOGIN), ANACONDA_PASSWORD (resp. DOCKER_PASSWORD) and ANACONDA_OWNER (resp. DOCKER_OWNER) in the Settings panel of AppVeyor CI instead of in the appveyor.yml.

    Visit original content creator repository
    https://github.com/StatisKit/appveyor-ci

  • pcm

    Profile Classification Modelling (PCM)

    Profile Classification Modelling is a scientific analysis approach based on the classification of vertical profiles that can be used in a variety of oceanographic problems (front detection, water mass identification, natural region contouring, reference profile selection for validation, etc.).
    It has been developed at Ifremer/LOPS in collaboration with IMT Atlantique since 2015 and has become mature enough (with publications and communications) to be distributed and made publicly available for continuous improvement through community development.

    Ocean dynamics and their three-dimensional structure and variability are so complex that it is very difficult to develop objective and efficient diagnostics of horizontally and vertically coherent oceanic patterns. However, identifying such patterns is crucial to the understanding of interior mechanisms, for instance the integrand giving rise to Global Ocean Indicators (e.g. heat content and sea level rise). We believe that, by using state-of-the-art machine learning algorithms and by building on the increasing availability of ever-larger in situ and numerical model datasets, we can address this challenge in a way that was simply not possible a few years ago. Following this approach, Profile Classification Modelling focuses on the smart identification of vertically coherent patterns and their spatial distribution.

    Python package

    We are currently developing a Python package to make working with PCM easy.
    You can check out the first release on the pyXpcm homepage.
    Otherwise, you can still look at a classic PCM workflow in this notebook.

    Matlab toolbox

    This is the original code for PCM, used in Maze et al. (2017).

    You can get started with the Matlab toolbox by following this wiki page.

    Note that @gmaze will provide help to use it, but won’t maintain the code. Contact us if you want to contribute!

    Visit original content creator repository
    https://github.com/obidam/pcm
  • rcpptoml

    RcppTOML: Rcpp bindings for TOML

    What is TOML?

    TOML is a configuration file grammar for humans. It is easier to read and edit than the alternatives yet arguably more useful as it is strongly typed: values come back as integer, double, (multiline-) character (strings), boolean or Datetime. Moreover, complex nesting and arrays are supported as well.

    For several years, this package used the C++11 library cpptoml written by Chase Geigle. However, as that library is no longer maintained, current versions now use the newer C++17 library toml++ by Mark Gillard.

    Example

    Consider the following example TOML input:

    # This is a TOML document.
    
    title = "TOML Example"
    
    [owner]
    name = "Tom Preston-Werner"
    dob = 1979-05-27T07:32:00-08:00 # First class dates
    
    [database]
    server = "192.168.1.1"
    ports = [ 8001, 8001, 8002 ]
    connection_max = 5000
    enabled = true
    
    [servers]
    
      # Indentation (tabs and/or spaces) is allowed but not required
      [servers.alpha]
      ip = "10.0.0.1"
      dc = "eqdc10"
    
      [servers.beta]
      ip = "10.0.0.2"
      dc = "eqdc10"
    
    [clients]
    data = [ ["gamma", "delta"], [1, 2] ]
    
    # Line breaks are OK when inside arrays
    hosts = [
      "alpha",
      "omega"
    ]

    It can be read in one statement; once parsed, R has properly typed input, as shown by the default print method:

    R> library(RcppTOML)
    R> parseTOML("inst/toml/example.toml")
    List of 5
     $ clients :List of 2
      ..$ data :List of 2
      .. ..$ : chr [1:2] "gamma" "delta"
      .. ..$ : int [1:2] 1 2
      ..$ hosts: chr [1:2] "alpha" "omega"
     $ database:List of 4
      ..$ connection_max: int 5000
      ..$ enabled       : logi TRUE
      ..$ ports         : int [1:3] 8001 8001 8002
      ..$ server        : chr "192.168.1.1"
     $ owner   :List of 2
      ..$ dob : POSIXct[1:1], format: "1979-05-27 15:32:00"
      ..$ name: chr "Tom Preston-Werner"
     $ servers :List of 2
      ..$ alpha:List of 2
      .. ..$ dc: chr "eqdc10"
      .. ..$ ip: chr "10.0.0.1"
      ..$ beta :List of 2
      .. ..$ dc: chr "eqdc10"
      .. ..$ ip: chr "10.0.0.2"
     $ title   : chr "TOML Example"
    R>
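
    Individual values can then be accessed like any nested R list; for instance (a small follow-on to the parse above, assuming the same file):

    R> toml <- parseTOML("inst/toml/example.toml")
    R> toml$database$ports
    [1] 8001 8001 8002
    R> toml$owner$name
    [1] "Tom Preston-Werner"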

    See the other examples and the upstream documentation for more. Also note that most decent editors have proper TOML support, which makes editing and previewing a breeze.

    Installation

    Installation from source requires a C++17 compiler, and g++ versions 8 and onward should suffice.

    From CRAN

    The package is on CRAN and can be installed from every mirror via

    install.packages("RcppTOML")
    

    From the ghrr-drat

    Development releases may be provided by the ghrr repository, which can be accessed via

    ## if needed, first do:  install.packages("drat")
    drat::addRepo("ghrr")
    

    after which install.packages("RcppTOML") will also access this repo.

    Alternatively, set the repo information on the fly as e.g. in

    repos <- c("https://ghrr.github.io/drat", "https://cloud.r-project.org")
    install.packages("RcppTOML", repos=repos)

    which points to the ghrr repository as well as a standard CRAN mirror, but just for the length of this installation step.

    Status

    Earlier versions relied upon cpptoml and were feature-complete with TOML v0.5.0 (see the tests/ directory). They already parsed everything that the underlying cpptoml parsed with the same (sole) exception of unicode escape characters in strings.

    Since switching to toml++ the package takes advantage of its comprehensive TOML v1.0.0 support and should now be fully 1.0.0 compliant. Some new tests were added to demonstrate this.

    As toml++ also offers export to JSON and YAML, as well as TOML writing, we may add support for some of these features going forward.

    Continued Testing

    As we rely on the tinytest package, the already-installed package can also be verified via

    tinytest::test_package("RcppTOML")

    at any point in time.

    Author

    Dirk Eddelbuettel

    License

    GPL (>= 2)

    Visit original content creator repository
    https://github.com/eddelbuettel/rcpptoml
  • etl-airflow-redshift

    Data Pipelines with Airflow and AWS

    This project is part of the Data Engineering Nanodegree program from Udacity. I manipulate data for a music streaming app called Sparkify, using Apache Airflow to introduce more automation and monitoring to their data warehouse ETL pipelines.

    I create data pipelines that are dynamic and built from reusable tasks, can be monitored, and allow easy backfills. I also implement tests against the datasets after the ETL steps have been executed to catch any discrepancies in the database.

    The source data resides in S3 and needs to be processed in a data warehouse in Amazon Redshift. The source datasets consist of JSON logs that tell about user activity in the application and JSON metadata about the songs the users listen to.
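
    The data-quality idea can be sketched as a standalone Airflow task. This is a hedged illustration, not this project's actual DAG: it assumes Airflow 2.x with the postgres provider installed, and the connection id ("redshift") and table name ("songs") are placeholders:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator
    from airflow.providers.postgres.hooks.postgres import PostgresHook


    def check_table_not_empty(table, conn_id="redshift"):
        # Fail the task (and surface an alert) if the target table has no rows.
        hook = PostgresHook(postgres_conn_id=conn_id)
        records = hook.get_first(f"SELECT COUNT(*) FROM {table}")
        if not records or records[0] < 1:
            raise ValueError(f"Data quality check failed: {table} is empty")


    with DAG(
        dag_id="quality_checks_example",
        start_date=datetime(2024, 1, 1),
        schedule_interval=None,
        catchup=False,
    ) as dag:
        PythonOperator(
            task_id="check_songs_table",
            python_callable=check_table_not_empty,
            op_kwargs={"table": "songs"},
        )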

    Install

    To set up your Python environment to run the code in this repository, start by
    creating a new environment with Anaconda and installing the dependencies.

    $ conda create --name ngym36 python=3.6
    $ source activate ngym36
    $ pip install -r requirements.txt

    Run

    In a terminal or command window, navigate to the top-level project directory (the one that contains this README). You need to set up a Redshift cluster: start by renaming the file confs/dpipe.template.cfg to confs/dpipe.cfg and filling in the KEY and SECRET in the AWS section. Then, run the following commands:

    $ python iac.py -i
    $ python iac.py -r
    $ watch -n 15 'python iac.py -s'

    The above instructions will create the IAM role, create the Redshift cluster, and check the status of the cluster every 15 seconds. Fill in the other fields of your dpipe.cfg as they show up in the commands’ console output. After Amazon finally launches your cluster, run:

    $ python iac.py -t
    $ . setup-airflow.sh
    $ python iac.py -a
    $ . start-ariflow.sh

    The first command opens a TCP port to your cluster so that you can manipulate data from outside. The second command sets your AIRFLOW_HOME to the airflow/ folder in the current project; some errors will show up, don’t worry. The third command sets the required variables and connections, such as the Redshift host address and the AWS key. The last command starts the Airflow UI.

    Then, navigate to http://localhost:3000 in your browser and turn on the etl_dag. It will create all the tables and insert data from S3 into the staging, dimension, and fact tables. You can click on the DAG name to follow the process execution steps. Finally, CLEAN UP your resources using the commands below:

    $ . stop-ariflow.sh
    $ python iac.py -d
    $ watch -n 15 'python iac.py -s'

    Wait until the status check (the watch command) fails to find the cluster. Redshift is expensive.

    License

    The contents of this repository are covered under the MIT License.

    Visit original content creator repository
    https://github.com/ucaiado/etl-airflow-redshift

  • ft_tools_ros2

    ft_tools_ros2

    Wrench estimation and calibration of F/T sensors for ROS 2 applications.

    The current developments are based on the jazzy ROS 2 distribution (Ubuntu 24.04 LTS).

    Author: Thibault Poignonec: tpoignonec@unistra.fr

    Warning

    This package is currently under development!

    Stack content

    F/T sensor calibration utils

    FT parameters object

    The common interface used to perform sensor calibration and wrench estimation is the ft_tools::FtParameters object, which is used by both ft_tools::FtCalibration and ft_tools::FtEstimation. The sensor-specific calibration parameters are:

    mass: 0.0
    sensor_frame_to_com: [0.0, 0.0, 0.0]
    force_offset: [0.0, 0.0, 0.0]
    torque_offset: [0.0, 0.0, 0.0]

    The parameters can be loaded:

    • from a yaml file:
    ft_tools::FtParameters ft_parameters;
    // Load parameters from a YAML file (absolute path)
    const std::string filename("...");
    bool ok = ft_parameters.from_yaml(filename);
    // or from a config folder
    ok = ft_parameters.from_yaml(config_filename, config_package);
    • from a msg:
    // Get calibration msg (typically from a service call)
    // i.e., ft_msgs::srv::FtCalibration ft_parameters_msg;
    
    ft_tools::FtParameters ft_parameters;
    bool ok = ft_parameters.from_msg(ft_parameters_msg);

    The parameters can likewise be dumped to a yaml file or sent as a msg.

    FT calibration

    The ft_tools::FtCalibration class implements a simple sensor calibration method [1] using a least-squares regression (i.e., Eigen’s SVD solver).

    We are given N sets of measurements $\left( g, f, \tau \right)$ expressed in the sensor frame of reference where $g \in \mathbb{R}^3$ is the (signed) gravity and $f, \tau \in \mathbb{R}^3$ are the raw force and torque, respectively.

    We want to retrieve the F/T sensor calibration, which consists of

    • the mass $m$ in kg
    • the center of mass $c \in \mathbb{R}^3$ in m
    • the force offset $f_0 \in \mathbb{R}^3$ in N
    • the torque offset $\tau_0 \in \mathbb{R}^3$ in N.m

    If enough measurements were provided (i.e., about 10 well-chosen robot poses), the different parameters are identified by the least-squares regression such that

    $$ f_\text{meas} = -mg + f_0 \text{ and } \tau_\text{meas} = -mc \times g + \tau_0$$

    This process returns a ft_tools::FtParameters object.
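
    To make the regression concrete, here is a minimal numpy sketch of the identification step (an independent illustration of the equations above, not the package's actual Eigen-based code):

    import numpy as np

    def calibrate(g, f, tau):
        # g, f, tau: (N, 3) arrays of gravity, raw force and raw torque
        # measurements, all expressed in the sensor frame.
        N = g.shape[0]
        I = np.tile(np.eye(3), (N, 1))

        # Force model: f = -m g + f_0, linear in x = [m, f_0].
        A_f = np.hstack([-g.reshape(-1, 1), I])
        x, *_ = np.linalg.lstsq(A_f, f.ravel(), rcond=None)
        m, f0 = x[0], x[1:4]

        # Torque model: tau = -m c x g + tau_0 = [g]_x (m c) + tau_0,
        # linear in y = [m c, tau_0].
        def skew(v):
            return np.array([[0.0, -v[2], v[1]],
                             [v[2], 0.0, -v[0]],
                             [-v[1], v[0], 0.0]])

        A_t = np.vstack([np.hstack([skew(gi), np.eye(3)]) for gi in g])
        y, *_ = np.linalg.lstsq(A_t, tau.ravel(), rcond=None)
        c, tau0 = y[0:3] / m, y[3:6]
        return m, c, f0, tau0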

    Credits: the code is inspired by the ROS1 package force_torque_tools.

    FT estimation

    The ft_tools::FtEstimation implements a generic wrench estimator with the following features:

    • Gravity and offset compensation such that

      • ${}^{s}f_\text{est} = {}^{s}f_\text{meas} + m{}^{s}g - {}^{s}f_0$
      • ${}^{s}\tau_\text{est} = {}^{s}\tau_\text{meas} + {}^sp_{com} \times m{}^{s}g - {}^{s}\tau_0$
    • Express the force at the interaction point (typically, the end-effector) such that

      • ${}^{ee}f_\text{est} = {}^{ee}R_s {}^{s}f_\text{est}$
      • ${}^{ee}\tau_\text{est} = {}^{ee}R_s \left( {}^{s}\tau_\text{est} - {}^s p_{ee} \times {}^{s}f_\text{est} \right)$
    • Apply a wrench deadband on ${}^{ee}f_\text{est}$ and ${}^{ee}\tau_\text{est}$ (optional)

    • Apply low-pass filtering (200 Hz cutoff by default)
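
    As a plain-numpy illustration of the compensation and frame-change equations above (a sketch of the math, not the package's C++ implementation; variable names follow the formulas):

    import numpy as np

    def estimate_wrench(f_meas, tau_meas, g_s, m, com, f0, tau0, R_ee_s, p_ee_s):
        # Gravity and offset compensation, in the sensor frame.
        f_est = f_meas + m * g_s - f0
        tau_est = tau_meas + np.cross(com, m * g_s) - tau0
        # Express the wrench at the interaction point (e.g. the end-effector).
        f_ee = R_ee_s @ f_est
        tau_ee = R_ee_s @ (tau_est - np.cross(p_ee_s, f_est))
        return f_ee, tau_ee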

    Convenience nodes

    F/T calibration node

    Topics

    • ~/joint_states (input topic) [sensor_msgs::msg::JointState]

      Joint states of the robot used to monitor the Cartesian pose of the robot.

    • ~/raw_wrench (input topic) [geometry_msgs::msg::WrenchStamped]

      Raw (possibly filtered) wrench measured by the f/t sensor, expressed in the sensor frame.

    Services

    • ~/ft_calibration_node/add_calibration_sample [std_srvs::srv::Trigger]

      Add a calibration sample using the latest robot pose and measured wrench.

      The service call will return success = false in the following cases:

      • failure to update robot pose;
      • stale wrench measurement (older than 1s);
      • wrench measurement not expressed in supported frame;
      • invalid parameters.
    • ~/ft_calibration_node/get_calibration [ft_msgs::srv::GetCalibration]

      If enough samples were collected, returns the computed calibration parameters as a ft_msgs::srv::FtCalibration msg.

      The service call will return success = false in the following cases:

      • not enough measurements;
      • failure to solve the identification problem;
      • invalid parameters.
    • ~/ft_calibration_node/save_calibration [std_srvs::srv::Trigger]

      Save the estimated parameters to a YAML file.

      The destination file is <calibration_package_share_dir>/config/<calibration_filename>. It can be tuned with the parameters:

      ft_calibration_node:
        ros__parameters:
          ...
          calibration:
            ...
            calibration_filename: calibration.yaml
            calibration_package: ft_tools
            ...
    • ~/ft_calibration_node/reset [std_srvs::srv::Trigger]

      Reset calibration procedure.

    Wrench estimation node

    Topics

    • ~/joint_states (input topic) [sensor_msgs::msg::JointState]

      Joint states of the robot used to monitor the Cartesian pose of the robot.

    • ~/raw_wrench (input topic) [geometry_msgs::msg::WrenchStamped]

      Raw (possibly filtered) wrench measured by the f/t sensor, expressed in the sensor frame.

    • ~/estimated_wrench (output topic) [geometry_msgs::msg::WrenchStamped]

      Estimated wrench expressed in the sensor frame with:

      • gravity compensation;
      • sensor offsets compensation;
      • deadband.
    • ~/interaction_wrench (output topic) [geometry_msgs::msg::WrenchStamped]

      Estimated interaction wrench at interaction_frame, expressed in the reference_frame. The interaction wrench is computed from the estimated_wrench above.

    Services

    • ~/ft_estimation_node/set_calibration [ft_msgs::srv::SetCalibration]

      Set f/t sensor calibration parameters from a ft_msgs::srv::FtCalibration msg.

    • ~/ft_estimation_node/get_calibration [ft_msgs::srv::GetCalibration]

      See ~/ft_calibration_node/get_calibration for details.

    • ~/ft_estimation_node/save_calibration [std_srvs::srv::Trigger]

      See ~/ft_calibration_node/save_calibration for details.

    • ~/ft_estimation_node/reload_calibration [std_srvs::srv::Trigger]

      Reload calibration from config YAML file.

    Basic GUI

    Although everything can be done from RQT (i.e., using the service caller), a basic GUI is provided for convenience. Note that the GUI only has limited features and is overall not very robust to mishandling…

    Perform calibration

    Image not found!

    1. Edit the service namespace if needed;
    2. Click on ROS2 Connection to initialize ROS2 communication (i.e., rclpy init, register services, etc.);
    3. Click on Add calibration sample to call the service ~/ft_calibration_node/add_calibration_sample each time the robot is in position.

    Retrieve calibration parameters

    Once enough samples have been collected, go to the second tab of the GUI (see below) and click on Get calibration or Save calibration to call, respectively, the services:

    • ~/ft_calibration_node/get_calibration
    • ~/ft_calibration_node/save_calibration

    Also, the Get calibration button will retrieve the estimated calibration parameters and display them in the GUI.

    Image not found!

    Set wrench estimator calibration parameters

    TODO

    Examples

    Moveit2-assisted F/T calibration

    1. Install the third-party utils for ft_tools_examples:
    source /opt/ros/jazzy/setup.bash
    cd <ws>/src
    vcs import . < ft_tools_ros2/ft_tools_examples/ft_tools_examples.repos
    rosdep install --ignore-src --from-paths . -y -r
    cd ..
    colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release --symlink-install
    2. Launch

    Launch robot controllers in a first terminal:

    source install/local_setup.bash
    ros2 launch ft_tools_example launch_robot_controllers.launch.py

    Launch moveit2 planning + calibration/estimation nodes:

    source install/local_setup.bash
    ros2 launch ft_tools_example launch_moveit_ft_calibration.launch.py

    N.B., if the calibration file is updated, the service ft_estimation_node/reload_calibration must be called to refresh it.

    (optional) Run calibration GUI

    source install/local_setup.bash
    ros2 run ft_gui ft_calibration_gui
    3. Move the robot with the moveit rviz2 plugin and use the GUI to perform calibration

    F/T calibration during comanipulation

    TODO

    Wrench estimation for Cartesian admittance control

    TODO

    References

    • [1] D. Kubus, T. Kroger and F. M. Wahl, “On-line rigid object recognition and pose estimation based on inertial parameters,” 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007, pp. 1402-1408, doi: 10.1109/IROS.2007.4399184.

    Contacts

    icube

    ICube Laboratory, University of Strasbourg, France

    Thibault Poignonec: tpoignonec@unistra.fr, @github: tpoignonec

    Visit original content creator repository
    https://github.com/tpoignonec/ft_tools_ros2
  • reader

    BBS.io Reader

    Work in Progress

    This application will serve as an email and message board reader, mainly for use
    with BBS services. I am currently working on setting up build/release automation.

    Why?

    I am starting this project because most QWK and NNTP readers generally suck and/or
    are antiquated. Thunderbird is all but unusable with NNTP + Synchronet, QWK
    readers are somewhat dated, and I found that multimail didn’t work correctly
    for me out of the box.

    I want a GUI-based message reader for BBS message boards with a modern UI/UX.
    I’m using Rust on the backend to learn the language, and because I think it’s a
    good choice, generally speaking. I’m using Tauri with a browser-based UI because
    I’m very comfortable with browser-based UI/UX and want to reduce the friction on
    the UI side compared to what I will experience in learning the backend.

    I will make a best effort to write clear, discoverable code. This project may or
    may not follow best practices: on the backend, because I’m learning as I go, and
    on the frontend, because I’m probably going to take shortcuts as a faster path
    to done.

    I probably won’t try to integrate with any release channels until after I have
    an MVP of NNTP and email (SMTP/POP) at the very least. I will be testing this
    against my own BBS, which runs Synchronet. I may also add extended support
    for some Synchronet-specific services (ANSI user icons and polls).

    Longer term, after MVP, but before enhanced rendering I will make an effort to
    get updating releases into the various stores for OS use… I will likely limit
    Linux to Flathub and possibly Snapcraft, but unlikely to do any distro
    integration beyond this.

    TODO

    A high-level TODO list. Note: anything before 1.0 can have breaking changes at
    any time, regardless of the version.

    • Release Tracking (release-please)
    • React + MUI front end
      • BBS Configuration Entry
    • Communication to/from Rust backend
      • Save/Load BBS Configuration Entries
      • Figure out settings/data path(s)
      • SQLite in Rust
    • NNTP Group List
    • NNTP Group Subscribe
    • NNTP Fetch Headers/Bodies
    • Display Message List (flat)
      • Classic 3-pane layout
      • Groups/Forums on left
      • Message List on upper-right
      • Message+Header lower-right
    • Display Message Header
    • Display Message (flat/plain text)
    • Purge Old/Read Support
      • Database vacuum
    • Github Release Binaries (0.5.x)
      • Windows
        • x86_64 (msi, exe)
        • x86_64 offline (include web component, large) (msi, exe)
      • Mac
        • x86_64 (dmg, .app.tgz)
        • aarch64 (dmg, .app.tgz)
      • Linux
        • x86_64 AppImage
        • x86_64 .deb
        • armv7 AppImage
        • armv7 .deb
        • aarch64 AppImage
        • aarch64 .deb
    • Automated updates (0.6.x)
    • E-Mail (smtp/pop) (0.7.x)
    • Enhanced rendering (ansi/colors, etc) (0.8.x)
    • MVP Release v1.0
      • automated updates
      • Will test in v0.9.x and Push 1.0 when working/tested
    • Store integrations
      • WinGet
      • Windows Store
      • Apple Mac Store
      • Flathub
      • Snapcraft?
    • QWK Support
    • FTP for QWK
    • Message Attachments

    License

    MIT License

    Visit original content creator repository
    https://github.com/bbs-io/reader