reor vs wink-embeddings-sg-100d
Side-by-side comparison to help you choose.
| Feature | reor | wink-embeddings-sg-100d |
|---|---|---|
| Type | Repository | Repository |
| UnfragileRank | 48/100 | 24/100 |
| Adoption | 1 | 0 |
| Quality | 0 | 0 |
| Ecosystem | 1 | 1 |
| Match Graph | 0 | 0 |
| Pricing | Free | Free |
| Capabilities (decomposed) | 13 | 5 |
| Times Matched | 0 | 0 |
Reor implements semantic search by embedding note content using Transformers.js (client-side ONNX models) and storing vectors in LanceDB, a local vector database with native bindings. The system supports both pure vector similarity search and hybrid mode combining semantic matching with keyword indexing, enabling full-text discovery without cloud API calls. Search operates entirely on-device with no data transmission, using LanceDB's columnar storage for fast approximate nearest neighbor queries across note collections.
Unique: Uses Transformers.js for client-side embedding generation instead of API calls, combined with LanceDB's native bindings for platform-optimized vector storage, enabling zero-network-latency semantic search with full data privacy. Hybrid mode implementation merges vector similarity with keyword matching at query time rather than pre-computing combined scores.
vs alternatives: Faster than Pinecone/Weaviate for local use cases (no network round-trip) and more privacy-preserving than cloud vector DBs; slower than specialized FAISS implementations but with better multi-platform support and easier integration with Electron apps.
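A condensed sketch of such a pipeline follows; this is not Reor's actual code, and the model name, storage path, table name, and the older `vectordb` LanceDB bindings are assumptions.

```typescript
import { pipeline } from '@xenova/transformers';
import { connect } from 'vectordb'; // LanceDB Node bindings; newer releases ship as @lancedb/lancedb

// Embed text locally with a client-side ONNX model (model choice is an assumption).
async function embed(text: string): Promise<number[]> {
  const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
  const output = await extractor(text, { pooling: 'mean', normalize: true });
  return Array.from(output.data as Float32Array);
}

// Query a LanceDB table of note chunks by vector similarity.
async function semanticSearch(query: string, k = 5) {
  const db = await connect('./vector-store');        // hypothetical local path
  const table = await db.openTable('note_chunks');   // hypothetical table name
  return table
    .search(await embed(query))
    .limit(k)
    .execute();                                      // older vectordb API; @lancedb/lancedb uses toArray()
}
```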
Reor automatically discovers and surfaces related notes by computing vector similarity between note embeddings and clustering semantically similar content. The system runs in the background, generating embeddings for all notes and maintaining a similarity graph that populates a sidebar panel showing related notes while editing. This creates a knowledge graph without requiring manual wiki-style link syntax, using the same embedding infrastructure as semantic search to identify conceptual relationships.
Unique: Implements automatic linking through continuous vector similarity computation rather than explicit backlink syntax or manual curation, creating emergent knowledge graphs that evolve as note content changes. Bidirectional linking is computed on-demand when notes are opened, avoiding expensive pre-computation of full similarity matrices.
vs alternatives: More discoverable than Obsidian's manual backlink system and more privacy-preserving than cloud-based note-linking services; less precise than human-curated links but requires zero manual effort to maintain.
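A minimal sketch of the similarity ranking behind such a sidebar, assuming note embeddings are already computed; the types and names are illustrative, not Reor's internals.

```typescript
// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

interface NoteEmbedding { path: string; vector: number[]; }

// Rank every other note against the one currently being edited.
function relatedNotes(current: NoteEmbedding, all: NoteEmbedding[], k = 5): NoteEmbedding[] {
  return all
    .filter(n => n.path !== current.path)
    .map(n => ({ note: n, score: cosine(current.vector, n.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(r => r.note);
}
```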
Reor maintains conversation history in the chat interface, storing user messages and LLM responses with timestamps. The system preserves conversation context by including previous messages when generating new responses, enabling multi-turn dialogue. Conversation history is stored in-memory during the session; users can optionally save conversations to disk for later reference. The system manages context window constraints by truncating older messages if the full history exceeds the LLM's context limit.
Unique: Manages conversation history with context window awareness, automatically truncating older messages to fit within LLM limits. Conversations can be saved to disk as JSON or markdown for persistence and sharing.
vs alternatives: Simpler than ChatGPT's conversation management; no built-in search or organization but sufficient for single-session use cases.
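A sketch of context-window-aware truncation under a simple character-based token estimate; a real implementation would use the model's tokenizer, and the names here are illustrative.

```typescript
interface ChatMessage { role: 'user' | 'assistant'; content: string; timestamp: number; }

// Very rough token estimate (~4 characters per token); a proper tokenizer would be more accurate.
const estimateTokens = (text: string) => Math.ceil(text.length / 4);

// Keep the most recent messages that fit within the model's context budget.
function truncateHistory(history: ChatMessage[], maxTokens: number): ChatMessage[] {
  const kept: ChatMessage[] = [];
  let used = 0;
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i].content);
    if (used + cost > maxTokens) break;
    kept.unshift(history[i]);
    used += cost;
  }
  return kept;
}
```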
Reor is built as an Electron application that runs on macOS (x64/ARM), Windows (x64), and Linux (x64), providing a native desktop experience across platforms. The build system packages the application for each platform with platform-specific optimizations (e.g., ARM support for Apple Silicon). Auto-update functionality checks for new releases and prompts users to upgrade, with differential updates to minimize download size.
Unique: Packages Reor as a native Electron app with platform-specific optimizations (ARM support for Apple Silicon) and auto-update functionality. LanceDB native bindings are compiled for each platform, enabling optimized vector database performance.
vs alternatives: More performant than web-based alternatives; larger download size and memory footprint than native apps but simpler to develop and maintain than separate native implementations.
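An illustrative fragment of how auto-update typically looks in an Electron main process, using electron-updater; whether Reor wires it exactly this way is an assumption.

```typescript
import { app, dialog } from 'electron';
import { autoUpdater } from 'electron-updater';

app.whenReady().then(() => {
  // Check the configured release feed for a newer build.
  autoUpdater.checkForUpdatesAndNotify();

  // Once an update has been downloaded, prompt the user to restart and apply it.
  autoUpdater.on('update-downloaded', async () => {
    const { response } = await dialog.showMessageBox({
      message: 'A new version has been downloaded. Restart to apply it?',
      buttons: ['Restart', 'Later'],
    });
    if (response === 0) autoUpdater.quitAndInstall();
  });
});
```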
While Reor is designed for local-first operation, it supports optional integration with cloud LLM providers (OpenAI, Anthropic) for users who prefer higher-quality models or need specific capabilities. Users can configure API keys in settings and switch between local and cloud models at runtime. The system maintains a unified chat interface regardless of LLM provider, with fallback logic to use local models if cloud API calls fail.
Unique: Provides optional cloud LLM integration while maintaining local-first as default, with unified chat interface and fallback logic. Users can switch providers at runtime without changing application code.
vs alternatives: More flexible than local-only systems; enables access to higher-quality models while preserving privacy-first design. Simpler than building separate cloud and local implementations.
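A sketch of what a unified provider interface with fallback can look like; the interface and names are illustrative, not Reor's actual types.

```typescript
// A unified provider abstraction over local and cloud LLM backends.
interface LLMProvider {
  name: string;
  generate(prompt: string): Promise<string>;
}

// Try the user-selected (possibly cloud) provider first; fall back to a local one on failure.
async function generateWithFallback(
  primary: LLMProvider,
  fallback: LLMProvider,
  prompt: string,
): Promise<string> {
  try {
    return await primary.generate(prompt);
  } catch (err) {
    console.warn(`${primary.name} failed, falling back to ${fallback.name}`, err);
    return fallback.generate(prompt);
  }
}
```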
Reor implements a Retrieval-Augmented Generation (RAG) chat system where user questions trigger semantic search across notes to retrieve relevant chunks, which are then passed as context to a local LLM (via Ollama or Transformers.js) for answer generation. The system manages a conversation history, formats retrieved note chunks as context, and streams LLM responses back to the UI. All processing occurs locally; no conversation data or note content is sent to external APIs unless explicitly configured to use cloud models (OpenAI/Anthropic).
Unique: Implements RAG by combining local semantic search (Transformers.js + LanceDB) with local LLM execution (Ollama), creating a fully offline Q&A system with no external API dependencies. Context retrieval is integrated into the chat flow via IPC communication between Electron main process (LLM execution) and renderer (UI), with streaming responses for real-time feedback.
vs alternatives: More private than ChatGPT plugins or cloud-based RAG services; slower response times than API-based alternatives but eliminates data transmission and API costs.
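A compressed sketch of the retrieve-then-generate flow, assuming a local Ollama server on its default port; the retriever, model name, and prompt format are placeholders.

```typescript
// `retrieve` is any local semantic search function (e.g. the LanceDB sketch earlier)
// that returns the top-k note chunks for a query.
type Retriever = (query: string, k: number) => Promise<{ text: string }[]>;

async function answerWithRAG(question: string, retrieve: Retriever): Promise<string> {
  const chunks = await retrieve(question, 5);                      // 1. retrieval
  const context = chunks.map(c => c.text).join('\n---\n');         // 2. format chunks as context

  const res = await fetch('http://localhost:11434/api/generate', { // 3. local generation via Ollama
    method: 'POST',
    body: JSON.stringify({
      model: 'llama3',                                             // hypothetical model name
      prompt: `Answer using only the notes below.\n\nNotes:\n${context}\n\nQuestion: ${question}`,
      stream: false,
    }),
  });
  const data = await res.json();
  return data.response;                                            // Ollama's JSON reply carries `response`
}
```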
Reor provides an Obsidian-like markdown editor built into the Electron renderer process, supporting syntax highlighting, real-time preview, and backlink/wikilink syntax (`[[note-name]]`). The editor integrates with the note filesystem layer to enable creating, editing, and linking notes within the PKM system. Backlinks are rendered as clickable references that navigate to linked notes, and the editor supports standard markdown formatting with code block syntax highlighting.
Unique: Integrates markdown editing directly into Electron app with real-time backlink visualization and wikilink navigation, avoiding the need for external editors. Backlinks are computed from the vector similarity graph, so related notes surface automatically even without explicit `[[links]]`.
vs alternatives: More integrated than using VS Code or external editors; less feature-rich than Obsidian but tightly coupled with local AI capabilities for automatic linking and RAG.
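A small sketch of the kind of wikilink extraction such an editor needs, using a regular expression; purely illustrative.

```typescript
// Extract [[wikilink]] targets from a markdown note body.
function extractWikilinks(markdown: string): string[] {
  const links: string[] = [];
  const pattern = /\[\[([^\]]+)\]\]/g;
  let match: RegExpExecArray | null;
  while ((match = pattern.exec(markdown)) !== null) {
    links.push(match[1].trim());
  }
  return links;
}

// extractWikilinks('See [[project-ideas]] and [[reading list]].')
// -> ['project-ideas', 'reading list']
```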
Reor integrates with Ollama, a local LLM runtime, to execute language models entirely on the user's machine. The system allows users to configure which Ollama model to use for chat and text generation, with support for switching models without restarting the app. The main process communicates with Ollama via HTTP API calls, streaming responses back to the renderer for real-time display. Users can also configure cloud-based LLM providers (OpenAI, Anthropic) as fallbacks or alternatives.
Unique: Abstracts LLM execution behind a unified interface that supports both local Ollama models and cloud APIs (OpenAI/Anthropic), allowing users to switch providers without changing application code. Model configuration is persisted in settings and can be changed at runtime without app restart.
vs alternatives: More flexible than hardcoding a single LLM provider; slower than cloud APIs but eliminates API costs and data transmission. Ollama integration is simpler than managing LLM weights directly but requires external process management.
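A sketch of consuming Ollama's streaming chat endpoint, which returns newline-delimited JSON; in an Electron app the main process would forward these chunks to the renderer over IPC. The wiring here is illustrative, not Reor's code.

```typescript
// Stream assistant tokens from a local Ollama server; each line of the body is a JSON object.
async function* streamChat(model: string, messages: { role: string; content: string }[]) {
  const res = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    body: JSON.stringify({ model, messages, stream: true }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    let newline: number;
    while ((newline = buffer.indexOf('\n')) >= 0) {
      const line = buffer.slice(0, newline).trim();
      buffer = buffer.slice(newline + 1);
      if (line) yield JSON.parse(line).message?.content ?? ''; // chat chunks carry { message: { content } }
    }
  }
}
```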
Reor has 5 additional decomposed capabilities not detailed here.
wink-embeddings-sg-100d provides pre-trained 100-dimensional English word embeddings, trained with a skip-gram model (the "sg" in the package name) on English corpora. The embeddings are stored as a compact, browser-compatible data structure that maps English words to their corresponding 100-element dense vectors. Integration with wink-nlp allows direct vector retrieval for any word in the vocabulary, enabling downstream NLP tasks like semantic similarity, clustering, and vector-based search without requiring model training or external API calls.
Unique: Lightweight, browser-native 100-dimensional skip-gram embeddings specifically optimized for wink-nlp's tokenization pipeline, avoiding the need for external embedding services or large model downloads while maintaining semantic quality suitable for JavaScript-based NLP workflows.
vs alternatives: Smaller footprint and faster load times than full-scale embedding models (Word2Vec, FastText), while providing pre-trained semantic quality without the API calls required by commercial embedding services (OpenAI, Cohere).
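A short sketch of retrieving a word's vector through wink-nlp; the embeddings-as-third-argument loading call and the `its.vector` accessor follow recent wink-nlp releases and should be treated as assumptions to verify against the current documentation.

```typescript
// CommonJS requires keep this sketch independent of wink-nlp's TypeScript typings,
// which may not yet describe the embeddings parameter.
const winkNLP = require('wink-nlp');
const model = require('wink-eng-lite-web-model');
const embeddings = require('wink-embeddings-sg-100d');

// Assumption: embeddings are passed as a third argument when instantiating wink-nlp.
const nlp = winkNLP(model, ['sbd', 'pos'], embeddings);
const its = nlp.its;

const doc = nlp.readDoc('apple');
// Assumption: its.vector yields the token's 100-dimensional vector.
const vector: number[] = doc.tokens().itemAt(0).out(its.vector);
console.log(vector.length); // expected: 100
```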
Enables calculation of cosine similarity or other distance metrics between two word embeddings by retrieving their respective 100-dimensional vectors and computing the dot product normalized by vector magnitudes. This allows developers to quantify semantic relatedness between English words programmatically, supporting downstream tasks like synonym detection, semantic clustering, and relevance ranking without manual similarity thresholds.
Unique: Direct integration with wink-nlp's tokenization ensures consistent preprocessing before similarity computation, and the 100-dimensional vectors capture English semantic relationships without requiring external similarity libraries or API calls.
vs alternatives: Faster and more transparent than API-based similarity services (e.g., the Hugging Face Inference API) because computation happens locally with no network latency, while maintaining semantic quality comparable to larger word-embedding models.
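A sketch of the computation described above, with a hypothetical word-to-vector lookup standing in for the package's data structure.

```typescript
type WordVectors = Map<string, number[]>; // hypothetical word -> 100-d vector lookup

// Dot product normalized by the two vector magnitudes, i.e. cosine similarity.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Quantify semantic relatedness between two English words.
function wordSimilarity(w1: string, w2: string, vectors: WordVectors): number | undefined {
  const v1 = vectors.get(w1.toLowerCase());
  const v2 = vectors.get(w2.toLowerCase());
  return v1 && v2 ? cosineSimilarity(v1, v2) : undefined; // undefined for out-of-vocabulary words
}
```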
Retrieves the k-nearest words to a given query word by computing distances between the query's 100-dimensional embedding and all words in the vocabulary, then sorting by distance to identify semantically closest neighbors. This enables discovery of related terms, synonyms, and contextually similar words without manual curation, supporting applications like auto-complete, query suggestion, and semantic exploration of language structure.
Unique: Leverages wink-nlp's tokenization consistency to ensure query words are preprocessed identically to the training data, and the 100-dimensional vectors make exhaustive nearest-neighbor search over the vocabulary fast enough that no specialized indexing library is required.
vs alternatives: Simpler to implement and deploy than approximate nearest-neighbor systems (FAISS, Annoy) for small-to-medium vocabularies, while providing deterministic results without randomization or approximation errors.
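A sketch of brute-force nearest-neighbor search over such a lookup; the vocabulary map is the same hypothetical structure as above, and exhaustive scanning is acceptable at this scale.

```typescript
// Cosine similarity helper (same formula as in the previous sketch).
const cosine = (a: number[], b: number[]) => {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
};

// Brute-force k-nearest-neighbor search over the whole vocabulary.
function nearestWords(
  query: string,
  vectors: Map<string, number[]>, // hypothetical word -> 100-d vector lookup
  k = 10,
): { word: string; score: number }[] {
  const qv = vectors.get(query.toLowerCase());
  if (!qv) return [];
  const scored: { word: string; score: number }[] = [];
  for (const [word, vec] of vectors) {
    if (word === query.toLowerCase()) continue;
    scored.push({ word, score: cosine(qv, vec) });
  }
  return scored.sort((a, b) => b.score - a.score).slice(0, k);
}
```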
Computes aggregate embeddings for multi-word sequences (sentences, phrases, documents) by combining individual word embeddings through averaging, weighted averaging, or other pooling strategies. This enables representation of longer text spans as single vectors, supporting document-level semantic tasks like clustering, classification, and similarity comparison without requiring sentence-level pre-trained models.
Unique: Integrates with wink-nlp's tokenization pipeline to ensure consistent preprocessing of multi-word sequences, and provides simple aggregation strategies suitable for lightweight JavaScript environments without requiring sentence-level transformer models.
vs alternatives: Significantly faster and lighter than sentence-level embedding models (Sentence-BERT, Universal Sentence Encoder) for document-level tasks, though with lower semantic quality; suitable for resource-constrained environments or rapid prototyping.
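A sketch of mean pooling over word vectors; the whitespace tokenizer here is only a stand-in for wink-nlp's tokenizer.

```typescript
// Mean-pool word vectors into a single vector for a phrase, sentence, or document.
function sentenceVector(text: string, vectors: Map<string, number[]>, dims = 100): number[] {
  const sum = new Array(dims).fill(0);
  let count = 0;
  for (const token of text.toLowerCase().split(/\s+/)) {
    const v = vectors.get(token);
    if (!v) continue;                                   // skip out-of-vocabulary tokens
    for (let i = 0; i < dims; i++) sum[i] += v[i];
    count++;
  }
  return count ? sum.map(x => x / count) : sum;         // zero vector if nothing matched
}
```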
Supports clustering of words or documents by treating their embeddings as feature vectors and applying standard clustering algorithms (k-means, hierarchical clustering) or dimensionality reduction techniques (PCA, t-SNE) to visualize or group semantically similar items. The 100-dimensional vectors provide sufficient semantic information for unsupervised grouping without requiring labeled training data or external ML libraries.
Unique: Provides pre-trained semantic vectors for English that can be fed directly into standard clustering and visualization pipelines without any model training, enabling rapid exploratory analysis in JavaScript environments.
vs alternatives: Faster to prototype with than training custom embeddings or using API-based clustering services, while maintaining semantic quality sufficient for exploratory analysis, though less sophisticated than specialized topic-modeling frameworks (LDA, BERTopic).
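A sketch of feeding such vectors into a tiny k-means implementation (Lloyd's algorithm); illustrative only, since a real project might reach for a dedicated clustering library.

```typescript
// A tiny k-means over embedding vectors, with random initial centroids.
function kMeans(points: number[][], k: number, iterations = 20): number[] {
  // Squared Euclidean distance between two vectors.
  const dist2 = (a: number[], b: number[]) => a.reduce((s, ai, i) => s + (ai - b[i]) ** 2, 0);

  // Initialise centroids from k (roughly) random points.
  let centroids = [...points].sort(() => Math.random() - 0.5).slice(0, k).map(p => [...p]);
  let assignments: number[] = new Array(points.length).fill(0);

  for (let iter = 0; iter < iterations; iter++) {
    // Assignment step: each point joins its nearest centroid.
    assignments = points.map(p => {
      let best = 0, bestD = Infinity;
      centroids.forEach((c, ci) => {
        const d = dist2(p, c);
        if (d < bestD) { bestD = d; best = ci; }
      });
      return best;
    });
    // Update step: each centroid moves to the mean of its members.
    centroids = centroids.map((c, ci) => {
      const members = points.filter((_, pi) => assignments[pi] === ci);
      if (members.length === 0) return c; // keep an empty cluster's centroid in place
      return c.map((_, d) => members.reduce((s, m) => s + m[d], 0) / members.length);
    });
  }
  return assignments; // cluster index per input vector
}
```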