CT scans of coffee-making equipment
478 by eucalyptuseye | 109 comments on Hacker News.
New best story on Hacker News: Show HN: Open-source obsidian.md sync server
Show HN: Open-source obsidian.md sync server
381 by acheong08 | 135 comments on Hacker News.
https://ift.tt/yqrwcWN Hello HN, I'm a recent high school graduate and can't afford $8 per month for the official sync service, so I tried my hand at replicating the server. It's still missing a few features, such as file recovery and history, but the basic sync is working. To the creators of Obsidian.md: I'm probably violating the TOS, and I'm sorry. I'll take down the repository if asked. It's not ready for production and is highly inefficient; it's not competition, so I hope you'll be lenient.
New best story on Hacker News: Beating GPT-4 on HumanEval with a fine-tuned CodeLlama-34B
Beating GPT-4 on HumanEval with a fine-tuned CodeLlama-34B
396 by rushingcreek | 138 comments on Hacker News.
Hi HN, We have fine-tuned CodeLlama-34B and CodeLlama-34B-Python on an internal Phind dataset, achieving 67.6% and 69.5% pass@1 on HumanEval, respectively. GPT-4 achieved 67%. To ensure result validity, we applied OpenAI's decontamination methodology to our dataset.

The CodeLlama models released yesterday demonstrate impressive performance on HumanEval:
- CodeLlama-34B achieved 48.8% pass@1 on HumanEval
- CodeLlama-34B-Python achieved 53.7% pass@1 on HumanEval

We have fine-tuned both models on a proprietary dataset of ~80k high-quality programming problems and solutions. Instead of code-completion examples, this dataset features instruction-answer pairs, setting it apart structurally from HumanEval. We trained the Phind models over two epochs, for a total of ~160k examples. LoRA was not used; both models underwent native fine-tuning. We employed DeepSpeed ZeRO 3 and Flash Attention 2 to train these models in three hours on 32 A100-80GB GPUs, with a sequence length of 4096 tokens.

Furthermore, we applied OpenAI's decontamination methodology to our dataset to ensure valid results, and found no contaminated examples. The methodology is:
- For each evaluation example, we randomly sampled three substrings of 50 characters, or used the entire example if it was fewer than 50 characters.
- A match was identified if any sampled substring was a substring of the processed training example.
For further details on the decontamination methodology, please refer to Appendix C of OpenAI's technical report.

The pass@1 scores we achieved with our fine-tuned models:
- Phind-CodeLlama-34B-v1 achieved 67.6% pass@1 on HumanEval
- Phind-CodeLlama-34B-Python-v1 achieved 69.5% pass@1 on HumanEval

Note on GPT-4: In its official technical report in March, OpenAI reported a pass@1 score of 67% for GPT-4 on HumanEval. Since then, there have been claims of higher scores, but there is no concrete evidence that the model's coding abilities have improved, and those elevated figures lack the rigorous contamination analysis the official statistic underwent, making them a less reliable comparison. As a result, we consider 67% to be the pass@1 score for GPT-4.

Download: We are releasing both models on Hugging Face for verifiability and to bolster the open-source community, and we welcome independent verification of the results. Phind-CodeLlama-34B-v1: https://ift.tt/47OCcbh Phind-CodeLlama-34B-Python-v1: https://ift.tt/vipHQmb We'd love to hear your thoughts! Best, The Phind Team
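The two-step decontamination rule described in the post is concrete enough to sketch. Below is a minimal, hypothetical Python version of that substring-match test; the function and variable names are illustrative (this is not Phind's code), and it skips whatever normalization the phrase "processed training example" implies.

```python
import random

def is_contaminated(eval_example: str, train_example: str,
                    n_samples: int = 3, substr_len: int = 50) -> bool:
    """Flag a training example that contains a sampled substring of an
    evaluation example (the OpenAI-style decontamination check)."""
    if len(eval_example) < substr_len:
        # Evaluation examples shorter than 50 characters are used whole.
        probes = [eval_example]
    else:
        starts = [random.randrange(len(eval_example) - substr_len + 1)
                  for _ in range(n_samples)]
        probes = [eval_example[s:s + substr_len] for s in starts]
    return any(p in train_example for p in probes)

# Hypothetical usage: drop any training example matching any eval example.
# clean = [t for t in train_set
#          if not any(is_contaminated(e, t) for e in eval_set)]
```

Because the probes are random, the check is probabilistic: a contaminated pair can slip through, but three 50-character probes per evaluation example make a miss unlikely in practice.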
New best story on Hacker News: Show HN: LLMs can generate valid JSON 100% of the time
Show HN: LLMs can generate valid JSON 100% of the time
532 by remilouf | 166 comments on Hacker News.
Outlines is a Python library that focuses on text generation with large language models. Brandon and I are not LLM experts; we started the project a few months ago because we wanted to understand better how the generation process works. Our original background is probabilistic, relational and symbolic programming.

Recently we came up with a fast way to generate text that matches a regex ( https://ift.tt/ks1B6f3... ). The basic idea is simple: regular expressions have an equivalent Deterministic Finite Automaton (DFA) representation. We can transform this DFA into a generative model: in each state we get a list of symbols which correspond to completions that partially match the regular expression. We mask the other symbols in the logits returned by a large language model, sample a new symbol and move to the next state. The subtlety is that language models work with tokens, not symbols, so we derive a new FSM whose alphabet is the model's vocabulary. We can do this in only one pass over the vocabulary, so generating the token masks requires only a dictionary lookup at each state. Our method blows other libraries like Microsoft's guidance out of the water.

From there it was only a small leap to generating text that follows a JSON schema ( https://ift.tt/7QpKsL9 ) or is parseable into a Pydantic model ( https://ift.tt/zfZJRbS ). The method works with union types, optional types, nested schemas, arrays, everything. It is guaranteed that the output is parseable. I think it's cool, and I've spent a lot of time watching even tiny models output valid JSON over the weekend. Hope you will too. I look forward to feedback, bug reports, feature requests and discussions!

Edit: Link to our pre-print explaining the method and how it can be extended to generate text that follows a context-free grammar: https://ift.tt/PkBThz5
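The masking step the authors describe is easy to picture in code. Here is a minimal, framework-agnostic sketch of the decoding loop, not Outlines' actual API: the `fsm` object and its interface are assumptions, `model` is assumed to return Hugging Face-style `.logits`, and `fsm.allowed_tokens` stands in for the precomputed state-to-token-IDs dictionary that makes each step a single lookup.

```python
import torch

def constrained_generate(model, fsm, prompt_ids, max_tokens=128):
    """Sample tokens while masking logits so the output always matches the FSM.

    `fsm` is a hypothetical object assumed to expose:
      - fsm.start: the initial state
      - fsm.allowed_tokens[state]: token IDs permitted in this state
        (precomputed once over the vocabulary, per the post)
      - fsm.next_state(state, token_id): the transition function
      - fsm.is_final(state): whether generation may stop here
    """
    ids = list(prompt_ids)
    state = fsm.start
    for _ in range(max_tokens):
        logits = model(torch.tensor([ids])).logits[0, -1]  # next-token logits
        mask = torch.full_like(logits, float("-inf"))
        mask[fsm.allowed_tokens[state]] = 0.0  # keep only valid continuations
        probs = torch.softmax(logits + mask, dim=-1)  # invalid tokens get p=0
        token = torch.multinomial(probs, 1).item()
        ids.append(token)
        state = fsm.next_state(state, token)
        if fsm.is_final(state):
            break
    return ids
```

The guarantee falls out of the masking itself: every sampled token extends a string the automaton accepts, so the finished output cannot fail to match the regex (or the JSON schema compiled into one).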
New best story on Hacker News: Postgres Language Server
Postgres Language Server
822 by kiwicopple | 101 comments on Hacker News.
hey HN. this is a Language Server[0] designed specifically for Postgres. A language server adds features to IDEs (VSCode, NeoVim, etc) - features like auto-complete, go-to-definition, or documentation on hover.

there have been previous attempts at adding Postgres support to code editors. usually these attempts implement a generic SQL parser and then offer various "flavours" of SQL. this attempt is different because it uses the actual Postgres parser to do the heavy lifting. this is done via libpg_query, an excellent C library for accessing the PostgreSQL parser outside of the server. we feel this is a better approach because it gives developers 100% confidence in the parser, and it allows us to keep up with the rapid development of Postgres.

this is still in early development, and mostly useful for testers/collaborators. the majority of the work is still ahead, but we've verified that the approach works. we're making it public now so that we can develop it in the open with input from the community.

a lot of the credit belongs to pganalyze[1] for their work on libpg_query, and to psteinroe ( https://ift.tt/ukdlQ8I ), who is the creator and maintainer.

[0] LSP: https://ift.tt/K7U9exJ
[1] pganalyze: https://pganalyze.com/
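To make the "real parser" point concrete, here is a small sketch using pglast, an existing Python binding to the same libpg_query library. The `check_sql` helper is hypothetical, and the exact exception class pglast raises has varied across versions, so this catches broadly; it is an illustration of the kind of diagnostic a libpg_query-backed language server can produce, not this project's code.

```python
from pglast import parse_sql  # Python binding to libpg_query

def check_sql(source: str) -> list[str]:
    """Return syntax diagnostics using the real Postgres parser."""
    try:
        parse_sql(source)  # rejects exactly what the Postgres server rejects
        return []
    except Exception as err:  # pglast's parse error; class name varies by version
        return [str(err)]

print(check_sql("SELECT id, name FROM users WHERE"))  # incomplete WHERE clause
print(check_sql("SELECT id FROM users"))              # [] - parses cleanly
```

A generic SQL parser has to approximate Postgres-specific syntax; delegating to libpg_query means the editor's idea of "valid" can never drift from the server's.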