New best story on Hacker News: Show HN: Open-source obsidian.md sync server

Show HN: Open-source obsidian.md sync server
381 by acheong08 | 135 comments on Hacker News.
https://ift.tt/yqrwcWN Hello HN, I'm a recent high school graduate and can't afford $8 per month for the official sync service, so I tried my hand at replicating the server. It's still missing a few features, such as file recovery and history, but basic sync is working. To the creators of Obsidian.md: I'm probably violating the TOS, and I'm sorry. I'll take down the repository if asked. It's not ready for production and is highly inefficient, so it's no real competition; I hope you'll be lenient.

New best story on Hacker News: Beating GPT-4 on HumanEval with a fine-tuned CodeLlama-34B

Beating GPT-4 on HumanEval with a fine-tuned CodeLlama-34B
396 by rushingcreek | 138 comments on Hacker News.
Hi HN,

We have fine-tuned CodeLlama-34B and CodeLlama-34B-Python on an internal Phind dataset, achieving 67.6% and 69.5% pass@1 on HumanEval, respectively. GPT-4 achieved 67%. To ensure result validity, we applied OpenAI's decontamination methodology to our dataset.

The CodeLlama models released yesterday demonstrate impressive performance on HumanEval:

- CodeLlama-34B achieved 48.8% pass@1 on HumanEval
- CodeLlama-34B-Python achieved 53.7% pass@1 on HumanEval

We have fine-tuned both models on a proprietary dataset of ~80k high-quality programming problems and solutions. Instead of code completion examples, this dataset features instruction-answer pairs, setting it apart structurally from HumanEval. We trained the Phind models over two epochs, for a total of ~160k examples. LoRA was not used; both models underwent native fine-tuning. We employed DeepSpeed ZeRO 3 and Flash Attention 2 to train these models in three hours using 32 A100-80GB GPUs, with a sequence length of 4096 tokens.

Furthermore, we applied OpenAI's decontamination methodology to our dataset to ensure valid results, and found no contaminated examples. The methodology is:

- For each evaluation example, we randomly sampled three substrings of 50 characters, or used the entire example if it was fewer than 50 characters.
- A match was identified if any sampled substring was a substring of the processed training example.

For further insight into the decontamination methodology, please refer to Appendix C of OpenAI's technical report.

Presented below are the pass@1 scores we achieved with our fine-tuned models:

- Phind-CodeLlama-34B-v1 achieved 67.6% pass@1 on HumanEval
- Phind-CodeLlama-34B-Python-v1 achieved 69.5% pass@1 on HumanEval

Note on GPT-4: in its official technical report in March, OpenAI reported a pass@1 score of 67% for GPT-4 on HumanEval. Since then, there have been claims of higher scores. However, there is no concrete evidence that the model's coding abilities have improved since then, and those higher figures lack the rigorous contamination analysis behind the official statistic, making them a less reliable comparison. As a result, we use 67% as the pass@1 score for GPT-4.

Download: we are releasing both models on Hugging Face for verifiability and to bolster the open-source community, and we welcome independent verification of the results.

Phind-CodeLlama-34B-v1: https://ift.tt/47OCcbh
Phind-CodeLlama-34B-Python-v1: https://ift.tt/vipHQmb

We'd love to hear your thoughts!

Best,
The Phind Team
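The two-step decontamination check described above is simple enough to sketch in a few lines. This is only an illustration of the stated methodology, not Phind's actual code; the example strings are made up.

```python
import random

def sample_substrings(example: str, k: int = 3, n: int = 50) -> list[str]:
    """Sample k random length-n substrings, or use the whole example
    if it is shorter than n characters (as in the methodology above)."""
    if len(example) < n:
        return [example]
    starts = [random.randrange(len(example) - n + 1) for _ in range(k)]
    return [example[s:s + n] for s in starts]

def is_contaminated(eval_example: str, train_example: str) -> bool:
    """Flag a match if any sampled substring of the evaluation example
    appears verbatim in the (processed) training example."""
    return any(sub in train_example for sub in sample_substrings(eval_example))

# A training example that quotes the eval example verbatim is flagged;
# unrelated text is not.
print(is_contaminated("def add(a, b): return a + b",
                      "Q: write add. A: def add(a, b): return a + b"))  # True
print(is_contaminated("def add(a, b): return a + b", "unrelated text"))  # False
```

Any training example flagged this way would be dropped before fine-tuning.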

New best story on Hacker News: Hugging Face raises $235M from investors including Salesforce and Nvidia

Hugging Face raises $235M from investors including Salesforce and Nvidia
365 by immortal3 | 195 comments on Hacker News.


New best story on Hacker News: Code Llama, a state-of-the-art large language model for coding

Code Llama, a state-of-the-art large language model for coding
494 by marcopicentini | 388 comments on Hacker News.


New best story on Hacker News: Code Llama, a state-of-the-art large language model for coding

Code Llama, a state-of-the-art large language model for coding
484 by nickthegreek | 192 comments on Hacker News.


New best story on Hacker News: Common mistakes in salary negotiation

Common mistakes in salary negotiation
392 by eamonnm | 315 comments on Hacker News.


New best story on Hacker News: AI real-time human full-body photo generator

AI real-time human full-body photo generator
402 by bookofjoe | 246 comments on Hacker News.


New best story on Hacker News: Don't fire your illustrator

Don't fire your illustrator
372 by todsacerdoti | 314 comments on Hacker News.


New best story on Hacker News: Amsterdam to use “noise cameras” against too loud cars

Amsterdam to use “noise cameras” against too loud cars
439 by cactusplant7374 | 353 comments on Hacker News.


New best story on Hacker News: Mister Rogers had a point – routinely greeting six neighbors maximizes wellbeing

Mister Rogers had a point – routinely greeting six neighbors maximizes wellbeing
406 by RickJWagner | 270 comments on Hacker News.


New best story on Hacker News: How to communicate when trust is low without digging yourself into a deeper hole

How to communicate when trust is low without digging yourself into a deeper hole
483 by zdw | 168 comments on Hacker News.


New best story on Hacker News: Retrieving 1TB of data from a faulty drive with the help of woodworking tools

Retrieving 1TB of data from a faulty drive with the help of woodworking tools
440 by jgrahamc | 149 comments on Hacker News.


New best story on Hacker News: Opendream: A layer-based UI for Stable Diffusion

Opendream: A layer-based UI for Stable Diffusion
465 by varunshenoy | 142 comments on Hacker News.


New best story on Hacker News: Things you forgot (or never knew) because of React

Things you forgot (or never knew) because of React
509 by inner_square | 599 comments on Hacker News.


New best story on Hacker News: OpenFarm – a free and open database and web application for gardening knowledge

OpenFarm – a free and open database and web application for gardening knowledge
519 by lasermatts | 50 comments on Hacker News.


New best story on Hacker News: Htmx is part of the GitHub Accelerator

Htmx is part of the GitHub Accelerator
593 by jjdeveloper | 290 comments on Hacker News.


New best story on Hacker News: How Is LLaMa.cpp Possible?

How Is LLaMa.cpp Possible?
581 by birriel | 193 comments on Hacker News.


New best story on Hacker News: The OpenTF Manifesto

The OpenTF Manifesto
541 by CathalMullan | 310 comments on Hacker News.


New best story on Hacker News: Firefox finally outperforming Google Chrome in SunSpider

Firefox finally outperforming Google Chrome in SunSpider
587 by marcodiego | 299 comments on Hacker News.


New best story on Hacker News: Stellar Developers

Stellar Developers
602 by ProblemSix | 1 comment on Hacker News.


New best story on Hacker News: Show HN: LLMs can generate valid JSON 100% of the time

Show HN: LLMs can generate valid JSON 100% of the time
532 by remilouf | 166 comments on Hacker News.
Outlines is a Python library that focuses on text generation with large language models. Brandon and I are not LLM experts; we started the project a few months ago because we wanted to understand better how the generation process works. Our original background is in probabilistic, relational and symbolic programming.

Recently we came up with a fast way to generate text that matches a regex ( https://ift.tt/ks1B6f3... ). The basic idea is simple: regular expressions have an equivalent deterministic finite automaton (DFA) representation. We can transform this DFA into a generative model: in each state we get a list of symbols which correspond to completions that partially match the regular expression. We mask the other symbols in the logits returned by a large language model, sample a new symbol and move to the next state. The subtlety is that language models work with tokens, not symbols, so we derive a new FSM whose alphabet is the model's vocabulary. We can do this in only one pass over the vocabulary; generating the token masks then only requires a dictionary lookup at each state. Our method blows other libraries like Microsoft's Guidance out of the water.

From there it was only a small leap to being able to generate text that follows a JSON schema ( https://ift.tt/7QpKsL9 ), or is parseable into a Pydantic model ( https://ift.tt/zfZJRbS ). The method works with union types, optional types, nested schemas, arrays, everything. It is guaranteed that the output is parseable.

I think it's cool, and I've spent a lot of time watching even tiny models output valid JSON over the weekend. Hope you will too. I look forward to feedback, bug reports, feature requests and discussions!

Edit: link to our pre-print explaining the method and how it can be extended to generate text that follows a context-free grammar: https://ift.tt/PkBThz5
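The state-machine masking idea can be illustrated with a toy example. This is a hand-rolled sketch, not Outlines' implementation: a two-state DFA for the regex `[0-9]+` and a hypothetical five-token vocabulary. At each state we keep only the tokens every character of which keeps the DFA alive; the rest would be masked out of the logits.

```python
# Toy DFA for the regex [0-9]+ : state 0 = start, state 1 = accepting.
# (Illustrative sketch only; Outlines compiles arbitrary regexes to DFAs.)
DIGITS = set("0123456789")

def step(state: int, ch: str):
    """One DFA transition; None means the character is not allowed."""
    return 1 if ch in DIGITS else None

VOCAB = ["12", "3", "a", "4b", "07"]  # hypothetical token vocabulary

def allowed_tokens(state: int) -> list[str]:
    """Tokens that keep the DFA alive from `state`; in a real decoder the
    complement of this set is masked out of the model's logits."""
    keep = []
    for tok in VOCAB:
        s = state
        for ch in tok:
            s = step(s, ch)
            if s is None:
                break
        else:
            keep.append(tok)
    return keep

print(allowed_tokens(0))  # ['12', '3', '07'] — 'a' and '4b' are masked out
```

Precomputing `allowed_tokens` for every DFA state is the "one pass over the vocabulary" mentioned above; decoding then only needs a dictionary lookup per step.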

New best story on Hacker News: Squeeze the hell out of the system you have

Squeeze the hell out of the system you have
674 by sbmsr | 365 comments on Hacker News.


New best story on Hacker News: Azure ChatGPT: Private and secure ChatGPT for internal enterprise use

Azure ChatGPT: Private and secure ChatGPT for internal enterprise use
727 by taubek | 264 comments on Hacker News.


New best story on Hacker News: Vim Boss

Vim Boss
657 by bpierre | 58 comments on Hacker News.


New best story on Hacker News: CNET is deleting old articles to try to improve its Google Search ranking

CNET is deleting old articles to try to improve its Google Search ranking
616 by mikece | 384 comments on Hacker News.


New best story on Hacker News: Postgres Language Server

Postgres Language Server
822 by kiwicopple | 101 comments on Hacker News.
hey HN. this is a Language Server[0] designed specifically for Postgres. a language server adds features to IDEs (VSCode, NeoVim, etc) - features like auto-complete, go-to-definition, or documentation on hover.

there have been previous attempts at adding Postgres support to code editors. usually these attempts implement a generic SQL parser and then offer various "flavours" of SQL. this attempt is different because it uses the actual Postgres parser to do the heavy lifting. this is done via libpg_query, an excellent C library for accessing the PostgreSQL parser outside of the server. we feel this is a better approach because it gives developers 100% confidence in the parser, and it allows us to keep up with the rapid development of Postgres.

this is still in early development, and mostly useful for testers/collaborators. the majority of the work is still ahead, but we've verified that the approach works. we're making it public now so that we can develop it in the open with input from the community.

a lot of the credit belongs to pganalyze[1] for their work on libpg_query, and to psteinroe ( https://ift.tt/ukdlQ8I ), who is the creator and maintainer.

[0] LSP: https://ift.tt/K7U9exJ
[1] pganalyze: https://pganalyze.com/
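For readers unfamiliar with how an editor talks to a language server: LSP messages are JSON-RPC payloads preceded by a `Content-Length` header, per the LSP base protocol. A minimal framing sketch, independent of this particular project:

```python
import json

def lsp_frame(payload: dict) -> bytes:
    """Frame a JSON-RPC message as the Language Server Protocol requires:
    a Content-Length header, a blank line, then the UTF-8 JSON body."""
    body = json.dumps(payload).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

# The first request an editor sends to any language server is `initialize`.
msg = lsp_frame({"jsonrpc": "2.0", "id": 1, "method": "initialize",
                 "params": {"capabilities": {}}})
print(msg[:20])
```

The server replies with the same framing, advertising which capabilities (completion, hover, go-to-definition) it supports.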

New best story on Hacker News: Bram Moolenaar Passed Away

Bram Moolenaar Passed Away
1147 by wufocaculura | 135 comments on Hacker News.


New best story on Hacker News: Most promoted and blocked domains among Kagi Search users

Most promoted and blocked domains among Kagi Search users
793 by tech234a | 384 comments on Hacker News.


New best story on Hacker News: Successful room temperature ambient-pressure magnetic levitation of LK-99

Successful room temperature ambient-pressure magnetic levitation of LK-99
752 by spekcular | 317 comments on Hacker News.


New best story on Hacker News: Observation of zero resistance above 100 K in Pb₁₀₋ₓCuₓ(PO₄)₆O

Observation of zero resistance above 100 K in Pb₁₀₋ₓCuₓ(PO₄)₆O
600 by segfaultbuserr | 264 comments on Hacker News.


New best story on Hacker News: Google’s Plan to DRM the Web Goes Against Everything Google Once Stood For

Google’s Plan to DRM the Web Goes Against Everything Google Once Stood For
586 by g0xA52A2A | 191 comments on Hacker News.


New best story on Hacker News: LK-99: Team of Southeast University observed zero resistance below 110 K

LK-99: Team of Southeast University observed zero resistance below 110 K
568 by thecopy | 309 comments on Hacker News.


New best story on Hacker News: Electronic Structure of LK-99

Electronic Structure of LK-99
477 by spekcular | 357 comments on Hacker News.


New best story on Hacker News: I'm betting on HTML

I'm betting on HTML
578 by catskull | 231 comments on Hacker News.


New best story on Hacker News: ChatControl: EU wants to scan all private messages, even in encrypted apps

ChatControl: EU wants to scan all private messages, even in encrypted apps
942 by Metalhearf | 515 comments on Hacker News.