Name: FIDEL CACHEDA
Affiliation: Department of Information and Communications Technologies, University of A Coruña, A Coruña, Spain
Papers: 59
Collaborators: 52
Citations: 463
PageRank: 33.19
Referers: 964
Referees: 972
References: 715
Title | Citations | PageRank | Year
Early detection of cyberbullying on social media networks | 1 | 0.34 | 2021
A Content-Based Approach To Profile Expansion | 0 | 0.34 | 2020
Annotated Dataset for Anomaly Detection in a Data Center with IoT Sensors. | 0 | 0.34 | 2020
HOPE (High Order Profile Expansion) for the New User Problem on Recommender Systems. | 0 | 0.34 | 2020
Early Intrusion Detection for OS Scan Attacks | 1 | 0.37 | 2019
Analysis and Experiments on Early Detection of Depression. | 0 | 0.34 | 2018
Characterizing and Predicting Users' Behavior on Local Search Queries. | 1 | 0.35 | 2018
A Practical Application of a Dataset Analysis in an Intrusion Detection System | 0 | 0.34 | 2018
Click-through prediction when searching local businesses. | 0 | 0.34 | 2018
Click Through Rate Prediction for Local Search Results. | 2 | 0.36 | 2017
Advancing Network Flow Information Using Collaborative Filtering | 0 | 0.34 | 2017
Using Collaborative Filtering in a new domain: traffic analysis. | 0 | 0.34 | 2016
Information Retrieval and Recommender Systems. | 1 | 0.35 | 2015
Distributed and collaborative Web Change Detection system. | 2 | 0.37 | 2015
Queuing Theory-based Latency/Power Tradeoff Models for Replicated Search Engines. | 1 | 0.35 | 2015
Distributed architecture for k-nearest neighbors recommender systems | 3 | 0.37 | 2015
A self-adapting latency/power tradeoff model for replicated search engines | 9 | 0.51 | 2014
Soft-404 Pages, A Crawling Problem. | 0 | 0.34 | 2014
Using profile expansion techniques to alleviate the new user problem | 14 | 0.54 | 2013
Analysing Relevant Diseases from Iberian Tweets. | 0 | 0.34 | 2013
Using rating matrix compression techniques to speed up collaborative recommendations | 2 | 0.43 | 2013
Hybrid query scheduling for a replicated search engine | 6 | 0.43 | 2013
SAAD, a content based Web Spam Analyzer and Detector | 5 | 0.40 | 2013
Scheduling queries across replicas | 2 | 0.40 | 2012
A scale for crawler effectiveness on the client-side hidden web. | 1 | 0.35 | 2012
Architecture for a Garbage-less and Fresh Content Search Engine. | 1 | 0.37 | 2012
Analysing the Effectiveness of Crawlers on the Client-Side Hidden Web. | 2 | 0.36 | 2012
Analysis and detection of web spam by means of web content | 5 | 0.43 | 2012
Using Neighborhood Pre-computation to Increase Recommendation Efficiency. | 1 | 0.36 | 2012
The Spanish Web in Numbers - Main Features of the Spanish Hidden Web. | 0 | 0.34 | 2011
Improving k-nearest neighbors algorithms: practical application of dataset analysis | 1 | 0.36 | 2011
Comparison of collaborative filtering algorithms: Limitations of current techniques and proposals for scalable, high-performance recommender systems | 145 | 3.92 | 2011
Performance Evaluation of Large-scale Information Retrieval Systems Scaling Down. | 0 | 0.34 | 2010
Search shortcuts: a new approach to the recommendation of queries | 16 | 0.73 | 2009
Rembassy: Open Source Tool For Network Monitoring | 0 | 0.34 | 2009
Search shortcuts: driving users towards their goals | 2 | 0.36 | 2009
Search shortcuts using click-through data | 2 | 0.37 | 2009
Extracting lists of data records from semi-structured web pages | 41 | 1.34 | 2008
Performance comparison of clustered and replicated information retrieval systems | 4 | 0.46 | 2007
Open Source Tool for Management Network Information | 0 | 0.34 | 2007
Finding and Extracting Data Records from Web Pages | 9 | 0.48 | 2007
Performance analysis of distributed information retrieval architectures using an improved network simulation model | 18 | 0.78 | 2007
Crawling the content hidden behind web forms | 22 | 0.88 | 2007
Using clustering and edit distance techniques for automatic web data extraction | 7 | 0.46 | 2007
DeepBot: a focused crawler for accessing hidden web content | 13 | 0.70 | 2007
An automatic approach to displaying web applications as portlets | 2 | 0.43 | 2006
Hybrid Architecture for Web Search Systems Based on Hierarchical Taxonomies | 0 | 0.34 | 2006
A decision mechanism for the selective combination of evidence in topic distillation | 1 | 0.38 | 2006
A Task-Specific Approach For Crawling The Deep Web | 2 | 0.38 | 2006
A case study of distributed information retrieval architectures to index one terabyte of text | 28 | 1.22 | 2005