Tuesday, December 1, 2015

Artificial Intelligence Simplifies the Search for Relevant Scientific Literature




We’ve all been there, spending hours on end searching the Internet for scientific papers. It feels like many scholarly search engines have the hit accuracy of Yahoo! circa 1998.
Needless to say, engineers and researchers have wasted many hours playing with keywords to find a stack of papers, and once they have that stack, another few hours sifting through abstracts to see whether the papers are useful, let alone on-topic.



To improve the quality of research journal searches, the Allen Institute for Artificial Intelligence (AI2) has released its free Semantic Scholar service. Semantic Scholar automatically searches the Internet for the millions of scientific papers published every year and categorizes them into usable topics.
Similar to Google, Semantic Scholar crawls the Internet using data-mining techniques to find publicly available scientific papers. Using computer vision tools, Semantic Scholar extracts the text, diagrams and captions for indexing and contextual determination. Finally, the tool uses natural language processing to filter the papers, extract who cites whom and determine each paper’s quality.
Currently, the service has sifted through three million computer science papers and will continue to add categories in the future.
"Semantic Scholar is a first step toward AI-based discovery engines that will be able to connect the dots between disparate studies to identify novel hypotheses and suggest experiments that would otherwise be missed," said Oren Etzioni, CEO at AI2. “Our goal is to enable researchers to find answers to some of science's thorniest problems."
The mobile-ready Semantic Scholar interface has the functions typical of scientific journal search engines: users can filter results by author, publication, topic and date, which is standard for scholarly search engines.
However, users can also see who has cited each paper, a useful tool found only in some of the more advanced science search engines. Additionally, Semantic Scholar offers the rare ability to give users direct access to the figures and findings in a paper.
At the end of the day, though, what really sets this search engine apart is its data mining and artificial intelligence capabilities.
"No one can keep up with the explosive growth of scientific literature," said Etzioni. "Which papers are most relevant? Which are considered the highest quality? Is anyone else working on this specific or related problem? Now, researchers can begin to answer these questions in seconds, speeding research and solving big problems faster."

I just wonder where this tool was during my thesis literature review.

Will you be using Semantic Scholar? What is your favorite scientific journal search engine?

Monday, November 30, 2015

Decision-Making Problems by Means of Fuzzy Logic in MATLAB








DECISION MAKING PROBLEMS IN MATLAB

The M-file BF.m performs the calculation; see Program 1.

% BF.m - fuzzy decision-making cascade (Program 1)
clear all
% Evaluate sub-block B1 from inputs I3a, I3b, I3c
B1v = readfis('B1.fis');
UdajB1 = input('Input values in the form [I3a; I3b; I3c]: ');
VyhB1 = evalfis(UdajB1, B1v);
% Evaluate sub-block B2 from inputs I4a, I4b
B2v = readfis('B2.fis');
UdajB2 = input('Input values in the form [I4a; I4b]: ');
VyhB2 = evalfis(UdajB2, B2v);
% The final block BF combines I1, I2 with the outputs of B1 and B2
BFv = readfis('BF.fis');
UdajBF = input('Input values in the form [I1; I2]: ');
UdajBF(3) = VyhB1;
UdajBF(4) = VyhB2;
VyhBF = evalfis(UdajBF, BFv);
% Map the crisp output to one of three decisions
if VyhBF < 0.5
    'Reject'
elseif VyhBF < 0.8
    'Monitor'
else
    'Accept'
end
% Open the FIS viewers and editors for inspection
fuzzy(BFv)
mfedit(BFv)
ruleedit(BFv)
surfview(BFv)
ruleview(BFv)


The calculation is demonstrated for the inputs I1, I2, I3a, I3b, I3c, I4a and I4b set to the values 0, 1 and 0.5; the corresponding results are Reject, Accept and Monitor.

Input values in the form [I3a; I3b; I3c]: [0;0;0]
Input values in the form [I4a; I4b]: [0;0]
Input values in the form [I1;I2]: [0;0]
ans =Reject
Input values in the form [I3a; I3b; I3c]: [1;1;1]
Input values in the form [I4a; I4b]: [1;1]
Input values in the form [I1;I2]: [1;1]
ans =Accept
Input values in the form [I3a; I3b; I3c]: [0.5; 0.5; 0.5]
Input values in the form [I4a; I4b]: [0.5; 0.5]
Input values in the form [I1;I2]: [0.5; 0.5]
ans =Monitor

Saturday, October 24, 2015

Guidelines for Manuscript Writing




Getting published: What distinguishes a good manuscript from a bad one? from Elsevier



Guidelines for reviewing manuscripts, from Elsevier

How to review manuscripts — your ultimate checklist

Quick Guides on the Elsevier Publishing Campus

 

Friday, August 28, 2015

PYTHON PACKAGES FOR DATA MINING





The smart thing is to use the same hammer to solve whatever problem you come across. In the same way, when we set out to solve a data mining problem we will face many issues, but we can solve them by using Python intelligently.


Before stepping directly to Python packages, let me clear up any doubts you may have about why you should be using Python.

WHY PYTHON ?

We all know that Python is a powerful programming language, but what does that mean, exactly? What makes Python a powerful programming language?

PYTHON IS EASY

Python has gained a universal reputation for being easy to learn. Its syntax is designed to be easily readable, and it enjoys significant popularity in scientific computing, where the people working in the field are scientists first and programmers second.

PYTHON IS EFFICIENT

Nowadays we work with bulk amounts of data, popularly known as big data. The more data you have to process, the more important it becomes to manage the memory you use, and here Python works very efficiently.

PYTHON IS FAST

We all know Python is an interpreted language, so we may think that it is slow, but some amazing work has been done over the past years to improve Python’s performance. My point is that if you want to do high-performance computing, Python is a viable option today.
I hope I have cleared your doubts about “Why Python?”, so let me jump to the Python packages for data mining.

NUMPY

About:
NumPy is the fundamental package for scientific computing with Python. NumPy is an extension to the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large library of high-level mathematical functions to operate on these arrays. The ancestor of NumPy, Numeric, was originally created by Jim Hugunin with contributions from several other developers. In 2005, Travis Oliphant created NumPy by incorporating features of the competing Numarray into Numeric, with extensive modifications.
Original author(s): Travis Oliphant
Developer(s): Community project
Initial release: As Numeric, 1995; as NumPy, 2006
Stable release: 1.9.0 / 7 September 2014
Written in: Python, C
Operating system: Cross-platform
Type: Technical computing
License: BSD-new license
Website: www.numpy.org
Installing NumPy:
If Python is not installed on your computer, please install it first.
Installing NumPy on Linux
Open your terminal and copy these commands:
sudo apt-get update
sudo apt-get install python-numpy
Sample NumPy code using the reshape function:

import numpy as np
a = np.arange(12)        # the integers 0..11
a = a.reshape(3, 2, 2)   # reshape into 3 blocks of 2x2
print(a)
Script output
[[[ 0 1]
[ 2 3]]
[[ 4 5]
[ 6 7]]
[[ 8 9]
[10 11]]]
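Beyond reshaping, most of NumPy’s value comes from the vectorized, high-level math functions mentioned above, which operate on whole arrays without explicit loops. A minimal sketch (values chosen only for illustration):

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
print(x + y)         # elementwise addition: [ 5.  7.  9.]
print(np.dot(x, y))  # dot product: 32.0
print(np.sqrt(x))    # vectorized square root, no explicit loop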

SCIPY

About:
SciPy (pronounced “Sigh Pie”) is open-source software for mathematics, science, and engineering. The SciPy library depends on NumPy, which provides convenient and fast N-dimensional array manipulation. The SciPy library is built to work with NumPy arrays, and provides many user-friendly and efficient numerical routines, such as routines for numerical integration and optimization. Together, they run on all popular operating systems, are quick to install, and are free of charge. NumPy and SciPy are easy to use, but powerful enough to be depended upon by some of the world’s leading scientists and engineers. If you need to manipulate numbers on a computer and display or publish the results, SciPy is the tool for the job.
Original author(s): Travis Oliphant, Pearu Peterson, Eric Jones
Developer(s): Community library project
Stable release: 0.14.0 / 3 May 2014
Written in: Python, Fortran, C, C++
Operating system: Cross-platform
Type: Technical computing
License: BSD-new license
Website: www.scipy.org
Installing SciPy on Linux
Open your terminal and copy these commands:
sudo apt-get update
sudo apt-get install python-scipy
Sample SciPy code

from scipy import special, optimize
import numpy as np
import matplotlib.pyplot as plt

# Find the first maximum of the Bessel function J3 by minimizing its negative
f = lambda x: -special.jv(3, x)
sol = optimize.minimize(f, 1.0)

# Plot the function and mark the maximum that was found
x = np.linspace(0, 10, 5000)
plt.plot(x, special.jv(3, x), '-', sol.x, -sol.fun, 'o')
plt.savefig('plot.png', dpi=96)
Script output: a plot of the Bessel function J3 on [0, 10] with its first maximum marked, saved as plot.png.
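The numerical integration routines mentioned above are just as direct to use; a minimal sketch with scipy.integrate (assuming SciPy is installed):

from scipy import integrate
import numpy as np

# Integrate sin(x) from 0 to pi; the exact answer is 2
value, abs_error = integrate.quad(np.sin, 0, np.pi)
print(value, abs_error)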

PANDAS

About:
Pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language. It is already well on its way toward this goal.
Pandas is well suited for many different kinds of data:
  • Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet.
  • Ordered and unordered (not necessarily fixed-frequency) time series data.
  • Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels.
  • Any other form of observational / statistical data sets. The data actually need not be labeled at all to be placed into a pandas data structure.
Installing Pandas in Linux
Open your terminal and copy these commands:
sudo apt-get update
sudo apt-get install python-pandas
Sample Pandas code for a Pandas Series

import numpy as np
import pandas as pd

values = np.array([2.0, 1.0, 5.0, 0.97, 3.0, 10.0, 0.0599, 8.0])
ser = pd.Series(values)
print(ser)
Script output
0 2.0000
1 1.0000
2 5.0000
3 0.9700
4 3.0000
5 10.0000
6 0.0599
7 8.0000
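A Series is one-dimensional; for the “tabular data with heterogeneously-typed columns” case listed above, the DataFrame is the workhorse. A minimal sketch (the column names and values are made up):

import pandas as pd

# Hypothetical table with heterogeneously-typed columns
df = pd.DataFrame({
    'name': ['ann', 'bob', 'carl'],
    'score': [0.97, 3.0, 10.0],
    'passed': [True, True, False],
})
print(df.dtypes)         # one dtype per column
print(df[df['passed']])  # boolean indexing selects the passing rows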

MATPLOTLIB


About:
matplotlib is a plotting library for the Python programming language and its NumPy numerical mathematics extension. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like wxPython, Qt, or GTK+. There is also a procedural “pylab” interface based on a state machine (like OpenGL), designed to closely resemble that of MATLAB. SciPy makes use of matplotlib.
Original author(s): John Hunter
Developer(s): Michael Droettboom, et al.
Stable release: 1.4.2 / 26 October 2014
Written in: Python
Operating system: Cross-platform
Type: Plotting
License: matplotlib license
Website: matplotlib.org
Installing Matplotlib on Linux
Open your terminal and copy these commands:
sudo apt-get update
sudo apt-get install python-matplotlib
Sample Matplotlib code to create a histogram

import numpy as np
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt

# example data
mu = 100     # mean of distribution
sigma = 15   # standard deviation of distribution
x = mu + sigma * np.random.randn(10000)
num_bins = 50

# the histogram of the data
n, bins, patches = plt.hist(x, num_bins, normed=1, facecolor='green', alpha=0.5)

# add a 'best fit' line
y = mlab.normpdf(bins, mu, sigma)
plt.plot(bins, y, 'r--')
plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.title(r'Histogram of IQ: $\mu=100$, $\sigma=15$')

# Tweak spacing to prevent clipping of ylabel
plt.subplots_adjust(left=0.15)
plt.show()
Script output: a histogram of the 10,000 samples with the fitted normal density curve overlaid.
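The script above uses the procedural pyplot interface; the object-oriented API mentioned earlier makes the figure and axes objects explicit, which scales better to multi-panel plots. A minimal sketch (the data points are made up):

import matplotlib.pyplot as plt

# Object-oriented API: create the figure and axes objects explicitly
fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [0, 1, 4, 9])
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('Object-oriented matplotlib')
fig.savefig('oo_example.png', dpi=96)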

IPYTHON

IPython is a command shell for interactive computing in multiple programming languages, originally developed for the Python programming language, that offers enhanced introspection, rich media, additional shell syntax, tab completion, and rich history. IPython currently provides the following features:
  • Powerful interactive shells (terminal and Qt-based).
  • A browser-based notebook with support for code, text, mathematical expressions, inline plots and other rich media.
  • Support for interactive data visualization and use of GUI toolkits.
  • Flexible, embeddable interpreters to load into one’s own projects.
  • Easy to use, high performance tools for parallel computing.
Original author(s): Fernando Perez and others
Stable release: 2.3 / 1 October 2014
Written in: Python, JavaScript, CSS, HTML
Operating system: Cross-platform
Type: Shell
License: BSD
Website: www.ipython.org
Installing IPython on Linux
Open your terminal and copy these commands:
sudo apt-get update
sudo pip install ipython
Sample IPython code
This piece of code plots a demonstration of the integral as the area under a curve.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

def func(x):
    return (x - 3) * (x - 5) * (x - 7) + 85

a, b = 2, 9  # integral limits
x = np.linspace(0, 10)
y = func(x)
fig, ax = plt.subplots()
plt.plot(x, y, 'r', linewidth=2)
plt.ylim(ymin=0)

# Make the shaded region
ix = np.linspace(a, b)
iy = func(ix)
verts = [(a, 0)] + list(zip(ix, iy)) + [(b, 0)]
poly = Polygon(verts, facecolor='0.9', edgecolor='0.5')
ax.add_patch(poly)
plt.text(0.5 * (a + b), 30, r"$\int_a^b f(x)\mathrm{d}x$",
         horizontalalignment='center', fontsize=20)
plt.figtext(0.9, 0.05, '$x$')
plt.figtext(0.1, 0.9, '$y$')

# Hide the top and right spines and show ticks only where needed
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
ax.set_xticks((a, b))
ax.set_xticklabels(('$a$', '$b$'))
ax.set_yticks([])
plt.show()
Script output: the curve with the shaded integral region between a and b.
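Note that the script itself is ordinary matplotlib; what IPython adds is the interactive layer around it. A minimal sketch of the introspection and “magic” features listed above, written as a commented transcript (session output elided):

# Inside an IPython session:
# In [1]: import numpy as np
# In [2]: np.arange?                      # '?' shows the docstring (introspection)
# In [3]: %timeit np.arange(1000).sum()   # the %timeit magic benchmarks a statement
# In [4]: %history                        # rich history of the session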

SCIKIT-LEARN

The scikit-learn project started as scikits.learn, a Google Summer of Code project by David Cournapeau. Its name stems from the notion that it is a “SciKit” (SciPy Toolkit), a separately-developed and distributed third-party extension to SciPy. The original codebase was later extensively rewritten by other developers. Of the various scikits, scikit-learn as well as scikit-image were described as “well-maintained and popular” in November 2012.
Original author(s): David Cournapeau
Initial release: June 2007
Stable release: 0.15.1 / August 1, 2014
Written in: Python, Cython, C, C++
Operating system: Linux, Mac OS X, Microsoft Windows
Type: Library for machine learning
License: BSD License
Website: scikit-learn.org
Installing scikit-learn on Linux
Open your terminal and copy these commands:
sudo apt-get update
sudo apt-get install python-sklearn
Sample Scikit-learn code

import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model

# Load the diabetes dataset
diabetes = datasets.load_diabetes()

# Use only one feature
diabetes_X = diabetes.data[:, np.newaxis]
diabetes_X_temp = diabetes_X[:, :, 2]

# Split the data into training/testing sets
diabetes_X_train = diabetes_X_temp[:-20]
diabetes_X_test = diabetes_X_temp[-20:]

# Split the targets into training/testing sets
diabetes_y_train = diabetes.target[:-20]
diabetes_y_test = diabetes.target[-20:]

# Create linear regression object
regr = linear_model.LinearRegression()

# Train the model using the training sets
regr.fit(diabetes_X_train, diabetes_y_train)

# The coefficients
print('Coefficients: \n', regr.coef_)

# The mean squared error on the test set
print("Residual sum of squares: %.2f"
      % np.mean((regr.predict(diabetes_X_test) - diabetes_y_test) ** 2))

# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(diabetes_X_test, diabetes_y_test))

# Plot outputs
plt.scatter(diabetes_X_test, diabetes_y_test, color='black')
plt.plot(diabetes_X_test, regr.predict(diabetes_X_test), color='blue',
         linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
Script output
Coefficients:
[ 938.23786125]
Residual sum of squares: 2548.07
Variance score: 0.47
(Scatter plot of the test data with the fitted regression line.)
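Regression is only one corner of the library; as a second minimal sketch, a k-nearest-neighbors classifier on the bundled iris dataset (the split and parameters are chosen arbitrarily for illustration):

from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier

# Hold out the last 10 iris samples as a quick sanity check
iris = datasets.load_iris()
X_train, y_train = iris.data[:-10], iris.target[:-10]
X_test, y_test = iris.data[-10:], iris.target[-10:]

# Fit a 3-nearest-neighbors classifier and score it on the held-out rows
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # fraction of correct predictions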
I have explained the packages we are going to use in coming posts to solve some interesting problems.
Please leave your comment if you have any other Python data mining packages to add to this list.
Originally published here.

Data Mining - Fruitful and Fun

Orange Tutorial

http://www.orange.biolab.si/tutorial/rst/index.html

Sunday, July 26, 2015

R vs Python






A recurring question is whether one should use R or Python for day-to-day data analysis tasks. Both Python and R are among the most popular languages for data analysis, and each has its supporters and opponents. While Python is often praised for being a general-purpose language with an easy-to-understand syntax, R’s functionality was developed with statisticians in mind, giving it field-specific advantages such as great features for data visualization.

Our new infographic “Data Science Wars: R vs Python” is therefore for everyone interested in how these two (statistical) programming languages relate to each other. The infographic explores the strengths of R over Python and vice versa, and aims to provide a basic comparison between the two languages from a data science and statistics perspective.


Sunday, July 19, 2015

The Top 10 research papers in computer science by Mendeley readership






With the Binary Battle under way to promote applications built on the Mendeley API (now including PLoS as well), let’s take a look at the data to see what people have to work with. The analysis focuses on our second-largest discipline, Computer Science. Biological Sciences is the largest, but I started with this one so that I could look at the data with fresh eyes, and also because it’s got some really cool papers to talk about.

It is a fascinating list of topics, with many of the expected fundamental papers, like Shannon’s theory of information and the Google paper, a strong showing from MapReduce and machine learning, but also some interesting hints that augmented reality may become an actual reality soon.




 
1. Latent Dirichlet Allocation (available full-text)
LDA is a means of classifying objects, such as documents, based on their underlying topics. I was surprised to see this paper as number one instead of Shannon’s information theory paper (#7) or the paper describing the concept that became Google (#3). It turns out that interest in this paper is very strong among those who list artificial intelligence as their subdiscipline. In fact, AI researchers contributed the majority of readership to 6 out of the top 10 papers. Presumably, those interested in popular topics such as machine learning list themselves under AI, which explains the strength of this subdiscipline, whereas papers like the MapReduce one or the Google paper appeal to a broad range of subdisciplines, giving those papers smaller numbers spread across more subdisciplines. Professor Blei is also a bit of a superstar, so that didn’t hurt. (The irony of a manually-categorized list with an LDA paper at the top has not escaped us.)
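To make the idea concrete, here is a minimal sketch of fitting LDA on a toy corpus with the gensim library (my choice of library; the paper itself is library-agnostic, and the tiny corpus is made up):

from gensim import corpora, models

# Toy corpus: each document is a list of tokens (made-up data)
texts = [
    ['cat', 'dog', 'pet', 'dog'],
    ['python', 'code', 'bug', 'code'],
    ['dog', 'pet', 'vet'],
    ['bug', 'python', 'compile'],
]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# Fit a 2-topic LDA model and inspect the discovered topics
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=20)
for topic in lda.print_topics():
    print(topic)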

2. MapReduce: Simplified Data Processing on Large Clusters
It’s no surprise to see this in the Top 10 either, given the huge appeal of this parallelization technique for breaking down huge computations into easily executable and recombinable chunks. The importance of the monolithic “Big Iron” supercomputer has been on the wane for decades. The interesting thing about this paper is that it had some of the lowest within-subdiscipline readership scores of the top papers, but folks from across the entire spectrum of computer science are reading it. This is perhaps expected for such a general-purpose technique, but given the above it’s strange that there are no AI readers of this paper at all.
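To see why the model appeals so broadly, here is a minimal single-machine sketch of the canonical word-count example, with the map, shuffle, and reduce phases written as plain Python (a toy stand-in, not the paper’s distributed implementation):

from collections import defaultdict

documents = ['the cat sat', 'the dog sat']

# Map phase: emit (word, 1) pairs from each document independently
pairs = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: group the emitted values by key
groups = defaultdict(list)
for key, value in pairs:
    groups[key].append(value)

# Reduce phase: combine the values for each key
counts = {key: sum(values) for key, values in groups.items()}
print(counts)  # {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}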

3. The Anatomy of a Large-Scale Hypertextual Web Search Engine
In this paper, Google founders Sergey Brin and Larry Page discuss how Google was created and how it initially worked. This is another paper with high readership across a broad swath of disciplines, including AI, that wasn’t dominated by any one discipline. I would expect that the largest share of readers have it in their library mostly out of curiosity rather than direct relevance to their research. It’s a fascinating piece of history related to something that has now become part of our everyday lives.
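The paper’s central algorithm, PageRank, fits in a few lines; a minimal power-iteration sketch on a made-up three-page link graph (damping factor 0.85, the value the paper suggests):

# Made-up link graph: page -> pages it links to
links = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a']}
pages = list(links)
damping = 0.85
rank = {p: 1.0 / len(pages) for p in pages}

# Power iteration: repeatedly redistribute rank along outgoing links
for _ in range(50):
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

print(rank)  # 'c' accumulates the most rank in this toy graph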

4. Distinctive Image Features from Scale-Invariant Keypoints
This paper was new to me, although I’m sure it’s not new to many of you. It describes how to identify objects in a video stream without regard to how near or far away they are or how they’re oriented with respect to the camera. AI again drove the popularity of this paper in large part, and to understand why, think “Augmented Reality”. AR is the futuristic idea most familiar to the average sci-fi enthusiast as Terminator-vision. Given the strong interest in the topic, AR could be closer than we think, but we’ll probably use it to layer Groupon deals over shops we pass by instead of building unstoppable fighting machines.

5. Reinforcement Learning: An Introduction (available full-text)
This is another machine learning paper, and its presence in the top 10 is primarily due to AI, with a small contribution from folks listing neural networks as their discipline, most likely because the paper was published in IEEE Transactions on Neural Networks. Reinforcement learning is essentially a technique that borrows from biology: the behavior of an intelligent agent is controlled by the amount of positive stimuli, or reinforcement, it receives in an environment where there are many different interacting positive and negative stimuli. This is how we’ll teach the robots behaviors in a human fashion, before they rise up and destroy us.
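A minimal sketch of the reinforcement idea, using a two-armed bandit with an epsilon-greedy update rule (all numbers are made up for illustration; this is a toy, not the book’s full framework):

import random

# Two actions with unknown reward probabilities (the "environment")
true_reward_prob = [0.3, 0.7]
estimates = [0.0, 0.0]   # agent's running estimate of each action's value
counts = [0, 0]
epsilon = 0.1            # exploration rate

for step in range(1000):
    # Explore occasionally, otherwise exploit the best-looking action
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_reward_prob[action] else 0.0
    counts[action] += 1
    # Incremental mean update: the estimate moves toward the observed reward
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # should approach [0.3, 0.7]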

6. Toward the Next Generation of Recommender Systems
Popular among AI and information retrieval researchers, this paper discusses recommendation algorithms and classifies them into collaborative, content-based, or hybrid. While I wouldn’t call this paper a groundbreaking event of the caliber of the Shannon paper above, I can certainly understand why it makes such a strong showing here. If you’re using Mendeley, you’re using both collaborative and content-based discovery methods!

7. A Mathematical Theory of Communication (available full-text)
Now we’re back to more fundamental papers. I would really have expected this to be at least number 3 or 4, but the strong showing by the AI discipline for the machine learning papers in spots 1, 4, and 5 pushed it down. This paper discusses the theory of sending communications down a noisy channel and demonstrates a few key engineering parameters, such as entropy, which quantifies the average information content over the possible states of a given communication. It’s one of the most fundamental papers of computer science, founding the field of information theory and enabling the development of the very tubes through which you received this web page you’re reading now. It’s also the first place the word “bit”, short for binary digit, appears in the published literature.
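As a worked example of the paper’s central quantity, Shannon entropy H = -sum(p_i * log2(p_i)) in a few lines of Python:

import math

def entropy(probs):
    # Shannon entropy in bits: H = -sum(p * log2(p))
    return -sum(p * math.log(p, 2) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit per flip
print(entropy([0.9, 0.1]))   # biased coin: about 0.469 bits
print(entropy([0.25] * 4))   # uniform over 4 symbols: 2.0 bits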

8. The Semantic Web (available full-text)
In The Semantic Web, Sir Tim Berners-Lee, the inventor of the World Wide Web, describes his vision for the web of the future. Now, 10 years later, it’s fascinating to look back through it and see on which points the web has delivered on its promise and how far away we still remain in so many others. This is different from the other papers above in that it’s a descriptive piece, not primary research, but it still deserves its place in the list, and readership will only grow as we get ever closer to his vision.

9. Convex Optimization (available full-text)
This is a very popular book on a widely used optimization technique in signal processing. Convex optimization tries to find the provably optimal solution to an optimization problem, as opposed to a merely nearby local maximum or minimum. While this seems like a highly specialized niche area, it’s of importance to machine learning and AI researchers, so it was able to pull in a nice readership on Mendeley. Professor Boyd has a very popular set of video classes at Stanford on the subject, which probably gave this a little boost as well. The point here is that print publications aren’t the only way of communicating your ideas. Videos of techniques at SciVee or JoVE or recorded lectures can really help spread awareness of your research.
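To illustrate the “provably optimal” point, a minimal sketch minimizing a convex quadratic with scipy.optimize (the function is made up; for a convex problem, any local minimum found is the global one):

from scipy.optimize import minimize

# Convex quadratic: f(x, y) = (x - 1)^2 + (y + 2)^2, minimized at (1, -2)
f = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
result = minimize(f, [0.0, 0.0])  # any starting point converges to the optimum
print(result.x)                   # close to [ 1. -2.]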

10. Object Recognition from Local Scale-Invariant Features
This is another paper on the same topic as paper #4, and it’s by the same author. Looking across subdisciplines as we did here, it’s not surprising to see two related papers, of interest to the main driving discipline, appear twice. Adding the readers from this paper to the #4 paper would be enough to put it in the #2 spot, just below the LDA paper.

2015's Hottest Topics In Computer Science Research

There are several major closed-form challenges in Computer Science, such as P vs. NP.
However, the hottest topics are broad and intentionally defined with some vagueness, to encourage out-of-the-box thinking. For such topics, zooming in on the right questions often marks significant progress in itself.

Here’s my list for 2015.

Abundant-data applications, algorithms, and architectures are a meta-topic that includes research avenues such as data mining (quickly finding relatively simple patterns in massive amounts of loosely structured data, evaluating and labeling data, etc.), machine learning (building mathematical models that represent structure and statistical trends in data, with good predictive properties), and hardware architectures to process more data than is possible today.

Artificial intelligence and robotics - broadly, figuring out how to formalize human capabilities that currently appear beyond the reach of computers and robots, then making computers and robots more efficient at them. Self-driving cars and swarms of search-and-rescue robots are a good illustration. In the past, once good models were found for something (such as computer-aided design of electronic circuits), this research moved into a different field – the design of efficient algorithms, statistical models, computing hardware, etc.

Bio-informatics and other uses of CS in biology, biomedical engineering, and medicine, including systems biology (modeling interactions of multiple systems in a living organism, including immune systems and cancer development), computational biophysics (modeling and understanding mechanical, electrical, and molecular-level interactions inside an organism), and computational neurobiology (understanding how organisms process incoming information and react to it, control their bodies, store information, and think). There is a very large gap between what is known about brain structure and the functional capabilities of a living brain – closing this gap is one of the grand challenges in modern science and engineering. DNA analysis and genetics have also become computer-based in the last 20 years. Biomedical engineering is another major area of growth, where microprocessor-based systems can monitor vital signs and even administer life-saving medications without waiting for a doctor. Computer-aided design of prosthetics is also very promising.

Computer-assisted education, especially at the high-school level. Even for CS, few high schools offer a competent curriculum, even in developed countries. Needed: cheat-proof automated support for exams and testing, essay grading, and generation of multiple-choice questions; support for learning specific skills, such as programming (immediate feedback on simple mistakes and suggestions on how to fix them, peer grading, style analysis).

Databases, data centers, information retrieval, and natural-language processing: collecting and storing massive collections of data and making them easily available (indexing, search), helping computers understand (the structure in) human-generated documents and artifacts of all kinds (speech, video, text, motion, biometrics), and helping people search for the information they need when they need it. There are many interactions with abundant-data applications here, as well as with human-computer interaction and networking.

Emerging technologies for computing hardware, communication, and sensing: new models of computation (such as optical and quantum computing) and figuring out what they are [not] good for. Best uses for three-dimensional integrated circuits and a variety of new memory chips. Modeling and using new types of electronic switches (memristors, devices using carbon nano-tubes, etc), quantum communication and cryptography, and a lot more.

Human-computer interaction covers human-computer interface design and focused techniques that allow computers to understand people (detect emotions, intent, level of skill), as well as the design of human-facing software (social networks) and hardware (talking smart-phones and self-driving cars).

Large-scale networking: high-performance hardware for data centers, mobile networking, support for more efficient multicast, multimedia, and high-level user-facing services (social networks), networking services for developing countries (without permanent high-bandwidth connections), and various policy issues (who should run the Internet and whether governments should control it). Outer-space communication networks. Network security (which I also list under Security) is also a big deal.

Limits of computation and communication at the level of problem types (some problems cannot be solved in principle!), algorithms (sometimes an efficient algorithm is unlikely to exist) and physical resources, especially space, time, energy and materials. This topic covers Complexity Theory from Theoretical CS, but also the practical obstacles faced by the designers of modern electronic systems, hinting at limits that have not yet been formalized.

Multimedia: graphics, audio (speech, music, ambient sound), video – analysis, compression, generation, playback, multi-channel communication etc. Both hardware and software are involved. Specific questions include scene analysis (describing what’s on the picture), comprehending movement, synthesizing realistic multimedia, etc.

Programming languages and environments: automated analysis of programs in terms of correctness and resource requirements, comparisons between languages, software support for languages (i.e., compilation), program optimization, support for parallel programming, domain-specific languages, interactions between languages, systems that assist programmers by inferring their intent.

Security of computer systems and support for digital democracy, including network-level security (intrusion detection and defense), OS-level security (anti-virus software), and physical security (biometrics, tamper-proof packaging, trusted computing on untrusted platforms); support for personal privacy (efficient and user-friendly encryption), free speech (file sharing, circumventing censors and network restrictions imposed by oppressive regimes), and issues related to electronic polls and voting. Security is also a major issue in the use of embedded systems and the Internet of Things (IoT).

Verification, proofs, and automated debugging of hardware designs, software, networking protocols, mathematical theorems, etc. This includes formal reasoning (proof systems and new types of logical arguments), finding bugs efficiently and diagnosing them, finding bug fixes, and confirming the absence of bugs (usually by means of automated theorem-proving).
If something is not listed, it may still be a very worthwhile topic, just not necessarily “hot” right now, or perhaps it is lurking in my blind spot.

Now that you have a long answer, let’s revisit the question! Hotness usually refers to how easy it is to make impact in the field and how impactful the field is likely to be in the broader sense. For example, solving P vs. NP would be impactful and outright awesome, but also extremely unlikely to happen any time soon. So, new researchers are advised to stay away from such an established challenge. Quantum computing is roughly in the same category, although apparently the media and the masses have not realized this. On the positive side, applied physicists are building interesting new devices, producing results that are worthwhile by themselves. So, quantum information processing is a hot area in applied physics, but not in computer design.