SMP 2017 Tutorials


Tutorial Speakers

Sun Yat-sen University  Yucheng Liang  Professor (Deputy Dean, Institute of National Governance)


Tutorial Topic: Theory and Methods of Computational Sociology
Tutorial Abstract: Unlike the survey questionnaires that traditional social science relies on, the digital behavioral traces generated on social networks are micro-level, heterogeneous, real-time, large-scale, and interconnected. On this basis, Internet big data, as well as combinations of traditional survey data with administrative big data, have become new research platforms that help scholars understand human behavior and social principles. Computational social science is a new interdisciplinary field, and much of its important work comes from computer science, physics, and mathematics. I will introduce these interdisciplinary methods, focusing on principles for combining traditional survey data with big data, social computing that bridges macro- and micro-level social structures, and research on social theory based on text data.
Speaker Bio: Yucheng Liang holds a B.S. in Physics and an M.A. in Sociology from Sun Yat-sen University and a Ph.D. in Sociology from the Hong Kong University of Science and Technology, and has been a visiting professor in the Department of Sociology at Johns Hopkins University. He is currently Professor of Sociology at Sun Yat-sen University, Deputy Dean of the Institute of National Governance, and Director of the Center for Social Science Surveys, and has led more than ten research projects, including a major project of the National Social Science Fund. He has received a second-class and a third-class prize of the Ministry of Education Award for Outstanding Achievements in the Social Sciences, and a first-class prize of the corresponding Guangdong Province award. He leads the computational social science group at Sun Yat-sen University. His research interests include social surveys and social governance based on government administrative big data.


Nanjing University, School of Journalism and Communication  Chengjun Wang  Associate Professor                 Beijing Normal University, School of Arts and Media  Lun Zhang  Associate Professor

    


Tutorial Topic: Computational Communication Research from the Perspective of Computational Social Science
Tutorial Abstract: Genes drove the leap forward in biology; currency was the key to the development of economics. What is the computable "gene" hidden in human communication behavior? Computational communication research is an important branch of computational social science. It seeks the computable genes of communication research: using communication network analysis, communication text mining, and data science as its main analytical tools, it collects and analyzes human communication behavior data at scale, mines the patterns and laws behind human communication behavior, and analyzes the generative mechanisms and basic principles behind those patterns. It can be widely applied in scenarios such as data journalism and computational advertising, and it emphasizes programming training, mathematical modeling, and computational thinking. This tutorial will introduce the concepts, scope, applications, and tools of computational communication research, and discuss how to carry out interdisciplinary collaboration as well as research strategies for the field.
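As a minimal illustration of what "collecting and analyzing human communication behavior data at scale" can look like in code, the sketch below builds a toy retweet network from hypothetical records and computes its in-degree distribution. The records and user names are invented for illustration and are not drawn from the speakers' work.

```python
from collections import Counter

# Hypothetical retweet records: (retweeter, original_author).
# In a real study these would be collected at scale, e.g. via platform APIs.
retweets = [
    ("u1", "a"), ("u2", "a"), ("u3", "a"), ("u4", "a"),
    ("u5", "b"), ("u6", "b"),
    ("u7", "c"),
]

# In-degree of each author = number of times they were retweeted.
in_degree = Counter(author for _, author in retweets)

# Degree distribution: how many authors have each in-degree.
# Heavy-tailed degree distributions are a typical pattern
# found in large-scale communication networks.
degree_dist = Counter(in_degree.values())
```

Here `in_degree["a"]` is 4 and the distribution is `{4: 1, 2: 1, 1: 1}`; with real data one would plot this distribution on log-log axes to inspect its tail.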
Speaker Bio: Chengjun Wang holds a Ph.D. in Communication. He is an Associate Professor in the School of Journalism and Communication at Nanjing University, Director of the Ogilvy Data Science Lab, and Deputy Director of the Computational Communication Research Center. He co-translated "Social Network Analysis: Methods and Practice" (2013) and co-authored "Computational Communication Research on Social Networks" (社交网络上的计算传播学, 2015). His research applies the perspective of computational social science to the analysis of human communication behavior, and his work has appeared in SSCI- and SCI-indexed journals such as Scientific Reports, PLoS ONE, Physica A, and Cyberpsychology. In 2014 he founded the computational communication portal computational-communication.com.
Lun Zhang holds a Ph.D. in Communication and is an Associate Professor in the Department of Digital Media at Beijing Normal University. Zhang's main research direction is new-media information diffusion studied with data mining methods: using communication network analysis, communication text mining, and data science as the main analytical tools to collect and analyze human communication behavior data at scale, mine the patterns and laws behind human communication behavior, and analyze the generative mechanisms and basic principles behind those patterns. Zhang has published 18 papers in SSCI-, SCI-, and CSSCI-indexed journals (5 in SSCI journals, 1 in an SCI journal, and 12 in CSSCI journals) and co-authored "Computational Communication Research on Social Networks" (社交网络上的计算传播学, Higher Education Press, 2015).


HEC Montreal & MILA  Jian Tang  Ph.D.


Tutorial Topic: Learning Representations of Large-scale Networks
Tutorial Abstract: Large-scale networks such as social networks, citation networks, the World Wide Web, and traffic networks are ubiquitous in the real world. Networks can also be constructed from text, time series, behavior logs, and many other types of data. Mining network data attracts increasing attention in academia and industry, covers a variety of applications, and influences the methodology of mining many other types of data. A prerequisite of network mining is an effective representation of networks, which largely determines the performance of downstream data mining tasks. Traditionally, networks are represented as adjacency matrices, which suffer from data sparsity and high dimensionality. Recently, there has been fast-growing interest in learning continuous and low-dimensional representations of networks. This is a challenging problem for multiple reasons: (1) network data (nodes and edges) are sparse, discrete, and globally interactive; (2) real-world networks are very large, usually containing millions of nodes and billions of edges; and (3) real-world networks are heterogeneous: edges can be directed, undirected, or weighted, and both nodes and edges may carry different semantics.
In this tutorial, we will introduce recent progress on learning continuous and low-dimensional representations of large-scale networks. This includes methods that learn the embeddings of nodes, methods that learn representations of larger graph structures (e.g., an entire network), and methods that lay out very large networks in extremely low-dimensional (2D or 3D) spaces. We will introduce methods for learning different types of node representations, which can be used as features for node classification, community detection, link prediction, and network visualization. We will introduce end-to-end methods that learn the representation of an entire graph structure with deep neural networks by directly optimizing tasks such as information cascade prediction, chemical compound classification, and protein structure classification. We will also highlight open-source implementations of these techniques.
Speaker Bio: Dr. Jian Tang will join the Department of Decision Sciences at HEC Montreal as an assistant professor starting this fall. He will also be a faculty member of the Montreal Institute for Learning Algorithms (MILA), the deep learning group led by deep learning pioneer Yoshua Bengio. His research interests are deep learning, reinforcement learning, and statistical topic modeling, with various applications. He was a research fellow at the University of Michigan and Carnegie Mellon University. He received his Ph.D. from Peking University and was an associate researcher at Microsoft Research Asia. He received the best paper award at ICML'14 and was nominated for the best paper award at WWW'16. He is a PC member of many prestigious conferences, such as IJCAI, AAAI, ACL, EMNLP, WWW, WSDM, and KDD.


Tsinghua University  Peng Cui  Associate Professor


Tutorial Topic: Network Embedding: Enabling Network Analytics and Inference in Vector Space
Tutorial Abstract: Nowadays, ever larger and more sophisticated networks are used in an ever wider range of applications. It is well recognized that network data is complex and challenging to work with. To process graph data effectively, the first critical challenge is network data representation: how to represent networks properly so that advanced analytic tasks, such as pattern discovery, analysis, and prediction, can be conducted efficiently in both time and space. In this tutorial, we will review recent thoughts and achievements on network embedding. More specifically, a series of fundamental problems in network embedding will be discussed, including why we need to revisit network representation, what the research goals of network embedding are, how network embeddings can be learned, and the major future directions of network embedding.
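The contrast the abstract draws between raw adjacency-matrix representations and learned low-dimensional ones can be illustrated with the simplest matrix-factorization view of network embedding: a truncated SVD of a toy adjacency matrix yields a short vector per node whose inner products approximately reconstruct the graph. This is a didactic sketch under invented data, not a method attributed to the speaker.

```python
import numpy as np

# Toy graph: two triangles joined by a bridge, stored first as a dense
# n x n adjacency matrix -- the traditional representation, which is
# sparse and whose dimensionality grows with the number of nodes.
n = 6
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

# Network embedding in its simplest matrix-factorization form:
# a rank-k truncated SVD gives each node a k-dimensional vector whose
# inner products with the right factors reconstruct A approximately.
k = 2
U, S, Vt = np.linalg.svd(A)
Z = U[:, :k] * S[:k]          # k-dimensional node representations
A_hat = Z @ Vt[:k, :]         # rank-k reconstruction of the adjacency

# Frobenius-norm reconstruction error of the rank-2 model.
err = float(np.linalg.norm(A - A_hat))
```

Downstream tasks then operate on the 6x2 matrix `Z` instead of the full adjacency matrix; real embedding methods replace the plain SVD with objectives that preserve higher-order proximities and scale to millions of nodes.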
Speaker Bio: Peng Cui is an Associate Professor at Tsinghua University. He received his Ph.D. from Tsinghua University in 2010. His research interests include network representation learning, social dynamics modeling, and human behavior modeling. He has published more than 60 papers in prestigious data mining and multimedia conferences and journals. His recent research won the ICDM 2015 Best Student Paper Award, was a SIGKDD 2014 Best Paper Finalist, and won the IEEE ICME 2014 Best Paper Award, the ACM MM 2012 Grand Challenge Multimodal Award, and the MMM 2013 Best Paper Award. He has served as Area Chair of ICDM 2016, ACM MM 2014-2015, IEEE ICME 2014-2015, and ICASSP 2013, and as Associate Editor of IEEE TKDE, ACM TOMM, and Neurocomputing (Elsevier). He received the ACM China Rising Star Award in 2015.