Publications

Natural Language Processing

  • TBD

Computer Vision

  • K. Meshgi, M. S. Mirzaei, S. Oba, “Long and Short Memory Balancing in Visual Co-Tracking using Q-Learning,” in Proc. of ICIP’19, IEEE, Taipei, Taiwan, Sep 2019.
  • K. Meshgi, M. S. Mirzaei, S. Oba, “Information-Maximizing Sampling to Promote Tracking-by-Detection,” in Proc. of ICIP’18, IEEE, Athens, Greece, Oct 2018.
  • K. Meshgi, S. Oba, S. Ishii, “Efficient Diverse Ensemble for Discriminative Co-Tracking,” in Proc. of CVPR’18, IEEE, Salt Lake City, USA, Jun 2018.
  • K. Meshgi and S. Oba, “Active Collaboration of Classifiers for Visual Tracking,” in G. Anbarjafari and S. Escalera (Eds.) Human-Robot Interaction - Theory and Application, InTech, ISBN 978-953-51-5611-6, 2018.
  • K. Meshgi, S.i. Maeda, S. Oba, S. Ishii, “Constructing a Meta-Tracker using Dropout to Imitate the Behavior of an Arbitrary Black-box Tracker,” Neural Networks, vol. 87, pp. 132-148, Elsevier, 2017.
  • K. Meshgi, M. S. Mirzaei, S. Oba, S. Ishii, “Efficient Asymmetric Co-Tracking using Uncertainty Sampling,” in Proc. of ICSIPA’17, IEEE, Kuching, Malaysia, Sep 2017. (Best paper award)
  • K. Meshgi, M. S. Mirzaei, S. Oba, S. Ishii, “Active Collaborative Ensemble Tracking,” in Proc. of AVSS’17, IEEE, Lecce, Italy, Aug 2017.
  • K. Meshgi, S. Oba, S. Ishii, “Adversarial Sampling to Robustify Active Discriminative Co-Tracking,” in Proc. of MIRU’17, Hiroshima, Japan, Aug 2017.
  • K. Meshgi, S. Oba, S. Ishii, “Efficient Version-Space Reduction for Visual Tracking,” in Proc. of CRV’17, IEEE, Vancouver, Canada, May 2017.
  • K. Meshgi, S. Oba, S. Ishii, “Active Discriminative Tracking using Collective Memory,” in Proc. of MVA’17, IEEE, Tokyo, Japan, May 2017.
  • K. Meshgi, S. Oba, and S. Ishii, “Robust Discriminative Tracking via Query-by-Bagging,” in Proc. of AVSS’16, Colorado Springs, USA, Aug 2016.
  • K. Meshgi, S.i. Maeda, S. Oba, H. Skibbe, Y.Z. Li, S. Ishii, “Occlusion Aware Particle Filter Tracker to Handle Complex and Persistent Occlusions,” Computer Vision and Image Understanding (CVIU), vol. 150, pp. 81-94, Elsevier, 2016.
  • K. Meshgi, S.i. Maeda, S. Oba, and S. Ishii, “Data-driven Probabilistic Occlusion Mask to Promote Visual Tracking,” in Proc. of CRV’16, IEEE, British Columbia, Canada, Jun 2016.
  • K. Meshgi, S. Ishii, “The State-of-the-Art in Handling Occlusions for Visual Object Tracking,” IEICE Transactions on Information and Systems, vol. E98-D, no. 7, pp. 1260-1274, IEICE 2015.
  • K. Meshgi, and S. Ishii, “Expanding Histogram of Colors with Gridding to Improve Tracking Accuracy,” in Proc. of MVA’15, IEEE, Tokyo, Japan, May 2015.
  • K. Meshgi, S. Maeda, S. Oba, S. Ishii, “Fusion of Multiple Cues from Color and Depth Domains using Occlusion Aware Bayesian Tracker,” in IEICE Tech. Rep., vol. 113, no. 500, NC2014-22, pp. 127-132, Mar 2014.
  • K. Meshgi, Y.Z. Li, S. Oba, S. Maeda, S. Ishii, “Enhancing Probabilistic Appearance-Based Object Tracking with Depth Information: Object Tracking under Occlusion,” in IEICE Tech. Rep., vol. 113, no. 197, IBISML2013-22, pp. 85-91, Sep 2013.
  • K. Meshgi, “Particle Filter-based Tracking to Handle Persistent and Complex Occlusions and Imitate Arbitrary Black-box Trackers,” Ph.D. Dissertation, Kyoto University, Sep 2015.

Computational Linguistics

  • M. S. Mirzaei, K. Meshgi, “Learner Adaptive Partial and Synchronized Caption for L2 Listening Skill Development,” in Proc. of EuroCALL’19, Louvain-la-Neuve, Belgium, Aug 2019.
  • M. S. Mirzaei, K. Meshgi, “Toward Adaptive Partial and Synchronized Caption to Facilitate L2 Listening,” in Proc. of FLEAT VII, Tokyo, Japan, Aug 2019.
  • K. Meshgi, M. S. Mirzaei, “A comprehensive word difficulty index for L2 listening,” in Proc. of ExLing’18, Paris, France, Aug 2018.
  • K. Meshgi, M. S. Mirzaei, “A Data-driven Approach to Generate Partial and Synchronized Caption for Second Language Listeners,” in Proc. of WorldCALL’18, Concepcion, Chile, Nov 2018.
  • M. S. Mirzaei, K. Meshgi, “Automatic Scaffolding for L2 Listeners by Leveraging Natural Language Processing,” in Proc. of EuroCALL’18, Jyväskylä, Finland, Aug 2018.
  • M. S. Mirzaei, K. Meshgi, T. Kawahara, “Exploiting Automatic Speech Recognition Errors to Enhance Partial and Synchronized Caption for Facilitating Second Language Listening,” Computer Speech and Language Journal, vol. 49, pp. 17-36, Elsevier 2018.
  • M. S. Mirzaei, K. Meshgi, Y. Akita, T. Kawahara, “Partial and synchronized captioning: A new tool to assist learners in developing second language listening skill,” ReCALL Journal, vol. 29(2), pp. 178-199, Cambridge University Press 2017.
  • M. S. Mirzaei, K. Meshgi, T. Kawahara, “Detecting listening difficulty for second language learners using Automatic Speech Recognition errors,” in Proc. of SLaTE’17, Stockholm, Sweden, Aug 2017.
  • M. S. Mirzaei, K. Meshgi, T. Kawahara, “Listening Difficulty Detection to Foster Second Language Listening with Partial and Synchronized Caption,” in Proc. of EuroCALL’17, Southampton, England, Aug 2017.
  • M. S. Mirzaei, K. Meshgi, T. Kawahara, “Leveraging automatic speech recognition errors to detect challenging speech segments in TED talks,” in Proc. of EuroCALL’16, Limassol, Cyprus, Aug 2016.
  • M. S. Mirzaei, K. Meshgi, T. Kawahara, “Automatic speech recognition errors as a predictor of L2 listening difficulties,” in Proc. of Coling’16 (CL4LC Workshop), Osaka, Japan, Nov 2016.
  • M. S. Mirzaei, K. Meshgi, Y. Akita, T. Kawahara, “Errors in Automatic Speech Recognition versus Difficulties in Second Language Listening,” in Proc. of EuroCALL’15, Padova, Italy, Aug 2015.

Machine Learning

  • S. Ebrahimi, K. Meshgi, S. Khadivi, S.E. Shiri Ahmad Abady, “Meta-level Statistical Machine Translation,” in Proc. of 6th Int’l Joint Conf. on NLP (IJCNLP’13), Nagoya, Japan, Oct 2013.
  • S. MasoumZadeh, K. Meshgi, S. Shiry, G. Taghizadeh, “FQL-RED: An Adaptive Scalable Schema for Active Queue Management,” Int’l. J. of Network Mgmt (IJNM), vol. 21, pp. 157-167, Wiley, 2011.
  • S. MasoumZadeh, G. Taghizadeh, K. Meshgi, S. Shiry, “Deep Blue: A Fuzzy Q-Learning Enhanced Active Queue Management Scheme,” in Proc. of Int'l. Conf. on Adaptive and Intelligent Systems (ICAIS'09), Klagenfurt, Austria, 2009.
  • S. MasoumZadeh, K. Meshgi, S. Shiry, “Adaptive Mutation in Evolution Strategy using Fuzzy Q-Learning,” in Proc. of 13th Iran Computer Association Conf. (ICCSC2008), Kish, Iran, 2008.
  • K. Meshgi, “Brain Inspired Face Detection,” M.Sc. Dissertation, Tehran Polytechnic, Oct 2010.

Robotics

  • S. Soleimanpour, S. Shiry, K. Meshgi, “Sensor Fusion in Robot Localization using DS-Evidence Theory with Conflict Detection using Mahalanobis Distance,” Proc. of 7th IEEE Int'l. Conf. on Cybernetic Intelligent Systems (CIS'2008), United Kingdom, 2008.
  • Nemesis 2010 Team Description, RoboCup 2010.



Activities

Honors

  • Won ICT Innovation Award for PSC, Kyoto University, 2017.
  • Awarded the Japan Ministry of Economy, Trade and Industry Prize for winning a NEDO project, as part of the R&D team at 3D MEDiA Co. Ltd., 2015.
  • Received the Japan Government Monbukagakusho (MEXT) Scholarship from the Ministry of Education, Culture, Sports, Science and Technology, Government of Japan, 2011-2014.
  • Achieved 3rd Place of Int'l. RoboCup 2005 Competitions, Soccer Coach Simulation League, Member of Kasra Team, Osaka, Japan, 2005.
  • Recognized as an Exceptional Talent during the Master's program by Amirkabir University of Technology, 2010 (ranked 1st in Fall 2008 and 2nd in Spring 2008 in the Artificial Intelligence Dept.).

Affiliations

Services

Projects

Efficient Diverse Ensemble for Discriminative Co-Tracking (DEDT)

An active co-tracker with a self-organizing committee of classifiers that considers diversity during model updates

Adversarial Ensemble Co-Tracker (ACET)

An active self-correcting committee of classifiers to perform collaborative tracking

Deep Q-Learning for Correlation Filter Tracking (DQCF)

A correlation filter tracker with reinforcement-learning-tuned learning rate
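For illustration only, the sketch below shows one way such an idea could look: a minimal MOSSE-style correlation filter whose per-frame update rate is picked by a tabular Q-learning agent. The single-state agent, the names, and the peak-response reward are assumptions for this sketch, not the published DQCF design.

    # Illustrative sketch (not the DQCF code): a correlation filter whose
    # update learning rate is chosen per frame by a Q-learning agent.
    import numpy as np

    LEARNING_RATES = [0.01, 0.05, 0.125, 0.25]      # discrete actions

    def gaussian_peak(shape, sigma=2.0):
        """Desired correlation response: a Gaussian centered on the target."""
        h, w = shape
        y, x = np.mgrid[0:h, 0:w]
        return np.exp(-((x - w // 2) ** 2 + (y - h // 2) ** 2) / (2 * sigma ** 2))

    class QRateAgent:
        """Single-state tabular Q-learning; the action is a learning-rate index."""
        def __init__(self, n_actions, alpha=0.1, gamma=0.9, eps=0.2):
            self.q = np.zeros(n_actions)
            self.alpha, self.gamma, self.eps = alpha, gamma, eps

        def act(self):
            if np.random.rand() < self.eps:
                return np.random.randint(len(self.q))
            return int(np.argmax(self.q))

        def learn(self, action, reward):
            target = reward + self.gamma * self.q.max()
            self.q[action] += self.alpha * (target - self.q[action])

    class CorrelationFilterTracker:
        """Minimal MOSSE-like filter trained and updated in the Fourier domain."""
        def __init__(self, template):
            self.G = np.fft.fft2(gaussian_peak(template.shape))
            F = np.fft.fft2(template)
            self.A = self.G * np.conj(F)
            self.B = F * np.conj(F) + 1e-3

        def respond(self, patch):
            F = np.fft.fft2(patch)
            return np.real(np.fft.ifft2(F * (self.A / self.B)))

        def update(self, patch, lr):
            F = np.fft.fft2(patch)
            self.A = (1 - lr) * self.A + lr * self.G * np.conj(F)
            self.B = (1 - lr) * self.B + lr * (F * np.conj(F) + 1e-3)

    # Per-frame loop; random patches stand in for real video crops.
    agent = QRateAgent(len(LEARNING_RATES))
    tracker = CorrelationFilterTracker(np.random.rand(32, 32))
    prev_action = None
    for patch in (np.random.rand(32, 32) for _ in range(50)):
        confidence = float(tracker.respond(patch).max())   # crude quality proxy
        if prev_action is not None:
            agent.learn(prev_action, confidence)            # reward the last rate choice
        prev_action = agent.act()
        tracker.update(patch, LEARNING_RATES[prev_action])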

Query-by-Boosting Tracker (QBST)

A committee of weak classifiers in a boosting framework

Collective Memory Tracker (CMT)

A committee of classifiers trained on the same data but with different memory spans
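As a rough, hypothetical sketch of that idea (the member structure, spans, and voting rule are my assumptions, not the published CMT design), each member below keeps a different-length memory of the same labeled samples:

    # Illustrative sketch (not the CMT code): committee members share the same
    # samples but keep different memory spans, then vote on new patches.
    from collections import deque
    import numpy as np
    from sklearn.svm import LinearSVC

    class MemoryMember:
        """One member: a linear classifier refit on a bounded sample memory."""
        def __init__(self, span):
            self.memory = deque(maxlen=span)          # memory span in samples
            self.clf, self.ready = LinearSVC(), False

        def add(self, x, y):
            self.memory.append((x, y))
            X, Y = zip(*self.memory)
            if len(set(Y)) > 1:                        # need both classes to fit
                self.clf.fit(np.array(X), np.array(Y))
                self.ready = True

        def vote(self, x):
            return int(self.clf.predict([x])[0]) if self.ready else 0

    class CollectiveMemoryCommittee:
        """Same data for every member; only the memory spans differ."""
        def __init__(self, spans=(10, 50, 250)):       # short, medium, long memory
            self.members = [MemoryMember(s) for s in spans]

        def add(self, x, y):
            for m in self.members:
                m.add(x, y)

        def predict(self, x):
            votes = [m.vote(x) for m in self.members]
            return int(np.mean(votes) >= 0.5)          # simple majority vote

    # Toy usage: features stand in for patch descriptors, labels for target/background.
    rng = np.random.default_rng(0)
    committee = CollectiveMemoryCommittee()
    for _ in range(300):
        x = rng.normal(size=8)
        committee.add(x, int(x[0] > 0))
    print(committee.predict(rng.normal(size=8)))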

Query-by-Bagging Tracker (QBT)

A committee of classifiers, each with only partial knowledge of the data
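Purely as an illustration of the query-by-bagging idea (the member count, base learner, and disagreement measure are assumptions, not the QBT implementation), the sketch below trains each member on a bootstrap resample and queries the candidates the committee disagrees on most:

    # Illustrative sketch (not the QBT code): query-by-bagging with a committee
    # of classifiers, each trained on a bootstrap resample ("partial knowledge").
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def train_bagged_committee(X, y, n_members=8, rng=None):
        rng = rng or np.random.default_rng(0)
        committee = []
        for _ in range(n_members):
            idx = rng.integers(0, len(X), size=len(X))         # bootstrap resample
            committee.append(DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx]))
        return committee

    def query_by_disagreement(committee, candidates, n_queries=5):
        """Return the candidate indices the committee disagrees on most."""
        votes = np.stack([clf.predict(candidates) for clf in committee])
        pos_rate = votes.mean(axis=0)                          # fraction voting "target"
        disagreement = 1.0 - np.abs(2.0 * pos_rate - 1.0)      # 0 = unanimous, 1 = split
        return np.argsort(-disagreement)[:n_queries]

    # Toy usage with random features standing in for target/background patches.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))
    y = (X[:, 0] > 0).astype(int)
    committee = train_bagged_committee(X, y, rng=rng)
    candidates = rng.normal(size=(50, 16))
    print("query these candidates:", query_by_disagreement(committee, candidates))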

Mimic Tracker (MIMIC)

A meta-tracker constructed using dropout to imitate the behavior of an arbitrary black-box tracker

Information-Maximizing Sampling Tracker (IMST)

A novel sampling technique that provides the most informative samples for the classifiers to learn the ever-changing target/non-target boundary
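A minimal, hypothetical sketch of the information-maximizing idea (the entropy criterion and classifier choice here are assumptions, not the IMST method itself): rank candidate samples by the predictive entropy of the target/non-target classifier and feed back the most uncertain ones.

    # Illustrative sketch (not the IMST code): pick the candidate samples whose
    # target/non-target prediction carries the most information (highest entropy).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def predictive_entropy(p):
        """Binary entropy of p(target | sample); largest where the boundary is unclear."""
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -(p * np.log(p) + (1 - p) * np.log(1 - p))

    def most_informative(clf, candidates, k=10):
        p_target = clf.predict_proba(candidates)[:, 1]
        return np.argsort(-predictive_entropy(p_target))[:k]

    # Toy usage: the selected candidates would be labeled and used to update the model.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 8))
    y = (X[:, :2].sum(axis=1) > 0).astype(int)
    clf = LogisticRegression().fit(X, y)
    candidates = rng.normal(size=(100, 8))
    print("most informative candidates:", most_informative(clf, candidates))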

Data-Driven Probabilistic Occlusion Mask (OccMask)

A data-driven probabilistic occlusion mask to promote visual tracking under occlusion

Expanding Histogram of Colors with Gridding (HOCx)

An expanded histogram-of-colors feature computed over a grid to improve tracking accuracy

Occlusion Aware Particle Filter Tracker (OAPFT)

A particle filter tracker designed to handle complex and persistent occlusions

Adversarial Sampling Co-Tracker (ASCT)

An active co-tracker in which the main classifier is robustified against its adversarial examples with the assistance of an auxiliary detector

The State-of-the-Art in Handling Occlusions for Visual Object Tracking (OccSurvey)

A survey of the recent literature on occlusion handling in online visual tracking, including solutions, datasets, benchmarks, and evaluation criteria.

Partial and Synchronized Caption (PSC)

A captioning tool that presents partial and synchronized captions to help second language learners develop their listening skills

Long and Short Memory Balancing in Visual Co-Tracking using Q-Learning (QACT)

An adaptive active co-tracker that uses Q-learning to balance short vs. long memory usage and the speed vs. accuracy trade-off
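As a hedged, illustrative sketch only (the states, actions, and reward are placeholders I chose, not the published QACT design), a tabular Q-learning agent below picks, per frame, how much weight the tracker gives its short-memory model versus its long-memory model:

    # Illustrative sketch (not the QACT code): Q-learning balances how much the
    # tracker trusts its short-memory model versus its long-memory model.
    import numpy as np

    ACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]            # weight on the short-memory model

    class MemoryBalancer:
        """Tabular Q-learning over coarse confidence states."""
        def __init__(self, n_states=4, alpha=0.1, gamma=0.9, eps=0.1):
            self.Q = np.zeros((n_states, len(ACTIONS)))
            self.alpha, self.gamma, self.eps = alpha, gamma, eps

        def choose(self, state):
            if np.random.rand() < self.eps:
                return np.random.randint(len(ACTIONS))
            return int(np.argmax(self.Q[state]))

        def learn(self, state, action, reward, next_state):
            target = reward + self.gamma * self.Q[next_state].max()
            self.Q[state, action] += self.alpha * (target - self.Q[state, action])

    def fuse(short_conf, long_conf, w_short):
        """Blend the two models' confidences with the chosen weight."""
        return w_short * short_conf + (1.0 - w_short) * long_conf

    # Per-frame loop; random confidences stand in for the two classifiers' outputs.
    balancer, state = MemoryBalancer(), 0
    for _ in range(200):
        short_conf, long_conf = np.random.rand(2)
        action = balancer.choose(state)
        fused = fuse(short_conf, long_conf, ACTIONS[action])
        reward = fused                                # e.g., agreement with a verified target
        next_state = int(min(3, fused * 4))           # bucketize confidence into 4 states
        balancer.learn(state, action, reward, next_state)
        state = next_state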

Brain Inspired Face Detection (B-FD)

A brain-inspired approach to face detection (M.Sc. dissertation topic)

Fuzzy Q-Learning (FQL)

Fuzzy Q-learning applied to adaptive active queue management (FQL-RED, Deep Blue) and to adaptive mutation in evolution strategies

Virtual Reality Conversation Envisioner (VRCE)
