Search results for: quantum programming languages.

Paper Count: 910

40 Feasibility Study of MongoDB and Radio Frequency Identification Technology in Asset Tracking System

Authors: Mohd Noah A. Rahman, Afzaal H. Seyal, Sharul T. Tajuddin, Hartiny Md Azmi

Abstract:

Higher academic institutions, small to large companies, and organizations in the public, private, and other sectors all experience inventory or asset shrinkage due to theft, loss, or inventory tracking errors. Such losses stem from absent or poor security systems and measures in these organizations. Integrating Radio Frequency Identification (RFID) technology into a manual or existing web-based system or web application can deter these losses and resolve several major issues, providing better data retrieval and data access. Such a manual or existing system can also be enhanced into a mobile-based application, and the availability of internet connections enables the system to offer better services. Combining these technologies gives individuals and organizations advantages in accessibility, availability, mobility, efficiency, effectiveness, real-time information, and security. This paper looks deeper into the integration of mobile devices with RFID technology for asset tracking and control. It then covers the development and use of MongoDB as the main database for storing data and its association with RFID technology. Finally, it describes the development of a web-based system, viewable in a mobile format, built with Hypertext Preprocessor (PHP), MongoDB, Hyper-Text Markup Language 5 (HTML5), Android, JavaScript, and AJAX.
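
A minimal sketch of how an RFID read event might be stored and queried in MongoDB follows, using Python's pymongo as a stand-in for the paper's PHP stack; the collection layout, field names, and values are illustrative assumptions, not the system's actual schema.

```python
from datetime import datetime, timezone

from pymongo import MongoClient

# Connect to a local MongoDB instance (the URI is an assumption).
client = MongoClient("mongodb://localhost:27017")
db = client["asset_tracking"]

# Hypothetical document model: one document per RFID read event.
read_event = {
    "tag_id": "E200-3412-0133-7700",      # EPC of the RFID tag (made-up)
    "asset_name": "Projector, Lab 3",
    "reader_id": "gate-01",
    "location": "Block A, Level 2",
    "read_at": datetime.now(timezone.utc),
}
db.reads.insert_one(read_event)

# Index on tag_id keeps per-asset history lookups fast.
db.reads.create_index("tag_id")

# Latest sighting of a given asset.
latest = db.reads.find_one({"tag_id": "E200-3412-0133-7700"},
                           sort=[("read_at", -1)])
print(latest["location"] if latest else "tag never seen")
```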

Keywords: RFID, asset tracking system, MongoDB, NoSQL.

39 Mistranslation in Cross Cultural Communication: A Discourse Analysis on Former President Bush’s Speech in 2001

Authors: Lowai Abed

Abstract:

The differences between languages play a big role in cross-cultural communication. If meanings are not translated accurately, the consequences can be serious, not only on an interpersonal level but also on the international and political levels. The use of metaphorical language by politicians can cause great confusion, often leading to statements being misconstrued. In these situations, it is the translators who struggle to put forward the intended meaning with clarity, and this makes translation an important field to study and analyze in the context of cross-cultural communication. Owing to the growing importance of language and the power of translation in politics, this research analyzes part of President Bush’s speech in 2001, in which he used the word “Crusade”, causing his statement to be misconstrued. The research uses a discourse analysis of the cross-cultural communication literature, which provides answers supported by historical, linguistic, and communicative perspectives. The first finding indicates that the word ‘crusade’ carries a different meaning and significance in the narratives of the Western world than it does in the Middle East. The second is that, linguistically, maintaining cultural meanings through translation is quite difficult and challenging. Third, from the cross-cultural communication perspective, the common and frequent use of literal translation is a sign of poor strategies in translation training. Based on the example of Bush’s speech, this paper hopes to highlight the weak practices in translation in cross-cultural communication that are still commonly used across the world. Translation studies have to take such issues seriously and attempt to find a solution. In every language, there are words and phrases that carry cultural, historical, and social meanings woven into the language. Literal translation is not the solution to this problem, because that strategy is unable to convey these meanings in the target language.

Keywords: Crusade, metaphor, mistranslation, war on terror.

38 Online Multilingual Dictionary Using Hamburg Notation for Avatar-Based Indian Sign Language Generation System

Authors: Sugandhi, Parteek Kumar, Sanmeet Kaur

Abstract:

Sign Language (SL) is used by deaf people and by others who cannot speak, or who have difficulty with spoken languages due to a disability. It is a visual gesture language that makes use of one hand or both hands, the arms, the face, and the body to convey meanings and thoughts. An SL automation system is an effective way to provide a computer interface for communicating with hearing people. In this paper, an avatar-based dictionary is proposed for a text-to-Indian Sign Language (ISL) generation system. This research work also presents a literature review of the SL corpora that have become available for various SLs over the years. An ISL generation system requires a written form of SL, and several techniques exist for writing SL. The system uses the Hamburg Notation System (HamNoSys) and the Signing Gesture Mark-up Language (SiGML) for ISL generation. It is developed in PHP using Web Graphics Library (WebGL) technology for 3D avatar animation. A multilingual ISL dictionary is developed using HamNoSys for both English and Hindi. This dictionary is used as a database to associate signs with words or phrases of a spoken language. It provides an admin panel interface to manage the dictionary, i.e., to modify, add, or delete words. Through this interface, HamNoSys notations can be composed and stored in the database, and these notations can be converted manually into their corresponding SiGML files. The system takes natural-language input sentences in English or Hindi and generates a 3D sign animation using an avatar. SL generation systems have potential applications in many domains, such as the healthcare sector, media, educational institutes, commercial sectors, and transportation services. This research work will help researchers to understand the various techniques used for writing SL and for building sign language generation systems.
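
The abstract names the word-to-HamNoSys-to-SiGML pipeline without showing data. The sketch below illustrates a dictionary entry lookup that emits a SiGML-like wrapper for the avatar; the entries, the placeholder HamNoSys glyphs, and the XML element names are illustrative assumptions and should be checked against the actual SiGML DTD.

```python
from typing import Optional
from xml.sax.saxutils import escape

# Hypothetical entries: gloss -> (English word, Hindi word, HamNoSys string).
DICTIONARY = {
    "HELLO": ("hello", "नमस्ते", "\ue002\ue00c\ue028"),  # placeholder glyphs
}

def to_sigml(gloss: str, hamnosys: str) -> str:
    """Wrap a HamNoSys transcription in a SiGML-like XML snippet."""
    return (
        "<sigml>\n"
        f'  <hns_sign gloss="{escape(gloss)}">\n'
        f"    <hamnosys_manual>{escape(hamnosys)}</hamnosys_manual>\n"
        "  </hns_sign>\n"
        "</sigml>"
    )

def lookup(word: str) -> Optional[str]:
    """Find a sign by its English or Hindi form and emit SiGML for the avatar."""
    for gloss, (en, hi, hns) in DICTIONARY.items():
        if word.lower() == en or word == hi:
            return to_sigml(gloss, hns)
    return None

print(lookup("hello"))
```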

Keywords: Avatar, dictionary, HamNoSys, hearing-impaired, Indian Sign Language, sign language.

37 Explicit Solution of an Investment Plan for a DC Pension Scheme with Voluntary Contributions and Return Clause under Logarithm Utility

Authors: Promise A. Azor, Avievie Igodo, Esabai M. Ase

Abstract:

This paper combines a return of premium clause with additional voluntary contributions to investigate the investment plan in a defined contribution (DC) pension scheme, with a portfolio comprising a risk-free asset and a risky asset whose price process is described by geometric Brownian motion (GBM). The paper considers the additional voluntary contributions paid by members, the charge on balance levied by pension fund administrators, and the mortality risk of members during the accumulation period, the latter by introducing a return of premium clause. To this end, the Weibull mortality force function is used to model the mortality rate of members during the accumulation phase. Furthermore, the Hamilton-Jacobi-Bellman (HJB) equation of the optimization problem is obtained using a dynamic programming approach. The Legendre transformation is then used to turn the HJB equation, a nonlinear partial differential equation, into a linear partial differential equation, which is solved for the value function and the optimal distribution plan under a logarithm utility function. Finally, numerical simulations of the impact of some important parameters on the optimal distribution plan were carried out, and it was observed that the optimal distribution plan is inversely proportional to the initial fund size, the predetermined interest rate, the additional voluntary contributions, the charge on balance, and the instantaneous volatility.
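
The abstract names the model ingredients without writing them out; under the standard formulation it points to (the notation below is ours, not the paper's), the asset dynamics and the investor's objective would read:

```latex
% Risky asset under geometric Brownian motion and the risk-free asset:
\[ dS_t = \mu S_t\,dt + \sigma S_t\,dW_t, \qquad dB_t = r B_t\,dt. \]
% Logarithm utility of terminal wealth, maximized over distribution plans \pi:
\[ U(x) = \ln x, \qquad \max_{\pi}\; \mathbb{E}\!\left[\ln X_T^{\pi}\right]. \]
```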

Keywords: Legendre transform, logarithm utility, optimal distribution plan, return of premium clause, charge on balance, Weibull mortality function.

36 Effect of Silver Nanoparticles on Seed Germination of Crop Plants

Authors: Zainab M. Almutairi, Amjad Alharbi

Abstract:

The use of engineered nanomaterials has increased as a result of their positive impact on many sectors of the economy, including agriculture. Silver nanoparticles (AgNPs) are now used to enhance seed germination, plant growth, and photosynthetic quantum efficiency and as antimicrobial agents to control plant diseases. In this study, we examined the effect of AgNP dosage on the seed germination of three plant species: corn (Zea mays L.), watermelon (Citrullus lanatus [Thunb.] Matsum. & Nakai) and zucchini (Cucurbita pepo L.). This experiment was designed to study the effect of AgNPs on germination percentage, germination rate, mean germination time, root length and fresh and dry weight of seedlings for the three species. Seven concentrations (0.05, 0.1, 0.5, 1, 1.5, 2 and 2.5 mg/ml) of AgNPs were examined at the seed germination stage. The three species had different dose responses to AgNPs in terms of germination parameters and the measured growth characteristics. The germination rates of the three plants were enhanced in response to AgNPs. Significant enhancement of the germination percentage values was observed after treatment of the watermelon and zucchini plants with AgNPs in comparison with untreated seeds. AgNPs showed a toxic effect on corn root elongation, whereas watermelon and zucchini seedling growth were positively affected by certain concentrations of AgNPs. This study showed that exposure to AgNPs caused both positive and negative effects on plant growth and germination.
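
The germination indices named here have conventional seed-science definitions (the formulas below are standard, not quoted from the paper); a short sketch of the computation:

```python
# Conventional germination indices (standard definitions, not taken from the
# paper): counts[i] seeds newly germinated on day days[i].
def germination_percentage(counts, total_seeds):
    return 100.0 * sum(counts) / total_seeds

def mean_germination_time(counts, days):
    # MGT = sum(n_i * t_i) / sum(n_i), in days.
    germinated = sum(counts)
    if germinated == 0:
        return float("nan")
    return sum(n * t for n, t in zip(counts, days)) / germinated

# Hypothetical daily counts for one AgNP treatment of 25 seeds.
days = [2, 3, 4, 5, 6]
counts = [3, 9, 7, 3, 1]
print(f"GP  = {germination_percentage(counts, 25):.1f} %")
print(f"MGT = {mean_germination_time(counts, days):.2f} days")
```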

Keywords: Citrullus lanatus, Cucurbita pepo, seed germination, seedling growth, silver nanoparticles, Zea mays.

34 Evaluation of Model-Based Code Generation for Embedded Systems–Mature Approach for Development in Evolution

Authors: Nikolay P. Brayanov, Anna V. Stoynova

Abstract:

The model-based development approach is gaining support and acceptance. Its higher abstraction level simplifies system descriptions, allowing domain experts to do their best work without particular programming knowledge. The different levels of simulation support rapid prototyping, verifying, and validating the product even before it exists physically. Nowadays the model-based approach is beneficial for modelling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries like automotive, bringing extra automation to the expensive device certification process, especially in software qualification. Some companies using it report cost savings and quality improvements, but others claim no major changes or even cost increases. This publication examines the level of maturity and autonomy of the model-based approach to code generation. It is based on a real-life automotive seat heater (ASH) module, developed using tools from The MathWorks, Inc. The model, created with Simulink, Stateflow, and Matlab, is used for the automatic generation of C code with Embedded Coder. To prove the maturity of the process, the Code Generation Advisor is used for automatic configuration. All additional configuration parameters are set to auto, where applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of the automatically generated embedded code with that of manually developed code. The measurements show that, in general, the code generated by the automatic approach is no worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for our future work.

Keywords: Embedded code generation, embedded C code quality, embedded systems, model-based development.

33 Loading and Unloading Scheduling Problem in a Multiple-Multiple Logistics Network: Modeling and Solving

Authors: Yasin Tadayonrad, Alassane Ballé Ndiaye

Abstract:

Most supply chain networks have many nodes, from the suppliers’ side to the customers’ side, and each node sends/receives raw materials/products to/from the other nodes. One of the major concerns in this kind of network is finding the best schedule for loading/unloading shipments across the whole network so that all the constraints at the source and destination nodes are met and all shipments are delivered on time. One of the main constraints is the loading/unloading capacity of each source/destination node in each time slot (e.g., per week/day/hour). Because different products/product groups have different characteristics, the capacity of each node may differ per product group. In most supply chain networks (especially in the fast-moving consumer goods (FMCG) industry), different planners/planning teams work separately at different nodes to determine the loading/unloading timeslots for sending/receiving shipments. In this paper, a mathematical model is proposed to find the best timeslots for loading/unloading shipments, minimizing overall delays subject to the loading/unloading capacity of each node, the required delivery date of each shipment (considering lead times), and the working days of each node. The model was implemented in Python and solved using Python-MIP on a sample data set. Finally, a heuristic algorithm is proposed to improve the solution method, so that the model can be applied to the larger data sets of real business cases, with more nodes and shipments.
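
As a rough illustration of the model class described here, the following is a toy single-node instance in Python-MIP, the solver interface the abstract names; the variable names, data, and simplified delay objective are our assumptions, and the full model also covers multiple nodes, product groups, lead times, and working days.

```python
from mip import Model, xsum, minimize, BINARY

due = [1, 1, 2, 3]        # required delivery slot of each shipment
cap = [1, 2, 2, 1]        # unloading capacity of the node per timeslot
S, T = range(len(due)), range(len(cap))

m = Model("loading_unloading")
x = [[m.add_var(var_type=BINARY) for t in T] for s in S]

for s in S:                                   # each shipment gets one slot
    m += xsum(x[s][t] for t in T) == 1
for t in T:                                   # slot capacity at the node
    m += xsum(x[s][t] for s in S) <= cap[t]

# Minimize total delay: a shipment placed after its due slot is late.
m.objective = minimize(
    xsum(max(0, t - due[s]) * x[s][t] for s in S for t in T))
m.optimize()

for s in S:
    slot = next(t for t in T if x[s][t].x >= 0.99)
    print(f"shipment {s}: slot {slot} (due {due[s]})")
```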

Keywords: Supply chain management, transportation, multiple-multiple network, timeslots management, mathematical modeling, mixed integer programming.

32 Optimization of the Conditions of Electrophoretic Deposition Fabrication of Graphene-Based Electrode to Consider Applications in Electro-Optical Sensors

Authors: Sepehr Lajevardi Esfahani, Shohre Rouhani, Zahra Ranjbar

Abstract:

Graphene has gained much attention owing to its unique optical and electrical properties. Charge carriers in graphene sheets (GS) obey a linear dispersion relation near the Fermi energy and behave as massless Dirac fermions, resulting in unusual attributes such as the quantum Hall effect and the ambipolar electric field effect. Graphene also exhibits nondispersive transport characteristics with an extremely high electron mobility (15,000 cm²/(V·s)) at room temperature. Recently, much progress has been achieved in the fabrication of single- and multilayer GS for functional device applications in optoelectronics, such as field-effect transistors, ultrasensitive sensors, and organic photovoltaic cells. Beyond device applications, graphene can also serve as a reinforcement to enhance the mechanical, thermal, or electrical properties of composite materials. Electrophoretic deposition (EPD) is an attractive method for developing various coatings and films. It is readily applied to any powdered solid that forms a stable suspension. The deposition parameters were controlled to obtain various thicknesses. In this study, the graphene electrodeposition conditions were optimized. Results were obtained from SEM, ohmic resistance measurements, and AFM characterization. The minimum sheet resistance of the electrodeposited reduced graphene oxide layers is achieved at 2 V for 10 s, followed by annealing at 200 °C for 1 minute.

Keywords: Electrophoretic deposition, graphene oxide, electrical conductivity, electro-optical devices.

31 Development of a Software System for Management and Genetic Analysis of Biological Samples for Forensic Laboratories

Authors: Mariana Lima, Rodrigo Silva, Victor Stange, Teodiano Bastos

Abstract:

Due to the high reliability reached by DNA tests since the 1980s, this kind of test has allowed the identification of a growing number of criminal cases, including old unsolved cases that now have a chance to be solved with this technology. Currently, the use of genetic profiling databases is a typical method to increase the scope of genetic comparison. Forensic laboratories must process, analyze, and generate genetic profiles for a growing number of samples, which requires time and great storage capacity. It is therefore essential to develop methodologies, supported by software tools, capable of organizing and minimizing the time spent on both biological sample processing and the analysis of genetic profiles. Thus, the present work aims at the development of a software solution for forensic genetics laboratories that allows sample, criminal case, and local database management, minimizes the time spent in the workflow, and helps to compare genetic profiles. For the development of this software system, all data related to the storage and processing of samples, the workflows, and the requirements of the system have been considered. The system uses HTML, CSS, and JavaScript on the web side, with Node.js as the server platform, which is highly efficient for data input and output. In addition, the data are stored in a relational database (MySQL), which is free, easing adoption by users. The software system developed here brings more agility to the workflow and the analysis of samples, contributing to the rapid insertion of genetic profiles into the national database and to a higher rate of crime resolution. The next step of this research is its validation, so that it operates in accordance with current Brazilian national legislation.
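
The abstract describes a relational database without giving its schema; one illustrative layout for cases, samples, and profiles is sketched below. The table and column names are our assumptions, and sqlite3 stands in for the system's MySQL so the sketch stays self-contained.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE criminal_case (
    case_id    INTEGER PRIMARY KEY,
    opened_on  TEXT NOT NULL,
    status     TEXT NOT NULL DEFAULT 'open'
);
CREATE TABLE sample (
    sample_id  INTEGER PRIMARY KEY,
    case_id    INTEGER NOT NULL REFERENCES criminal_case(case_id),
    kind       TEXT NOT NULL,          -- e.g. blood, saliva, swab
    received   TEXT NOT NULL
);
CREATE TABLE genetic_profile (
    profile_id INTEGER PRIMARY KEY,
    sample_id  INTEGER NOT NULL REFERENCES sample(sample_id),
    loci       TEXT NOT NULL           -- serialized STR allele calls
);
""")

conn.execute("INSERT INTO criminal_case (case_id, opened_on) VALUES (1, '2018-03-02')")
conn.execute("INSERT INTO sample (sample_id, case_id, kind, received) "
             "VALUES (10, 1, 'swab', '2018-03-03')")
conn.execute("INSERT INTO genetic_profile (profile_id, sample_id, loci) "
             "VALUES (100, 10, 'D8S1179:13/14;D21S11:29/30')")

# All profiles attached to a case, ready for comparison against the database.
for row in conn.execute("""
    SELECT p.profile_id, p.loci FROM genetic_profile p
    JOIN sample s ON s.sample_id = p.sample_id WHERE s.case_id = 1"""):
    print(row)
```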

Keywords: Database, forensic genetics, genetic analysis, sample management, software solution.

30 A Face-to-Face Education Support System Capable of Lecture Adaptation and Q&A Assistance Based On Probabilistic Inference

Authors: Yoshitaka Fujiwara, Jun-ichirou Fukushima, Yasunari Maeda

Abstract:

Keys to high-quality face-to-face education are ensuring flexibility in the way lectures are given and providing care and responsiveness to learners. This paper describes a face-to-face education support system designed to raise the satisfaction of learners and reduce the workload on instructors. The system consists of a lecture adaptation assistance part, which assists instructors in adapting teaching content and strategy, and a Q&A assistance part, which provides learners with answers to their questions. The core component of the former is a “learning achievement map”, which is composed of a Bayesian network (BN). From learners’ performance in exercises on relevant past lectures, the lecture adaptation assistance part obtains the information required to adapt the presentation of the next lecture appropriately. The core component of the Q&A assistance part is a case base, which accumulates cases consisting of questions expected from learners and the answers to them. The Q&A assistance part is a case-based search system equipped with a search index that performs probabilistic inference. A prototype face-to-face education support system has been built, intended for the teaching of Java programming, and the approach was evaluated using this system. The expected degree of understanding of each learner for a future lecture was derived from his or her performance in exercises on past lectures, and this expected degree of understanding was used to select one of three adaptation levels. A model for determining the adaptation level most suitable for the individual learner has been identified. An experimental case base was built to examine the search performance of the Q&A assistance part, and it was found that the rate of successfully finding an appropriate case was 56%.
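
The paper's Bayesian network structure is not given in the abstract; the toy sketch below shows the kind of inference a "learning achievement map" performs, updating the belief in a learner's mastery from one exercise outcome and mapping it to one of three adaptation levels. All probabilities and thresholds are illustrative assumptions.

```python
# Prior over a learner's mastery of a prerequisite topic (assumed value).
P_MASTERED = 0.6

# Likelihood of passing an exercise given mastery / no mastery (assumed).
P_PASS_GIVEN_MASTERED = 0.9
P_PASS_GIVEN_NOT = 0.3

def posterior_mastery(passed: bool) -> float:
    """Bayes' rule: update belief in mastery from one exercise outcome."""
    like_m = P_PASS_GIVEN_MASTERED if passed else 1 - P_PASS_GIVEN_MASTERED
    like_n = P_PASS_GIVEN_NOT if passed else 1 - P_PASS_GIVEN_NOT
    num = like_m * P_MASTERED
    return num / (num + like_n * (1 - P_MASTERED))

def adaptation_level(p_mastery: float) -> str:
    """Map expected understanding to one of three adaptation levels."""
    if p_mastery > 0.75:
        return "advanced presentation"
    if p_mastery > 0.4:
        return "standard presentation"
    return "remedial presentation"

for outcome in (True, False):
    p = posterior_mastery(outcome)
    print(f"passed={outcome}: P(mastered)={p:.2f} -> {adaptation_level(p)}")
```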

Keywords: Bayesian network, face-to-face education, lecture adaptation, Q&A assistance.

29 Auto-Calibration and Optimization of Large-Scale Water Resources Systems

Authors: Arash Parehkar, S. Jamshid Mousavi, Shoubo Bayazidi, Vahid Karami, Laleh Shahidi, Arash Azaranfar, Ali Moridi, M. Shabakhti, Tayebeh Ariyan, Mitra Tofigh, Kaveh Masoumi, Alireza Motahari

Abstract:

Water resource systems modeling has long been a challenge for human beings. As methodological innovation evolves alongside computer science, researchers are likely to confront ever larger and more complex water resources systems, driven by new challenges regarding increased water demands, climate change and human interventions, socio-economic concerns, and environmental protection and sustainability. In this research, an automatic calibration scheme has been applied to Gilan’s large-scale water resource model using mathematical programming. The calibration estimates unknown water return flows from demand sites in the complex Sefidroud irrigation network and other related areas. The calibration procedure is validated by comparing several gauged river outflows from the system in the past with model results. The calibration results are reasonable, presenting a rational picture of the system. Subsequently, the optimized, previously unknown parameters were used in a basin-scale linear optimization model able to evaluate the system’s performance under a reduced-inflow scenario in the future. Results showed an acceptable match between predicted and observed outflows from the system at selected hydrometric stations. Moreover, an efficient operating policy was determined for the Sefidroud dam, leading to minimum water shortage under the reduced-inflow scenario.
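
As a schematic of the auto-calibration idea, the sketch below fits an unknown return-flow fraction so that simulated outflows match gauged ones; the toy mass balance and all numbers are our assumptions, not the Gilan model itself.

```python
import numpy as np
from scipy.optimize import least_squares

inflow = np.array([120.0, 95.0, 60.0, 80.0])     # gauged upstream inflow
demand = np.array([40.0, 35.0, 30.0, 32.0])      # delivered to demand sites
gauged_out = np.array([92.0, 71.0, 39.0, 57.0])  # observed downstream outflow

def simulated_outflow(theta):
    # theta[0]: fraction of delivered demand returning to the river.
    return inflow - demand + theta[0] * demand

def residuals(theta):
    return simulated_outflow(theta) - gauged_out

fit = least_squares(residuals, x0=[0.2], bounds=(0.0, 1.0))
print(f"calibrated return-flow fraction: {fit.x[0]:.3f}")
```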

Keywords: Auto-calibration, Gilan, Large-Scale Water Resources, Simulation.

28 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to detect cardiac electrical function is a technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). Extracting the magnetocardiography (MCG) signal, which is buried in noise, is difficult, and it is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization problem, and the majorization-minimization algorithm is applied to solve it iteratively. However, the traditional TV regularization method tends to cause staircase artifacts and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement consists of three parts. First, high-order TV is applied to reduce the staircase effect, with the corresponding second-order derivative matrix substituted for the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined from the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to eliminate noise while preserving the signal’s peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
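
A simplified version of the approach can be sketched as follows: majorization-minimization for TV denoising with a second-order difference matrix, omitting the paper's peak detection and adaptive constraint parameters (the regularization weight and test signal are our assumptions).

```python
import numpy as np

def tv2_denoise_mm(y, lam=2.0, iters=50, eps=1e-8):
    """Minimize 0.5*||y - x||^2 + lam*||D x||_1 with D the second-order
    difference matrix, via majorization-minimization (MM)."""
    n = len(y)
    # Second-order differences: (Dx)_i = x_i - 2*x_{i+1} + x_{i+2}.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    x = y.copy()
    I = np.eye(n)
    for _ in range(iters):
        w = 1.0 / (np.abs(D @ x) + eps)        # MM weights Lambda_k^{-1}
        A = I + lam * D.T @ (w[:, None] * D)   # majorizer's normal equations
        x = np.linalg.solve(A, y)
    return x

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)
noisy = clean + 0.2 * rng.standard_normal(t.size)
denoised = tv2_denoise_mm(noisy)
print(f"noise std before: {np.std(noisy - clean):.3f}, "
      f"after: {np.std(denoised - clean):.3f}")
```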

Keywords: Constraint parameters, derivative matrix, magnetocardiography, regular term, total variation.

27 Crossover Memories and Code-Switching in the Narratives of Arabic-Hebrew and Hebrew-English Bilingual Adults in Israel

Authors: Amani Jaber-Awida

Abstract:

This study examines two bilingual phenomena in the narratives of Arabic-Hebrew and Hebrew-English bilingual adults in Israel: crossover (CO) memories and code-switching (CS). The study examined these phenomena in the context of autobiographical memory, using a cue word technique. Student experimenters held two sessions in the homes of the participants. In separate language sessions, the participant was asked to look at each of 16 cue words and then to state a concrete memory. After stating the memory, participants reported whether their memories were in the same language as the experiment session or in a different one. Memories were classified as ‘Crossovers’ (CO) or ‘Same Language’ (SL) according to participants' self-reports. Participants were also required to elaborate on the setting, the interlocutors, and other languages involved in the specific memory. Beyond replicating the cuing-technique procedure, one memory from a specific lifespan period was chosen per participant, and the participant was asked to provide further details about it. For these more detailed memories, a CS count was conducted. Both bilingual groups confirmed the reminiscence bump phenomenon, retrieving more memories from the 10-30 age period. CO memories prevailed in second-language (L2) sessions, while same-language memories were more abundant in first-language (L1) sessions. Higher CS frequency was found in L2 sessions. Finally, as predicted, ‘individual’ CS was prevalent in L2 sessions, but ‘community-based’ CS was not higher in L1 sessions. The two bilingual measures in this study, crossovers and CS, come from different research traditions: the former from an experimental paradigm in the psychology of autobiographical memory based on self-reported judgments, the latter a behavioral measure from linguistics. This merger of approaches offers new insight into the field of bilingual autobiographical memory. In addition, the study attempts to shed light on the motivations for CS, beginning with Walters’ SPPL model and concluding with a distinction between ‘community-based’ and individual motivations.

Keywords: Autobiographical memory, code-switching, crossover memories, reminiscence bump.

26 Language Politics and Identity in Translation: From a Monolingual Text to Multilingual Text in Chinese Translations

Authors: Chu-Ching Hsu

Abstract:

This paper focuses on how government-led language policies and political changes in Taiwan manipulate the choice of languages in translations, and on the translation strategies employed by translators to show their language ideology amid power struggles and decision-making. Framed by Lefevere’s theoretical concept of translating as rewriting, and carried out as a diachronic, chronological study, this paper investigates the language ideology and translator’s idiolect in Chinese translations of Anglo-American novels. The examples drawn on to explore these issues were taken from different Chinese renditions of Mark Twain’s English-language novel The Adventures of Huckleberry Finn, in which several dialogues are originally written in the colloquial language and dialect of the American state of Mississippi, as reproduced throughout Mark Twain’s works. Adopting corpus methodology, many examples are extracted from the translated texts and the source text to illuminate how translators in Taiwan deal with the dialectal features encoded in Twain’s works, and how different versions of the Chinese translations were used by Taiwanese translators to conform to language policies and to express their language identity textually in different periods over the past five decades, from the 1960s onward. The findings of this study suggest that the use of Taiwanese dialect and language patterns in translations is related to the mother-tongue movement and the language ideology of the translator, as well as to the issue of language identity raised on the island of Taiwan. Furthermore, this study confirms that the changes of political power in Taiwan have had a significant impact on language policy (assimilationism, pluralism, or multiculturalism), which has also turned Taiwan from a monolingual into a multilingual society, where language ideology and identity are revealed not only in people’s daily communication but also in written translations.

Keywords: Language politics and policies, literary translation, mother-tongue, multiculturalism, translator’s ideology.

25 Improving Subjective Bias Detection Using Bidirectional Encoder Representations from Transformers and Bidirectional Long Short-Term Memory

Authors: Ebipatei Victoria Tunyan, T. A. Cao, Cheol Young Ock

Abstract:

Detecting subjectively biased statements is a vital task. This kind of bias, when present in text or other information dissemination media such as news, social media, scientific texts, and encyclopedias, can weaken trust in the information and stir conflict among consumers. Subjective bias detection is also critical for many Natural Language Processing (NLP) tasks like sentiment analysis, opinion identification, and bias neutralization. Having a system that can adequately detect subjectivity in text would boost research in the above-mentioned areas significantly. It can also come in handy for platforms like Wikipedia, where the use of neutral language is important. The goal of this work is to identify subjectively biased language in text at the sentence level. With machine learning we can tackle complex AI problems, making it a good fit for the problem of subjective bias detection. A key step in this approach is to train a classifier based on BERT (Bidirectional Encoder Representations from Transformers) as the upstream model. BERT by itself can be used as a classifier; however, in this study, we use BERT as a data preprocessor and an embedding generator for a Bi-LSTM (Bidirectional Long Short-Term Memory) network incorporating an attention mechanism. This approach produces a deeper and better classifier. We evaluate the effectiveness of our model on the Wiki Neutrality Corpus (WNC), a benchmark dataset compiled from Wikipedia edits that removed various biased instances from sentences, and we compare our model to existing approaches on it. Experimental analysis indicates improved performance, as our model achieved state-of-the-art accuracy in detecting subjective bias. This study focuses on the English language, but the model can be fine-tuned to accommodate other languages.
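
A minimal sketch of the described architecture follows: BERT as an embedding generator feeding a BiLSTM with attention. The layer sizes, the frozen-BERT choice, and all training details are our assumptions; the paper's exact configuration may differ.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BiasClassifier(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.lstm = nn.LSTM(768, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # additive attention scores
        self.out = nn.Linear(2 * hidden, 2)    # subjective vs. neutral

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():                  # BERT used as embedding generator
            emb = self.bert(input_ids=input_ids,
                            attention_mask=attention_mask).last_hidden_state
        h, _ = self.lstm(emb)                  # (batch, seq, 2*hidden)
        scores = self.attn(h).squeeze(-1)      # (batch, seq)
        scores = scores.masked_fill(attention_mask == 0, -1e9)
        alpha = torch.softmax(scores, dim=-1)  # attention weights per token
        context = (alpha.unsqueeze(-1) * h).sum(dim=1)
        return self.out(context)               # logits

tok = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tok(["This is arguably the greatest film ever made."],
          return_tensors="pt", padding=True)
model = BiasClassifier()
print(model(enc["input_ids"], enc["attention_mask"]).shape)  # torch.Size([1, 2])
```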

Keywords: Subjective bias detection, machine learning, BERT–BiLSTM–Attention, text classification, natural language processing.

24 A Paradigm Shift towards Personalized and Scalable Product Development and Lifecycle Management Systems in the Aerospace Industry

Authors: David E. Culler, Noah D. Anderson

Abstract:

Integrated systems for product design, manufacturing, and lifecycle management are difficult to implement and customize. Commercial software vendors, including CAD/CAM and third-party PDM/PLM developers, create user interfaces and functionality that allow their products to be applied across many industries. The result is that systems become overloaded with functionality, difficult to navigate, and full of terminology unfamiliar to engineers and production personnel. For example, manufacturers of automotive, aeronautical, electronics, and household products use similar but distinct methods and processes. Furthermore, each company tends to have its own preferred tools and programs for controlling work and information flow and for connecting design, planning, and manufacturing processes to business applications. This paper presents a methodology and a case study that address these issues and suggests that in the future more companies will develop personalized applications that fit the natural way their business operates. A functioning system has been implemented at a highly competitive U.S. aerospace tooling and component supplier that works with many prominent aircraft manufacturers around the world, including The Boeing Company, Airbus, Embraer, and Bombardier Aerospace. During the last three years, the program has produced significant benefits, such as the automatic creation and management of component and assembly designs (parametric models and drawings), the extensive use of lightweight 3D data, and changes to the way projects are executed from beginning to end. CATIA (CAD/CAE/CAM) and a variety of programs developed in C#, VB.Net, HTML, and SQL make up the current system. The web-based platform is facilitating collaborative work across multiple sites around the world and improving communications with customers and suppliers. This work demonstrates that the creative use of Application Programming Interface (API) utilities, libraries, and methods is a key to automating many time-consuming tasks and linking applications together.

Keywords: CAD/CAM, CAPP, PDM, PLM, Scalable Systems.

23 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms

Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano

Abstract:

In this article, we deal with a variant of the classical course timetabling problem that has practical application in many areas of education. In particular, we are interested in high school remedial courses, whose purpose is to provide under-prepared students with the skills necessary to succeed in their studies. A student might be under-prepared in an entire course or only in part of it. The limited availability of funds, as well as the limited time and teachers at their disposal, often requires schools to choose which courses and/or which teaching units to activate. Thus, schools need to model the training offer and the related timetabling with the goal of ensuring the highest possible teaching quality while meeting the above-mentioned financial, time, and resource constraints. Moreover, some prerequisites between the teaching units must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the presence of many peculiar constraints inevitably increases the complexity of the mathematical model; solving it with a general-purpose solver is practical for small instances only, while solving real-life-sized instances requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach in which we make use of a fast constructive procedure to obtain a feasible solution. To assess our exact and heuristic approaches, we perform extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality, with an average optimality gap of 57%. On the other hand, the heuristic algorithm is much faster (in about 50% of the considered instances it converges in approximately half of the time limit) and in many cases achieves an improvement over the objective function value obtained by the MIP model. This improvement ranges between 18% and 66%.
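
The heuristic itself is not specified in the abstract; one plausible shape for a fast constructive procedure, greedily activating teaching units by quality-per-cost while respecting the budget and prerequisites, is sketched below. The data and the greedy rule are our illustrative assumptions, not the authors' algorithm.

```python
# Each teaching unit: name -> (cost, quality_gain, prerequisites).
UNITS = {
    "alg1": (3, 8, []),
    "alg2": (3, 6, ["alg1"]),
    "geom": (2, 5, []),
    "trig": (2, 4, ["geom"]),
}
BUDGET = 7

def greedy_offer(units, budget):
    chosen, spent = [], 0
    # Repeatedly take the feasible unit with the best quality/cost ratio.
    while True:
        feasible = [
            (gain / cost, name, cost)
            for name, (cost, gain, prereq) in units.items()
            if name not in chosen
            and spent + cost <= budget
            and all(p in chosen for p in prereq)   # prerequisites satisfied
        ]
        if not feasible:
            return chosen, spent
        _, name, cost = max(feasible)
        chosen.append(name)
        spent += cost

offer, spent = greedy_offer(UNITS, BUDGET)
print(f"activated units: {offer} (cost {spent}/{BUDGET})")
```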

Keywords: Heuristic, MIP model, Remedial course, School, Timetabling.

22 Oscillation Effect of the Multi-stage Learning for the Layered Neural Networks and Its Analysis

Authors: Isao Taguchi, Yasuo Sugai

Abstract:

This paper proposes an efficient learning method for layered neural networks based on the selection of training data and the input characteristics of an output layer unit. Compared to more recent neural networks, such as pulse neural networks and quantum neuro-computation, the multilayer network is widely used due to its simple structure. When the learning targets are complicated, problems such as unsuccessful learning or the significant time required for learning remain unsolved. Focusing on the input data during the learning stage, we undertook an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, for complicated problems, the input characteristics of an output layer unit oscillate during the learning process. The multi-stage learning method proposed by the authors for function approximation problems classifies the learning data in a phased manner, according to their learnability, prior to learning in the multilayered neural network; this paper demonstrates the validity of the multi-stage learning method. Specifically, it verifies through computer experiments that both learning accuracy and learning time improve when backpropagation (BP) is used as the learning rule within the multi-stage learning method. Since the oscillatory phenomena of the learning curve play an important role in learning performance, the authors also discuss the mechanisms by which these oscillations arise during learning. Furthermore, the authors discuss why the errors for some data remain large even after learning, based on observations of behavior during learning.
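
The staging idea can be illustrated with a toy backpropagation example: pre-train briefly, keep the samples learned easily, train on them, then reintroduce the hard ones. The network size, thresholds, and easiness criterion are our assumptions; the authors' method is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X) + 0.05 * rng.standard_normal((200, 1))  # target function

# One-hidden-layer MLP trained by plain backpropagation.
W1, b1 = rng.standard_normal((1, 16)) * 0.5, np.zeros(16)
W2, b2 = rng.standard_normal((16, 1)) * 0.5, np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def train(Xs, ys, epochs=2000, lr=0.05):
    global W1, b1, W2, b2
    for _ in range(epochs):
        h, out = forward(Xs)
        g = (out - ys) / len(Xs)                  # dLoss/dout (MSE)
        W2 -= lr * h.T @ g; b2 -= lr * g.sum(0)
        gh = (g @ W2.T) * (1 - h ** 2)            # backprop through tanh
        W1 -= lr * Xs.T @ gh; b1 -= lr * gh.sum(0)

# Stage 1: short pre-run, then keep only the samples learned easily.
train(X, y, epochs=500)
_, out = forward(X)
easy = (np.abs(out - y) < np.median(np.abs(out - y))).ravel()
train(X[easy], y[easy])

# Stage 2: reintroduce the full data set, including the hard samples.
train(X, y)
_, out = forward(X)
print(f"final RMSE: {np.sqrt(np.mean((out - y) ** 2)):.4f}")
```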

Keywords: data selection, function approximation problem, multi-stage learning, neural network, voluntary oscillation.

21 A Modern Review of the Spintronic Technology: Fundamentals, Materials, Devices, Circuits, Challenges, and Current Research Trends

Authors: Muhibul Haque Bhuyan

Abstract:

Spintronics, also termed spin electronics or spin transport electronics, is a new kind of technology that exploits two fundamental degrees of freedom of the electron, its spin state and its charge state, to enhance the operational speed of data storage and the transfer efficiency of devices. It thus seems an encouraging technology for combating most of the prevailing complications in orthodox electron-based devices. This novel technology has the capacity to merge the functionalities of semiconductor microelectronics and magnetic devices into one integrated circuit. Traditional semiconductor microelectronic devices use only the electronic charge to process information based on the binary digits 0 and 1. Due to the incessant shrinking of transistor size, we are reaching the final limit of 1 nm or so. At that scale, fabrication and other device operational processes become challenging as quantum effects come into play. In this situation, we need an alternative future technology, and spintronics may be such a technology for transferring and storing information. This review article provides a detailed discussion of spintronic technology: fundamentals, materials, devices, circuits, challenges, and current research trends. First, the fundamentals of spintronic technology are discussed. Then the types, properties, and other issues of spintronic materials are presented. After that, the fabrication and working principles, as well as the application areas and the advantages/disadvantages of spintronic devices and circuits, are explained. Finally, the current challenges, current research areas, and prospects of spintronic technology are highlighted. This is a new paradigm of electronic-cum-magnetic devices built on both the charge and the spin of the electron. Modern engineering and technological advances in the search for new materials for this technology give us hope that it will prove a very promising technology in the upcoming days.

Keywords: Spintronic technology, spin, charge, magnetic devices, spintronic devices, spintronic materials.

20 The Representation of Female Characters by Women Directors in Surveillance Spaces in Turkish Cinema

Authors: Berceste Gülçin Özdemir

Abstract:

The representation of women characters in cinema has been discussed for decades. In a cinema where dominant narrative codes prevail and scopophilic gazes are directed at women characters, passive stereotypes of women are observed in their representation. In Turkish Cinema, the fact that women characters are discussed on the basis of feminist film theories, both in films shot from a woman’s point of view and in films outside the mainstream that tell the stories of women characters, triggers the question: ‘Are feminist films produced in Turkish Cinema?’ The spaces used in the representation of women characters are observed to convert the characters into passive subjects within the narrative. Representing women characters in spaces of surveillance encloses the characters and compresses them within these spaces. In this study, narrative analysis was used to investigate the representation of women characters in surveillance spaces. As a framework, case study films were first selected; then, the representations of women characters in surveillance spaces were examined through narrative analysis using feminist film theories. Two questions are discussed in light of feminist film theories: ‘Why do women directors in particular present their female characters to viewers in surveillance spaces?’ and ‘Can this type of presentation contribute to feminist film practice and become important with regard to feminist film theories?’ The passive, observed representation of women characters in the narrative’s surveillance spaces also prompts a questioning of the discourses of films outside the mainstream. As films that produce alternative discourses and reveal different cinematic languages, those outside the mainstream are expected to bring other points of view to the representation of women characters in spaces as well. Taking these questionings as a baseline, Turkish films directed by women, such as Watch Tower and Mustang, were examined. This examination paves the way for discussions of women characters in surveillance spaces, and the outcomes can be debated from the viewpoint of representation in the genre through feminist film theories. In the context of feminist film theories and feminist film practice, alternatives should be found that can corporeally reveal the existence of women, both in the representation of women characters in spaces and in the use of the space factor.

Keywords: Feminist film theory, representation, space, women filmmaker, women characters.

19 Validity of Universe Structure Conception as Nested Vortexes

Authors: Khaled M. Nabil

Abstract:

This paper introduces the nested vortexes conception of the structure of the universe and interprets the physical phenomena according to this conception. The paper first reviews recent physics theories, on both the microscopic and the macroscopic scale, to collect evidence that space is not empty. However, these theories describe the properties of the space medium without determining its structure, and determining the structure of the space medium is essential to understanding the mechanism that leads to its properties. Without determining the structure of the space medium, many phenomena, such as electric and magnetic fields, gravity, and wave-particle duality, remain uninterpreted. Thus, this paper introduces a conception of the structure of the universe. It assumes that the universe is a medium of ultra-tiny homogeneous particles which are still undiscovered. As in any medium with certain movements, vortexes have occurred, possibly because of a great asymmetric explosion. A vortex condenses the ultra-tiny particles in its center, forming a bigger particle; the bigger particles, in turn, can be trapped in a bigger vortex and condense in its center, forming a much bigger particle, and so on. This conception describes galaxies, stars, and protons as particles at different levels. The existence of the particles’ vortexes would make the postulate of the constancy of the speed of light untrue. This conception holds that vortex dynamics agree with the motion of the universe’s particles at every level. An experiment has been carried out to detect the orbiting effect of the aggregated vortexes of the aligned atoms of a permanent magnet. Based on the described particle structure, the gravity force of a particle, the attraction between particles, charge, electric and magnetic fields, and quantum-mechanical characteristics are interpreted. All the aforementioned physical phenomena are thereby addressed.

Keywords: Astrophysics, cosmology, particles’ structure model, particles’ forces, vortex dynamics.

18 International E-Learning for Assuring Ergonomic Working Conditions of Orthopaedic Surgeons: First Research Outcomes from Train4OrthoMIS

Authors: J. Bartnicka, J. A. Piedrabuena, R. Portilla, L. Moyano - Cuevas, J. B. Pagador, P. Augat, J. Tokarczyk, F. M. Sánchez Margallo

Abstract:

Orthopaedic surgeries are characterized by a high degree of complexity. This is reflected in four main groups of resources: 1) the surgical team, which consists of people with different competencies, educational backgrounds, and positions; 2) information and knowledge about the medical and technical aspects of surgery; 3) medical equipment, including surgical tools and materials; and 4) spatial infrastructure, which is important from the point of view of operating room layout. All these components must be integrated into a homogeneous organism to achieve an efficient and ergonomically correct surgical workflow. Against this background, the concept of an international project was formulated, called “Online Vocational Training course on ergonomics for orthopaedic Minimally Invasive Surgery” (Train4OrthoMIS), whose aim is to develop an e-learning tool available in four languages (English, Spanish, Polish, and German). This article presents the first project research outcomes, focused on three aspects: 1) the ergonomic needs of surgeons working in hospitals across different European countries, 2) the concept and structure of the e-learning course, and 3) the definition of tools and methods for knowledge assessment adjusted to users’ expectations. The methodology was based on expert panels and two types of surveys: 1) on training needs, and 2) on evaluation and self-assessment preferences. The major findings of the study made it possible to describe the subjects of four training modules and learning sessions. Based on respondents’ opinions, the most expected assessment methods were identified: single-choice tests, followed by “True or False” and “Link elements” quizzes. The first project outcomes confirmed the necessity of creating a universal training tool for orthopaedic surgeons regardless of the country in which they work. Because of the limited time surgeons have, the e-learning course should be strictly adjusted to their expectations in order to be useful.

Keywords: International e-learning, ergonomics, orthopaedic surgery, Train4OrthoMIS.

17 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction

Authors: Talal Alsulaiman, Khaldoun Khashanah

Abstract:

In this paper, we provide a literature survey on artificial stock markets (ASMs). The paper begins by exploring the complexity of the stock market and the need for ASMs. An ASM aims to investigate the link between individual behaviors (the micro level) and financial market dynamics (the macro level). The variety of patterns at the macro level is a function of the ASM’s complexity. The financial market is a complex system in which the relationship between the micro and macro levels cannot be captured analytically. Computational approaches, such as simulation, are expected to capture this connection, and agent-based simulation is the simulation technique commonly used to build ASMs. The paper proceeds by discussing the components of an ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-aversion assumption in the construction of agents’ attributes. The influence of social networks on the development of agent interactions is also addressed; network topologies such as small-world, distance-based, and scale-free networks may be used to outline economic collaborations. In addition, the primary methods for developing agents’ learning and adaptive abilities are summarized. These incorporate approaches such as genetic algorithms, genetic programming, artificial neural networks, and reinforcement learning. The most common statistical properties of stocks (the stylized facts), which are used for the calibration and validation of ASMs, are also discussed. We then review the major related previous studies and categorize the approaches they utilize. Finally, research directions and potential research questions are discussed: research on ASMs may focus on the macro level, by analyzing market dynamics, or on the micro level, by investigating the wealth distributions of the agents.
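
As a flavor of the surveyed model class, here is a minimal agent-based market with two stylized trader types and a price-impact rule, checked against one stylized fact. All parameters are illustrative assumptions, not drawn from any specific study.

```python
import numpy as np

rng = np.random.default_rng(42)
T, n_fund, n_chart = 500, 50, 50
fundamental = 100.0
prices = [100.0]

for t in range(T):
    p = prices[-1]
    # Fundamentalists buy below fundamental value, sell above it.
    d_fund = n_fund * 0.05 * (fundamental - p)
    # Chartists extrapolate the most recent price change.
    trend = p - prices[-2] if len(prices) > 1 else 0.0
    d_chart = n_chart * 0.04 * trend
    noise = rng.standard_normal()
    # Price impact: aggregate excess demand moves the price.
    prices.append(p + 0.01 * (d_fund + d_chart) + 0.1 * noise)

returns = np.diff(np.log(prices))
# One stylized-fact check: excess kurtosis of returns (fat tails).
kurt = np.mean((returns - returns.mean()) ** 4) / returns.var() ** 2 - 3
print(f"excess kurtosis of simulated returns: {kurt:.2f}")
```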

Keywords: Artificial stock markets, agent based simulation, bounded rationality, behavioral finance, artificial neural network, interaction, scale-free networks.

16 A Unified Approach to Thermodynamics of Power Yield in Thermal, Chemical and Electrochemical Systems

Authors: S. Sieniutycz

Abstract:

This paper unifies power optimization approaches in various energy converters, such as thermal, solar, chemical, and electrochemical engines, in particular fuel cells. Thermodynamics leads to the converter's efficiency and limiting power. Efficiency equations serve to solve problems of upgrading and downgrading of resources. While the optimization of steady systems applies differential calculus and Lagrange multipliers, dynamic optimization involves variational calculus and dynamic programming. In reacting systems, chemical affinity constitutes a prevailing component of the overall efficiency; thus, power is analyzed in terms of the active part of the chemical affinity. The main novelty of the present paper in the energy yield context consists in showing that the generalized heat flux Q (involving the traditional heat flux q plus the product of temperature and the sum of the products of partial entropies and species fluxes) plays, in complex cases (solar, chemical, and electrochemical), the same role as the traditional heat q in pure heat engines. The presented methodology is also applied to power limits in fuel cells, as systems that are electrochemical flow engines propelled by chemical reactions. The performance of fuel cells is determined by the magnitudes and directions of the participating streams and by the mechanism of electric current generation. The lowering of the voltage below the reversible voltage is a proper measure of a cell's imperfection. The voltage losses, called polarization, include contributions from three main sources: activation, ohmic, and concentration. Examples show power maxima in fuel cells and prove the relevance of extending the thermal machine theory to chemical and electrochemical systems. The main novelty of the present paper in the fuel cell context consists in introducing an effective or reduced Gibbs free energy change between products p and reactants s, which takes into account the decrease in voltage and power caused by the incomplete conversion of the overall reaction.
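
In the notation the abstract suggests (symbols are ours where it leaves them implicit), the two central relations are the generalized heat flux and the polarization-reduced cell voltage:

```latex
% Generalized heat flux: traditional heat flux q plus the entropy carried by
% species fluxes (s_k: partial entropies, J_k: species fluxes, T: temperature).
\[ Q = q + T \sum_{k} s_{k} J_{k} \]
% Cell voltage lowered below the reversible value by the three polarization
% losses named in the abstract; electrical power then follows as P = VI.
\[ V = E_{\mathrm{rev}} - \eta_{\mathrm{act}} - \eta_{\mathrm{ohm}} - \eta_{\mathrm{conc}},
   \qquad P = V I \]
```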

Keywords: Power yield, entropy production, chemical engines, fuel cells, exergy.

15 Critical Success Factors Influencing Construction Project Performance for Different Objectives: Procurement Phase

Authors: Samart Homthong, Wutthipong Moungnoi

Abstract:

Critical success factors (CSFs) and the criteria used to measure project success have received much attention over the decades and are among the most widely researched topics in the context of project management. However, although the subject has been studied extensively by different researchers, to date there has been little agreement on the CSFs. The aim of this study is to identify the CSFs that influence the performance of construction projects and to determine their relative importance for different objectives across five stages of the project life cycle. An extensive literature review was conducted, resulting in the identification of 179 individual factors, which were then grouped into nine major categories. A questionnaire survey was used to collect data from three groups of respondents: client representatives, consultants, and contractors. Of 164 questionnaires distributed, 93 were returned, yielding a response rate of 56.7%. Using the mean score, the relative importance index, and the weighted average method, the top 10 critical factors for each category were identified. The agreement of survey respondents on the categorised factors was analysed using Spearman’s rank correlation, and a one-way analysis of variance was then performed to determine whether the mean scores among the various groups of respondents differed significantly. The findings indicate that the most critical factors in each category in the procurement phase are: proper procurement programming of materials (time), stability in the price of materials (cost), and determining quality in the construction (quality). They are followed by safety equipment acquisition and maintenance (health and safety), budget allowance in the contractual arrangement for implementing environmental management activities (environment), completeness of drawing documents (productivity), accurate measurement and pricing of the bill of quantities (risk management), adequate communication among the project team (human resources), and adequate cost control measures (client satisfaction). An understanding of CSFs would help all interested parties in the construction industry to improve project performance, and the results of this study would help construction professionals and practitioners take proactive measures for effective project management.
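
Two of the ranking computations named here, the relative importance index (RII = ΣW/(A·N), with A the highest Likert weight and N the number of respondents) and Spearman's rank correlation between respondent groups, can be sketched as follows; the ratings below are made-up.

```python
import numpy as np
from scipy.stats import spearmanr

factors = ["procurement programming", "material price stability",
           "drawing completeness", "cost control measures"]
# Rows: factors; columns: respondents' 1-5 importance ratings.
clients = np.array([[5, 4, 5, 4], [4, 4, 5, 5], [3, 4, 4, 3], [4, 3, 4, 4]])
contractors = np.array([[4, 5, 4, 4], [5, 5, 4, 5], [3, 3, 4, 4], [3, 4, 3, 4]])

def rii(ratings, a=5):
    # RII = sum of weights / (highest weight * number of respondents).
    return ratings.sum(axis=1) / (a * ratings.shape[1])

rii_cl, rii_co = rii(clients), rii(contractors)
for f, r1, r2 in zip(factors, rii_cl, rii_co):
    print(f"{f:28s} RII(clients)={r1:.2f} RII(contractors)={r2:.2f}")

# Agreement between the two groups' factor rankings.
rho, pval = spearmanr(rii_cl, rii_co)
print(f"Spearman rho = {rho:.2f} (p = {pval:.2f})")
```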

Keywords: Critical success factors, procurement phase, project life cycle, project performance.

14 Collaborative and Experimental Cultures in Virtual Reality Journalism: From the Perspective of Content Creators

Authors: Radwa Mabrook

Abstract:

Virtual Reality (VR) content creation is a complex and expensive process that requires multi-disciplinary teams of content creators. Grant schemes from technology companies help media organisations explore the potential of VR in journalism and factual storytelling. Media organisations try to do as much as they can in-house, but they may outsource due to time constraints and skill availability. Journalists, game developers, sound designers, and creative artists work together and bring in new cultures of work. This study explores the collaborative, experimental nature of VR content creation by tracing every actor involved in the process and examining their perceptions of VR work. The study builds on Actor Network Theory (ANT), which decomposes phenomena into their basic elements and traces the interrelations among them. The researcher conducted 22 semi-structured interviews with VR content creators between November 2017 and April 2018. Purposive and snowball sampling techniques allowed the researcher to recruit fact-based VR content creators from production studios and media organisations, as well as freelancers. Interviews lasted up to three hours and were a mix of Skype calls and in-person meetings. Participants consented to their interviews being recorded and to their names being revealed in the study. The researcher coded the interview transcripts in NVivo software, looking for key themes corresponding to the research questions. The study revealed that VR content creators must be adaptive to change, open to learning, and comfortable with mistakes. The VR content creation process is highly iterative because VR has no established workflow or visual grammar. Members of multi-disciplinary VR teams often speak different professional languages, making communication difficult. However, adaptive content creators perceive VR work as a fun experience and an opportunity to learn. The traditional sense of competition and the striving for information exclusivity are now replaced by a strong drive for knowledge sharing. VR content creators are open to sharing their methods of work and their experiences. They aim to build a collaborative network that harnesses VR technology for journalism and factual storytelling. Indeed, VR is instilling collaborative and experimental cultures in journalism.

Keywords: Collaborative culture, content creation, experimental culture, virtual reality.

13 On the Need to Have an Additional Methodology for Psychological Product Measurement and Evaluation

Authors: Corneliu Sofronie, Roxana Zubcov

Abstract:

Cognitive science emerged about 40 years ago, in response to the challenge of artificial intelligence, as common territory for several scientific disciplines: information technology, mathematics, psychology, neurology, philosophy, sociology, and linguistics. The newborn science was justified, on the one hand, by the complexity of the problems related to human knowledge and, on the other, by the fact that none of the above-mentioned sciences could explain mental phenomena alone. Based on data supplied by experimental sciences such as psychology and neurology, cognitive science builds models of the operation of the human mind. These models are implemented in computer programs and/or electronic circuits (specific to artificial intelligence) – cognitive systems – whose competences and performances are compared to human ones, leading to the reinterpretation of psychological and neurological data and to the construction of new models. In these processes, psychology provides the experimental basis, while philosophy and mathematics provide the level of abstraction necessary for the interplay of the sciences involved. The general problematic of the cognitive approach yields two important types of approach: the computational one, starting from the idea that mental phenomena can be reduced to calculus operations on 1s and 0s, and the connectionist one, which considers the products of thinking to be the result of interaction among all the component systems. In psychology, measurements in the computational register use classical questionnaires and psychometric tests, generally based on calculus methods. Considering both sides of cognitive science, we notice a gap in the possibilities for measuring psychological products from the connectionist perspective, which requires a unitary understanding of the quality-quantity whole. In such an approach, measurement by calculus proves inefficient. Our research, conducted over more than 20 years, leads to the conclusion that measurement by forms properly fits the laws and principles of connectionism.

Keywords: Complementary methodology, connection approach, networks without scaling, quantum psychology.

12 Single Ion Transport with a Single-Layer Graphene Nanopore

Authors: Vishal V. R. Nandigana, Mohammad Heiranian, Narayana R. Aluru

Abstract:

Graphene has found tremendous applications in water desalination, DNA sequencing, and energy storage. Multiple nanopores are etched to create openings for water desalination and energy storage applications. The nanopores created are of the order of 3-5 nm, allowing multiple ions to transport through the pore. In this paper, we present, for the first time, a molecular dynamics study of single ion transport, where only one ion passes through the graphene nanopore. The diameter of the graphene nanopore is of the same order as the hydration layers formed around each ion. A phenomenon analogous to single-electron transport is observed for the first time in ionic transport. The current-voltage characteristics of such a device are similar to those of single-electron transport in quantum dots. The current is blocked below a critical voltage, as the ions are trapped inside a hydration shell. The trapped ions face an energy barrier that is high compared to the applied electrical voltage, preventing the ion from breaking free of the hydration shell. This region is called the "Coulomb blockade region". In this region, we observe zero transport of ions inside the nanopore. However, when the electrical voltage is beyond the critical voltage, the ion has sufficient energy to overcome the energy barrier created by the hydration shell and enter the pore. Thus, the input voltage can control the transport of the ion inside the nanopore. The device therefore acts as a binary storage unit, storing 0 when no ion passes through the pore and 1 when a single ion passes through the pore. We therefore postulate that the device can be used for fluidic computing applications in chemistry and biology, mimicking a computer. Furthermore, the trapped ion stores a finite charge in the Coulomb blockade region; hence the device also acts as a supercapacitor.
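A minimal Python sketch of the reported current-voltage behaviour: ionic current is blocked in the Coulomb blockade region below a critical voltage and flows once the applied voltage frees the ion from its hydration shell, which is also how the binary read-out would work. The critical voltage and conductance values are illustrative assumptions, not results from the molecular dynamics simulations.

```python
# Piecewise I-V model of Coulomb-blockade-like single ion transport:
# zero current below the critical voltage, roughly linear conduction above it.
import numpy as np

V_c = 0.5   # critical voltage to strip the hydration shell, V -- assumed
G = 2.0     # effective conductance above threshold, nA/V -- assumed

def ionic_current(v):
    """Blocked for |v| < V_c, linear in the excess voltage beyond V_c."""
    v = np.asarray(v, dtype=float)
    return np.where(np.abs(v) < V_c, 0.0, G * (v - np.sign(v) * V_c))

for v in (0.2, 0.4, 0.6, 1.0):
    cur = float(ionic_current(v))
    bit = 1 if cur > 0 else 0       # binary storage reading: ion passed or not
    print(f"V = {v:.1f} V -> I = {cur:.2f} nA, stored bit = {bit}")
```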

Keywords: Graphene, single ion transport, Coulomb blockade, fluidic computer, supercapacitor.

11 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery

Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene

Abstract:

Data flows and the purposes of reporting data differ according to business needs. Different parameters are reported and transferred regularly during freight delivery. These business practices form the dataset constructed for each time point, containing all the information required for freight-moving decisions. As a significant amount of these data is used for various purposes, an integrated methodological approach must be developed in response. The proposed methodology contains several steps: (1) collecting context data sets and validating the data; (2) multi-objective analysis for optimising freight transfer services. For data validation, the study involves Grubbs outlier analysis, particularly for data cleaning and for identifying the statistical significance of data-reporting event cases. The Grubbs test is often used because it tests one extreme value at a time against the boundaries of the standard normal distribution. In the study area, the test has not been widely applied by other authors, except where the Grubbs test for outlier detection was used to identify outliers in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select those forms of genetic algorithm construction that offer greater possibilities of extracting the best solution. For freight delivery management, genetic algorithm schemes are used as a more effective technique; accordingly, an adaptive genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimise the data evaluation and select the appropriate transport corridor. The authors suggest a methodology for multi-objective analysis which evaluates the collected context data sets and uses this evaluation to determine a delivery corridor for freight transfer services in the multi-modal transportation network. In the multi-objective analysis, the authors include safety components, the number of accidents per year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value for the management of multi-modal transportation processes.
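For the data-validation step, the following is a minimal Python sketch of a two-sided Grubbs test at the 99% confidence level, of the kind the abstract describes for flagging suspect entries such as fuel consumption reports. The sample values are hypothetical, not the authors' data.

```python
# Grubbs test for a single outlier: compare the most extreme value's deviation
# (in standard deviations) with the critical value derived from Student's t.
import numpy as np
from scipy import stats

def grubbs_outlier(x, alpha=0.01):
    """Return (index, is_outlier) for the value farthest from the mean."""
    x = np.asarray(x, dtype=float)
    n = x.size
    mean, sd = x.mean(), x.std(ddof=1)
    idx = int(np.argmax(np.abs(x - mean)))
    g = abs(x[idx] - mean) / sd                   # Grubbs statistic
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)   # two-sided critical t value
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
    return idx, g > g_crit

# Hypothetical fuel consumption reports (litres/100 km) with one suspect value.
fuel = [31.2, 30.8, 32.1, 31.5, 30.9, 45.0, 31.7, 31.1]
i, flagged = grubbs_outlier(fuel, alpha=0.01)    # alpha=0.01 -> 99% confidence
print(f"most extreme value {fuel[i]} at index {i}; outlier at 99%: {flagged}")
```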

Keywords: Multi-objective decision support, analysis, data validation, freight delivery, multi-modal transportation, genetic programming methods.
