Search results for: time workflow network
18805 Estimation of Time Loss and Costs of Traffic Congestion: The Contingent Valuation Method
Authors: Amira Mabrouk, Chokri Abdennadher
Abstract:
The reduction of road congestion inherent to vehicle use is an obvious priority for public authorities. Assessing an individual's willingness to pay to save trip time is therefore akin to estimating the price change that results from introducing a new transport policy to increase network fluidity and improve social welfare. This study holds an innovative perspective: it initiates an economic calculation aimed at estimating the monetized value of time spent on trips made in Sfax. The research is founded on a multi-objective approach that aims to i) estimate the monetized value of an hour dedicated to trips, ii) determine whether or not the consumer considers the environmental variables to be significant, and iii) analyze the impact of public management of congestion by imposing city tolls on urban dwellers. This article is built upon a rich field survey conducted in the city of Sfax. Using the contingent valuation method, we analyze the “declared time preferences” of 450 drivers during rush hours. Giving due consideration to the biases attributed to the applied method, we bring to light the sensitivity of this approach to the revelation mode and the interrogation techniques, following the NOAA panel recommendations, with the exception of the valuation point, and drawing on similar studies on the estimation of transportation externalities.
Keywords: willingness to pay, contingent valuation, time value, city toll
Procedia PDF Downloads 434
18804 Computer Aided Analysis of Breast Based Diagnostic Problems from Mammograms Using Image Processing and Deep Learning Methods
Authors: Ali Berkan Ural
Abstract:
This paper presents the analysis, evaluation, and pre-diagnosis of early-stage breast-based diagnostic problems (breast cancer, nodules or lumps) by a Computer Aided Diagnosis (CAD) system from mammogram radiological images. According to the statistics, the time factor is crucial for discovering the disease in the patient (especially in women) as early and as quickly as possible. In this study, a new algorithm is developed using advanced image processing and deep learning methods to detect and classify the problem at an early stage with greater accuracy. The system first applies image processing methods (image acquisition, noise removal, region growing segmentation, morphological operations, breast border extraction, advanced segmentation, obtaining Regions Of Interest (ROIs), etc.) to segment the area of interest of the breast, and then analyzes these partly obtained areas for cancer/lump detection in order to diagnose the disease. After segmentation, using the spectrogram images, five different deep learning based methods (specified Convolutional Neural Network (CNN) based AlexNet, ResNet50, VGG16, DenseNet, Xception) are applied to classify the breast-based problems.
Keywords: computer aided diagnosis, breast cancer, region growing, segmentation, deep learning
Procedia PDF Downloads 96
18803 Technology in the Calculation of People Health Level: Design of a Computational Tool
Authors: Sara Herrero Jaén, José María Santamaría García, María Lourdes Jiménez Rodríguez, Jorge Luis Gómez González, Adriana Cercas Duque, Alexandra González Aguna
Abstract:
Background: The health concept has evolved throughout history. The health level is determined by the individual's own perception. It is a dynamic process over time, so variations can be observed from one moment to the next. In this way, knowing the health of the patients you care for facilitates decision making in the treatment of care. Objective: To design a technological tool that calculates people's health level sequentially over time. Material and Methods: Deductive methodology through text analysis, extraction and logical formalization of knowledge, and validation with an expert group. Study period: September 2015 to the present. Results: A computational tool for use by health personnel has been designed. It has 11 variables. Each variable can be given a value from 1 to 5, with 1 being the minimum value and 5 the maximum. By adding the results of the 11 variables, we obtain a magnitude at a given time: the health level of the person. The health calculator makes it possible to represent a person's health level at a point in time, establishing temporal cuts that are useful for determining the evolution of the individual over time. Conclusion: Information and Communication Technologies (ICT) enable training and help in various disciplinary areas, and their relevance in the field of health is worth highlighting. Based on the formalization of health, care acts can be directed towards some of the propositional elements of the concept above. The care acts will modify the person's health level. The health calculator allows the prioritization and prediction of different health care strategies in hospital units.
Keywords: calculator, care, eHealth, health
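As a rough illustration of the scoring logic described above, the following Python sketch sums eleven 1-5 ratings into an overall health level; the variable names are illustrative assumptions, not the tool's actual items.

```python
# Minimal sketch of the described health-level calculation:
# eleven variables, each rated 1 (worst) to 5 (best), summed into one score.
# The variable names below are hypothetical, not the tool's real items.

VARIABLES = [
    "mobility", "nutrition", "sleep", "pain", "mood", "cognition",
    "breathing", "elimination", "skin_integrity", "social_support", "self_care",
]

def health_level(ratings: dict[str, int]) -> int:
    """Sum the 11 ratings (1-5 each) into a health level between 11 and 55."""
    if set(ratings) != set(VARIABLES):
        raise ValueError("exactly the 11 defined variables must be rated")
    if any(not 1 <= v <= 5 for v in ratings.values()):
        raise ValueError("each rating must be an integer from 1 to 5")
    return sum(ratings.values())

# Repeated assessments over time give the temporal cuts mentioned in the abstract.
assessment = {name: 4 for name in VARIABLES}
print(health_level(assessment))  # 44
```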
Procedia PDF Downloads 264
18802 Deep Learning Application for Object Image Recognition and Robot Automatic Grasping
Authors: Shiuh-Jer Huang, Chen-Zon Yan, C. K. Huang, Chun-Chien Ting
Abstract:
Since vision system applications for autonomous purposes are intensely required in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in the vision system to recognize industrial objects and is integrated with a 7A6 Series Manipulator for automatic object gripping tasks. A PC and a Graphics Processing Unit (GPU) are chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to extract the image for object recognition and coordinate derivation. The YOLOv2 scheme is adopted as the Convolutional Neural Network (CNN) structure for object classification and center point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object orientation angle. Then, the specified object location and orientation information are sent to the robotic controller. Finally, a six-axis manipulator can grasp the specific object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with confidence near 0.9 and 3D position error less than 0.4 mm. This is useful for future intelligent robotic applications in Industry 4.0 environments.
Keywords: deep learning, image processing, convolution neural network, YOLOv2, 7A6 series manipulator
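A minimal OpenCV sketch of the contour-based orientation step described above might look like the following; the Otsu threshold and the use of the largest contour are illustrative choices, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def object_orientation(roi_bgr: np.ndarray) -> float:
    """Estimate an object's in-plane orientation angle (degrees) from its contour.

    The ROI is assumed to be the crop returned by the detector (e.g. YOLOv2).
    """
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no contour found in ROI")
    largest = max(contours, key=cv2.contourArea)
    (_, _), (_, _), angle = cv2.minAreaRect(largest)  # rotated bounding box angle
    return angle

# The detector's center point plus this angle would then be sent to the robot controller.
```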
Procedia PDF Downloads 250
18801 A Unique Exact Approach to Handle a Time-Delayed State-Space System: The Extraction of Juice Process
Authors: Mohamed T. Faheem Saidahmed, Ahmed M. Attiya Ibrahim, Basma GH. Elkilany
Abstract:
This paper discusses the application of the Time Delay Control (TDC) compensation technique to the juice extraction process in a sugar mill. The objective is to improve the control performance of the process and increase extraction efficiency. The paper presents the mathematical model of the juice extraction process and the design of the TDC compensation controller. Simulation results show that the TDC compensation technique can effectively suppress the time delay effect in the process and improve control performance. The extraction efficiency is also significantly increased with the application of the TDC compensation technique. The proposed approach provides a practical solution for improving the juice extraction process in sugar mills using MATLAB.
Keywords: time delay control (TDC), exact and unique state space model, delay compensation, Smith predictor
Procedia PDF Downloads 92
18800 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone
Abstract:
Seizures are the main factor affecting the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is performed manually by epileptologists, and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm, and on a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.
Keywords: artificial neural network, data mining, electroencephalogram, epilepsy, feature extraction, seizure detection, signal processing
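A compact sketch of the kind of pipeline described, sliding-window feature extraction followed by a multilayer perceptron classifier, is shown below; the window length, feature set and network size are illustrative assumptions rather than the Training Builder's actual settings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def window_features(signal: np.ndarray, fs: int, win_s: float = 2.0) -> np.ndarray:
    """Cut a 1-D EEG channel into non-overlapping windows and compute simple features."""
    win = int(win_s * fs)
    n = len(signal) // win
    windows = signal[: n * win].reshape(n, win)
    # Illustrative features: mean, standard deviation, line length, total spectral power.
    return np.column_stack([
        windows.mean(axis=1),
        windows.std(axis=1),
        np.abs(np.diff(windows, axis=1)).sum(axis=1),
        (np.abs(np.fft.rfft(windows, axis=1)) ** 2).sum(axis=1),
    ])

# Assume `eeg` is one channel sampled at 256 Hz and `labels` marks each window (1 = seizure).
fs = 256
eeg = np.random.randn(fs * 600)               # placeholder signal
X = window_features(eeg, fs)
labels = np.random.randint(0, 2, len(X))      # placeholder annotations
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```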
Procedia PDF Downloads 188
18799 Artificial Intelligence Approach to Water Treatment Processes: Case Study of Daspoort Treatment Plant, South Africa
Authors: Olumuyiwa Ojo, Masengo Ilunga
Abstract:
An artificial neural network (ANN) breaks the bounds of conventional programming, which is essentially a function of garbage in, garbage out, through its ability to mimic the human brain. Its ability to adopt, adapt, adjust, evaluate, learn and recognize the relationships, behavior, and patterns of a series of data sets administered to it is tailored after the human reasoning and learning mechanism. Thus, the study aimed at modeling the wastewater treatment process in order to accurately diagnose water control problems for effective treatment. For this study, a staged ANN model development and evaluation methodology was employed. The source data analysis stage involved a statistical analysis of the data used in modeling; in the model development stage, candidate ANN architectures were developed and then evaluated using a historical data set. The model was developed using historical data obtained from the Daspoort Wastewater Treatment Plant, South Africa. The resultant designed dimensions and model for the wastewater treatment plant provided good results. Parameters considered were temperature, pH value, colour, turbidity, amount of solids and acidity. Others are total hardness, Ca hardness, Mg hardness, and chloride. This enables the ANN to handle and represent more complex problems that conventional programming is incapable of performing.
Keywords: ANN, artificial neural network, wastewater treatment, model, development
Procedia PDF Downloads 149
18798 Optimal ECG Sampling Frequency for Multiscale Entropy-Based HRV
Authors: Manjit Singh
Abstract:
Multiscale entropy (MSE) is an extensively used index that provides a general understanding of the multiple complexities of the physiologic mechanisms of heart rate variability (HRV) operating over a wide range of time scales. Accurate selection of the electrocardiogram (ECG) sampling frequency is an essential concern for clinically significant HRV quantification; a high ECG sampling rate increases memory requirements and processing time, whereas a low sampling rate degrades signal quality and results in clinically misinterpreted HRV. In this work, the impact of ECG sampling frequency on MSE-based HRV has been quantified. MSE measures are found to be sensitive to ECG sampling frequency, and the effect of sampling frequency is a function of time scale.
Keywords: ECG (electrocardiogram), heart rate variability (HRV), multiscale entropy, sampling frequency
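For readers unfamiliar with the index, the sketch below illustrates the standard MSE recipe: coarse-grain the RR-interval series at each scale and compute sample entropy. The tolerance r and embedding dimension m are the commonly used defaults, not values prescribed by this study.

```python
import numpy as np

def sample_entropy(x: np.ndarray, m: int = 2, r: float = 0.15) -> float:
    """Sample entropy of a 1-D series with tolerance r * std(x)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(dim: int) -> int:
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= tol)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(rr: np.ndarray, max_scale: int = 20) -> list[float]:
    """Coarse-grain the RR series at scales 1..max_scale and compute sample entropy."""
    mse = []
    for scale in range(1, max_scale + 1):
        n = len(rr) // scale
        coarse = rr[: n * scale].reshape(n, scale).mean(axis=1)
        mse.append(sample_entropy(coarse))
    return mse

# `rr` would be the RR-interval series extracted from ECG at a given sampling frequency.
```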
Procedia PDF Downloads 271
18797 Using Crowd-Sourced Data to Assess Safety in Developing Countries: The Case Study of Eastern Cairo, Egypt
Authors: Mahmoud Ahmed Farrag, Ali Zain Elabdeen Heikal, Mohamed Shawky Ahmed, Ahmed Osama Amer
Abstract:
Crowd-sourced data refers to data collected and shared by a large number of individuals or organizations, often through digital technologies such as mobile devices and social media. The shortage of crash data collection in developing countries makes it difficult to fully understand and address road safety issues in these regions. In developing countries, crowd-sourced data can be a valuable tool for improving road safety, particularly in urban areas where the majority of road crashes occur. This study is, to our best knowledge, the first to develop safety performance functions using crowd-sourced data by adopting a negative binomial structure model and a Full Bayes model to investigate traffic safety for urban road networks and provide insights into the impact of roadway characteristics. Furthermore, as part of the safety management process, network screening was carried out by applying two different methods to rank the most hazardous road segments: the PCR method (adopted in the Highway Capacity Manual, HCM) as well as a graphical method using GIS tools, for comparison and validation. Lastly, recommendations were suggested for policymakers to ensure safer roads.
Keywords: crowdsourced data, road crashes, safety performance functions, Full Bayes models, network screening
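A safety performance function of the kind mentioned above is typically a negative binomial regression of crash counts on segment characteristics; the sketch below uses statsmodels with hypothetical predictors (AADT, segment length) rather than the study's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical segment-level data: crash counts, traffic volume (AADT), length (km).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "crashes": rng.poisson(3, 200),
    "aadt": rng.uniform(5_000, 60_000, 200),
    "length_km": rng.uniform(0.2, 3.0, 200),
})

# Classic SPF form: crashes ~ exp(b0) * AADT^b1 * L^b2, i.e. log-linear in log(AADT), log(L).
X = sm.add_constant(np.log(df[["aadt", "length_km"]]))
model = sm.GLM(df["crashes"], X, family=sm.families.NegativeBinomial(alpha=1.0))
result = model.fit()
print(result.summary())

# Expected crash frequency per segment can then feed a network-screening ranking.
df["predicted"] = result.predict(X)
```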
Procedia PDF Downloads 52
18796 Assessment of Taiwan Railway Occurrences Investigations Using Causal Factor Analysis System and Bayesian Network Modeling Method
Authors: Lee Yan Nian
Abstract:
A safety investigation differs from an administrative investigation in that the former is conducted by an independent agency, and its purpose is to prevent future accidents, not to apportion blame or determine liability. Before October 2018, Taiwan railway occurrences were investigated by the local supervisory authority. A characteristic of this kind of investigation is that enforcement actions, such as administrative penalties, are usually imposed on the persons or units involved in the occurrence. On October 21, 2018, following a Taiwan Railway accident that caused 18 fatalities and injured another 267, it was quickly decided to establish an agency to independently investigate this catastrophic railway accident. The Taiwan Transportation Safety Board (TTSB) was then established on August 1, 2019 to take charge of investigating major aviation, marine, railway and highway occurrences. The objective of this study is to assess the effectiveness of safety investigations conducted by the TTSB. In this study, the major railway occurrence investigation reports published by the TTSB are used for modeling and analysis. According to the classification of railway occurrences investigated by the TTSB, accident types of Taiwan railway occurrences can be categorized into derailment, fire, signal passed at danger, and others. A Causal Factor Analysis System (CFAS) developed by the TTSB is used to identify the influencing causal factors and their causal relationships in the investigation reports. All terminologies used in the CFAS are equivalent to the Human Factors Analysis and Classification System (HFACS) terminologies, except for “Technical Events”, which was added to classify causal factors resulting from mechanical failure. Accordingly, the Bayesian network structure of each occurrence category is established based on the causal factors identified in the CFAS. In the Bayesian networks, the prior probabilities of the identified causal factors are obtained from the number of times they appear in the investigation reports, and the conditional probability table of each parent node is determined from domain experts' experience and judgement. The resulting networks are quantitatively assessed under different scenarios to evaluate their forward prediction and backward diagnostic capabilities. Finally, the established Bayesian network for derailment is assessed using investigation reports of the same accident investigated by the TTSB and by the local supervisory authority, respectively. Based on the assessment results, the findings of the administrative investigation are more closely tied to errors of front-line personnel than to organizational factors. A safety investigation can identify not only the unsafe acts of individuals but also the in-depth causal factors of organizational influences. The results show that the proposed methodology can identify differences between safety investigation and administrative investigation, so that effective intervention strategies in the associated areas can be better addressed for safety improvement and future accident prevention.
Keywords: administrative investigation, Bayesian network, causal factor analysis system, safety investigation
Procedia PDF Downloads 123
18795 Structural Strength Potentials of Nigerian Groundnut Husk Ash as Partial Cement Replacement in Mortar
Authors: F. A. Olutoge, O.R. Olulope, M. O. Odelola
Abstract:
This study investigates the strength potentials of groundnut husk ash (GHA) as a partial cement replacement in mortar and also develops a predictive model using an Artificial Neural Network. Groundnut husks sourced from Ogbomoso, Nigeria, were sun dried, calcined to ash in a furnace at a controlled temperature of 600 °C for a period of 6 hours, and sieved through a 75 micron sieve. The ash was subjected to chemical analysis and a setting time test. Fine aggregate (sand) for the mortar was sourced from Ado Ekiti, Nigeria. The cement:GHA constituents were blended in ratios of 100:0, 95:5, 90:10, 85:15 and 80:20%. The sum of the SiO₂, Al₂O₃, and Fe₂O₃ content in GHA is 26.98%. The compressive strengths of mortars PC, GHA5, GHA10, GHA15, and GHA20 ranged from 6.3-10.2 N/mm² at 7 days, 7.5-12.3 N/mm² at 14 days, 9.31-13.7 N/mm² at 28 days, 10.4-16.7 N/mm² at 56 days and 13.35-22.3 N/mm² at 90 days, respectively. PC, GHA5 and GHA10 had competitive values up to 28 days, but GHA10 gave the highest values at 56 and 90 days, while GHA20 had the lowest values at all ages due to the dilution effect. Flexural strength values at 28 days ranged from 1.08 to 1.87 N/mm² and increased to a range of 1.53-4.10 N/mm² at 90 days. The ANN model gave good predictions of the compressive strength of the mortars. This study has shown that groundnut husk ash as a partial cement replacement improves the strength properties of mortar.
Keywords: compressive strength, groundnut husk ash, mortar, pozzolanic index
Procedia PDF Downloads 155
18794 A 15 Minute-Based Approach for Berth Allocation and Quay Crane Assignment
Authors: Hoi-Lam Ma, Sai-Ho Chung
Abstract:
In traditional integrated berth allocation and quay crane assignment models, the time dimension is usually assumed to be hourly based. However, nowadays, transshipment has become the main business of many container terminals, especially in Southeast Asia (e.g. Hong Kong and Singapore). In these terminals, vessel arrivals are usually very frequent, with small handling volumes and very short staying times. Therefore, the traditional hourly-based modeling approach may cause significant berth and quay crane idling and consequently cannot meet their practical needs. In this connection, a 15-minute-based modeling approach is requested by industrial practitioners. Accordingly, a Three-level Genetic Algorithm (3LGA) with Quay Crane (QC) shifting heuristics is designed to fill the research gap. The objective function here is to minimize the total service time. Preliminary numerical results show that the proposed 15-minute-based approach can reduce berth and QC idling significantly.
Keywords: transshipment, integrated berth allocation, variable-in-time quay crane assignment, quay crane assignment
Procedia PDF Downloads 169
18793 Implementation of the Canadian Emergency Department Triage and Acuity Scale (CTAS) in an Urgent Care Center in Saudi Arabia
Authors: Abdullah Arafat, Ali Al-Farhan, Amir Omair
Abstract:
Objectives: To review and assess the effectiveness of the implemented modified five-level triage and acuity scale system in the Al-Yarmook Urgent Care Center (UCC), King Abdulaziz Residential City, Riyadh, Saudi Arabia. Method: The study design was an observational cross-sectional design. A data collection sheet was designed and distributed to triage nurses; data collection was done during the triage process and was directly observed by the co-investigator. The triage system was reviewed by measuring three time intervals as quality indicators: time before triage (TBT), time before being seen by a physician (TBP) and total length of stay (TLS), taking into consideration the timing of presentation and the triage level. Results: During the study period, a total of 187 patients were included in our study; 118 visits occurred on weekdays and 68 visits on weekends. Overall, 173 patients (92.5%) were seen by the physician in a timely manner according to the triage guidelines, while 14 patients (7.5%) were not seen at the appropriate time. Overall, the mean time before being seen by the triage nurse (TBT) was 5.36 minutes, the mean time before being seen by a physician (TBP) was 22.6 minutes and the mean length of stay (TLS) was 59 minutes. The data did not show a significant increase in TBT, TBP, the number of patients not seen at the proper time, the referral rate or the admission rate during weekends. Conclusion: The CTAS is adaptable to countries beyond Canada and worked properly. The CTAS triage system applied in the Al-Yarmook UCC is considered effective and well applied. Overall, urgent cases were seen by a physician in a timely manner according to the triage system, and there was no delay in the management of urgent cases.
Keywords: CTAS, emergency, Saudi Arabia, triage, urgent care
Procedia PDF Downloads 321
18792 AI Software Algorithms for Drivers Monitoring within Vehicles Traffic - SiaMOTO
Authors: Ioan Corneliu Salisteanu, Valentin Dogaru Ulieru, Mihaita Nicolae Ardeleanu, Alin Pohoata, Bogdan Salisteanu, Stefan Broscareanu
Abstract:
Creating a personalized statistic for an individual within the population using IT systems, based on the searches and intercepted spheres of interest they manifest, is just one 'atom' of the artificial intelligence analysis network. However, having the ability to generate statistics based on individual data intercepted from large demographic areas leads to reasoning like that issued by a human mind with global strategic ambitions. The DiaMOTO device is a technical sensory system that allows the interception of car events caused by a driver, positioning them in time and space. The device's connection to the vehicle allows the creation of a data source whose analysis can build psychological and behavioural profiles of the drivers involved. The SiaMOTO system collects data from many vehicles equipped with DiaMOTO, driven by many different drivers, each with a unique fingerprint in their approach to driving. In this paper, we aim to explain the software infrastructure of the SiaMOTO system, a system designed to monitor and improve driver behaviour, as well as the criteria and algorithms underlying the intelligent analysis process.
Keywords: artificial intelligence, data processing, driver behaviour, driver monitoring, SiaMOTO
Procedia PDF Downloads 91
18791 The Experimental Study on Reducing and Carbonizing Titanium-Containing Slag by Iron-Containing Coke
Authors: Yadong Liu
Abstract:
The reduction and carbonization of titanium-bearing slag, obtained from the smelting reduction of synthetic sea sand ore, by iron-containing coke with particle sizes of <0.3 mm, 0.3-0.6 mm and 0.6-0.9 mm was studied under holding times of up to 6 h at 1500 ℃. The effects of the iron-containing coke particle size and the heat preservation time on the formation of TiC and the size of the TiC crystals were studied by XRD, SEM and EDS. The results show that a particle size that is too small or too large is not favourable for the formation, concentration and growth of TiC crystals; the suitable particle size is 0.3-0.6 mm. A heat preservation time of 2 h basically ensures that all of the TiO₂ component in the slag is reduced, carbonized and converted to TiC. The size of the TiC crystals increases as the heat preservation time is prolonged, and the thickness of the TiC layer can reach 20 μm when the heat preservation time is 6 h.
Keywords: coke containing iron, formation and concentration and growth of TiC, reduction and carbonization, titanium-bearing slag
Procedia PDF Downloads 149
18790 An Electrocardiography Deep Learning Model to Detect Atrial Fibrillation on Clinical Application
Authors: Jui-Chien Hsieh
Abstract:
Background: 12-lead electrocardiography (ECG) is one of the most frequently used tools to detect atrial fibrillation (AF), which may degenerate into life-threatening stroke, in clinical practice. Based on this study, AF detection by the clinically used 12-lead ECG device has only a 0.73~0.77 positive predictive value (PPV). Objective: There is great demand for a new algorithm to improve the precision of AF detection using 12-lead ECG. Given the progress in artificial intelligence (AI), we develop an ECG deep model that has the ability to recognize AF patterns and reduce false-positive errors. Methods: In this study, (1) 570 12-lead ECG reports whose computer interpretation by the ECG device was AF were collected as the training dataset. The ECG reports were interpreted by 2 senior cardiologists, who confirmed that the precision of AF detection by the ECG device is 0.73; (2) 88 12-lead ECG reports whose computer interpretation generated by the ECG device was AF were used as the test dataset. Cardiologists confirmed that 68 of the 88 reports were AF and the others were not; the precision of AF detection by the ECG device is about 0.77; (3) a parallel 4-layer one-dimensional convolutional neural network (CNN) was developed to identify AF based on limb-lead and chest-lead ECGs. Results: The results indicated that this model performs better on AF detection than the traditional computer interpretation of the ECG device in the 88 test samples, with 0.94 PPV, 0.98 sensitivity and 0.80 specificity. Conclusions: Compared to the clinical ECG device, this AI ECG model raises the precision of AF detection from 0.77 to 0.94 and can have an impact on clinical applications.
Keywords: 12-lead ECG, atrial fibrillation, deep learning, convolutional neural network
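One plausible reading of the "parallel 4-layer 1-D CNN" is two convolutional branches, one for the limb leads and one for the chest leads, merged before classification; the PyTorch sketch below follows that reading, with layer sizes chosen as illustrative assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class ParallelECGCNN(nn.Module):
    """Two parallel 4-layer 1-D CNN branches (limb leads, chest leads) merged for AF vs non-AF."""

    @staticmethod
    def _branch(in_channels: int) -> nn.Sequential:
        layers, ch = [], in_channels
        for out_ch in (16, 32, 64, 128):          # four conv layers per branch
            layers += [nn.Conv1d(ch, out_ch, kernel_size=7, padding=3),
                       nn.BatchNorm1d(out_ch), nn.ReLU(), nn.MaxPool1d(4)]
            ch = out_ch
        layers.append(nn.AdaptiveAvgPool1d(1))
        return nn.Sequential(*layers)

    def __init__(self):
        super().__init__()
        self.limb = self._branch(in_channels=6)    # leads I, II, III, aVR, aVL, aVF
        self.chest = self._branch(in_channels=6)   # leads V1-V6
        self.head = nn.Linear(128 * 2, 2)          # AF vs non-AF

    def forward(self, limb_x: torch.Tensor, chest_x: torch.Tensor) -> torch.Tensor:
        a = self.limb(limb_x).flatten(1)
        b = self.chest(chest_x).flatten(1)
        return self.head(torch.cat([a, b], dim=1))

# Example: 10-second ECG sampled at 500 Hz -> tensors of shape (batch, 6, 5000).
model = ParallelECGCNN()
logits = model(torch.randn(4, 6, 5000), torch.randn(4, 6, 5000))
print(logits.shape)  # torch.Size([4, 2])
```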
Procedia PDF Downloads 114
18789 Tracing the Evolution of English and Urdu Languages: A Linguistic and Cultural Analysis
Authors: Aamna Zafar
Abstract:
Through linguistic and cultural analysis, this study seeks to trace the development of the English and Urdu languages. Along with examining how the vocabulary and syntax of English and Urdu have evolved over time and the linguistic trends that may be seen in these changes, this study also looks at the historical and cultural influences that have shaped the languages over time. The study further examines how English and Urdu have changed, both in terms of language use and communication within each other's cultures and globally. We research how these changes affect social relations and cultural identity, as well as how they might affect the future of these languages.
Keywords: linguistic and cultural analysis, historical factors, cultural factors, vocabulary, syntax, significance
Procedia PDF Downloads 75
18788 Camera Model Identification for Mi Pad 4, Oppo A37f, Samsung M20, and Oppo f9
Authors: Ulrich Wake, Eniman Syamsuddin
Abstract:
The model for camera model identification is trained using the pretrained models ResNet34 and ResNet50. The dataset consists of 500 photos of each phone and is divided into 1280 photos for training, 320 photos for validation and 400 photos for testing. The model is trained using the One Cycle Policy method and tested using Test-Time Augmentation. Furthermore, the model is trained for 50 epochs using regularization such as dropout and early stopping. The result is 90% accuracy on the validation set and above 85% with Test-Time Augmentation using ResNet50. Every model is also trained by slightly updating the pretrained model's weights.
Keywords: One Cycle Policy, ResNet34, ResNet50, Test-Time Augmentation
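As a rough sketch of the training recipe described (pretrained backbone, One Cycle Policy, light fine-tuning of pretrained weights), the following PyTorch code illustrates the idea; the dataset path, batch size and learning rates are placeholders, not the authors' settings.

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import OneCycleLR
from torchvision import datasets, models, transforms

# Four classes: Mi Pad 4, Oppo A37f, Samsung M20, Oppo F9 (folder-per-class layout assumed).
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("data/train", transform=tfm)   # hypothetical path
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Sequential(nn.Dropout(0.5), nn.Linear(model.fc.in_features, 4))

epochs = 50
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = OneCycleLR(optimizer, max_lr=1e-3,
                       steps_per_epoch=len(train_dl), epochs=epochs)
criterion = nn.CrossEntropyLoss()

for epoch in range(epochs):
    for images, labels in train_dl:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        scheduler.step()   # One Cycle Policy: LR warms up then anneals every batch
```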
Procedia PDF Downloads 208
18787 Resilience of Infrastructure Networks: Maintenance of Bridges in Mountainous Environments
Authors: Lorenza Abbracciavento, Valerio De Biagi
Abstract:
Infrastructures are key elements in ensuring the operational functionality of the transport system. The collapse of a single bridge or, equivalently, a tunnel can lead an entire motorway to be considered completely inaccessible. As a consequence, the paralysis of the communications network causes several important drawbacks for the community. Recent events have demonstrated that ensuring the functional continuity of strategic infrastructures during and after a catastrophic event makes a significant difference in terms of lives and economic losses. Moreover, it has been observed that RC structures located in mountain environments show a worse state of conservation compared to structures of the same typology and age located in temperate climates. Because of its morphology, in fact, the mountain environment is particularly exposed to severe collapse and deterioration phenomena, generally natural hazards, e.g. rock falls, and meteorological hazards, e.g. freeze-thaw cycles or heavy snow. For these reasons, deep investigation of the characteristics of these processes becomes of fundamental importance in providing smart and sustainable solutions and making the infrastructure system more resilient. In this paper, the design of a monitoring system for mountainous environments is presented and analyzed in its parts. The method not only takes into account the peculiar climatic conditions but is also integrated with, and interacts with, the surrounding environment.
Keywords: structural health monitoring, resilience of bridges, mountain infrastructures, infrastructural network, maintenance
Procedia PDF Downloads 77
18786 Translation Quality Assessment in Fansubbed English-Chinese Swearwords: A Corpus-Based Study of the Big Bang Theory
Authors: Qihang Jiang
Abstract:
Fansubbing, a blend of fan and subtitling, is one of the main branches of Audiovisual Translation (AVT) and has kindled growing interest among researchers in the AVT field in recent decades. In particular, the quality of so-called non-professional translation seems questionable due to the non-transparent qualifications of subtitlers in a huge community network. This paper attempts to figure out how YYeTs, aka 'ZiMuZu', the largest fansubbing group in China, translates swearwords from English to Chinese for its fans of the prevalent American sitcom The Big Bang Theory, taking cultural, social and political elements into account in the context of China. By building a bilingual corpus containing both the source and target texts, this paper found that most of the original swearwords were translated in a toned-down manner, probably due to Chinese audiences' cultural and social network features as well as the strict censorship of the Chinese government. Additionally, House's (2015) newly revised model of Translation Quality Assessment (TQA) was applied and examined. Results revealed that most of the subtitled swearwords achieved their pragmatic functions and exerted a communicative effect on audiences. In conclusion, this paper enriches the empirical research concerning House's new TQA model, gives a full picture of the subtitling of swearwords in the AVT field and provides a practical guide for practitioners in their subtitling careers.
Keywords: corpus-based approach, fansubbing, pragmatic functions, swearwords, translation quality assessment
Procedia PDF Downloads 143
18785 Travel Time Estimation of Public Transport Networks Based on Commercial Incidence Areas in Quito Historic Center
Authors: M. Fernanda Salgado, Alfonso Tierra, David S. Sandoval, Wilbert G. Aguilar
Abstract:
Public transportation buses usually vary their speed depending on the location and the number of passengers, so they require efficient travel planning, a plan that will help them choose the fastest route. Initially, an estimation tool is necessary to determine the travel time of each route, clearly establishing the possibilities. In this work, we give a practical solution that makes use of a concept we define as areas of commercial incidence. These areas are based on the hypothesis that in commercial places there is a greater flow of people and therefore the buses remain longer at the stops. The areas contain one or more route segments, each with an incidence factor that allows the travel times to be estimated. In addition, initial results are presented that verify the hypotheses and adequately approximate the travel times. In future work, we will extend this approach into an efficient travel planning system.
Keywords: commercial incidence, planning, public transport, speed travel, travel time
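The incidence-factor idea can be expressed very simply: a segment's free-flow travel time is scaled by the commercial-incidence factor of the area it crosses. The sketch below is an illustrative formulation with made-up segments and factors, not the study's calibrated values.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    length_km: float         # segment length
    base_speed_kmh: float    # typical bus speed without commercial influence
    incidence_factor: float  # >= 1.0; higher inside areas of commercial incidence

def route_travel_time_min(segments: list[Segment]) -> float:
    """Estimate route travel time: free-flow time per segment scaled by its incidence factor."""
    hours = sum(s.length_km / s.base_speed_kmh * s.incidence_factor for s in segments)
    return hours * 60.0

# Hypothetical route through the historic center: the middle segment crosses a market area.
route = [
    Segment(0.8, 20.0, 1.0),
    Segment(0.5, 15.0, 1.6),   # commercial incidence area: longer dwell at stops
    Segment(1.2, 25.0, 1.1),
]
print(f"Estimated travel time: {route_travel_time_min(route):.1f} min")
```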
Procedia PDF Downloads 252
18784 Donoho-Stark’s and Hardy’s Uncertainty Principles for the Short-Time Quaternion Offset Linear Canonical Transform
Authors: Mohammad Younus Bhat
Abstract:
The quaternion offset linear canonical transform (QOLCT), which is a time-shifted and frequency-modulated version of the quaternion linear canonical transform (QLCT), provides a more general framework for most existing signal processing tools. For the generalized QOLCT, the classical Heisenberg and Lieb uncertainty principles have been studied recently. In this paper, we first define the short-time quaternion offset linear canonical transform (ST-QOLCT) and derive its relationship with the quaternion Fourier transform (QFT). The crux of the paper lies in the generalization of several well-known uncertainty principles for the ST-QOLCT, including the Donoho-Stark uncertainty principle, Hardy's uncertainty principle, Beurling's uncertainty principle, and the logarithmic uncertainty principle.
Keywords: quaternion Fourier transform, quaternion offset linear canonical transform, short-time quaternion offset linear canonical transform, uncertainty principle
Procedia PDF Downloads 211
18783 Global Navigation Satellite System and Precise Point Positioning as Remote Sensing Tools for Monitoring Tropospheric Water Vapor
Authors: Panupong Makvichian
Abstract:
The Global Navigation Satellite System (GNSS) is nowadays a common technology that improves navigation functions in our lives. Additionally, GNSS is now also being employed as an accurate atmospheric sensor. Meteorology is a practical application of GNSS that goes unnoticed in the background of people's lives. GNSS Precise Point Positioning (PPP) is a positioning method that requires data from a single dual-frequency receiver and precise information about satellite positions and satellite clocks; in addition, careful attention to mitigating various error sources is required. All of the above data are combined in a sophisticated mathematical algorithm. This research demonstrates how GNSS and the PPP method are capable of providing high-precision estimates, such as 3D positions or zenith tropospheric delays (ZTDs). ZTDs combined with pressure and temperature information allow us to estimate the water vapor in the atmosphere as precipitable water vapor (PWV). If the process is replicated for a network of GNSS sensors, we can create thematic maps that allow water content information to be extracted at any location within the network area. All of the above is possible thanks to advances in GNSS data processing. Therefore, we are able to use GNSS data for climatic trend analysis and to acquire further knowledge about atmospheric water content.
Keywords: GNSS, precise point positioning, Zenith tropospheric delays, precipitable water vapor
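The ZTD-to-PWV step mentioned above is commonly done with the Saastamoinen hydrostatic delay and a Bevis-style conversion factor; the sketch below uses those widely published constants as an illustration, not values specific to this study.

```python
import math

def zenith_hydrostatic_delay(pressure_hpa: float, lat_deg: float, height_m: float) -> float:
    """Saastamoinen zenith hydrostatic delay (metres) from surface pressure."""
    lat = math.radians(lat_deg)
    return 0.0022768 * pressure_hpa / (1 - 0.00266 * math.cos(2 * lat) - 0.00000028 * height_m)

def pwv_mm(ztd_m: float, pressure_hpa: float, lat_deg: float, height_m: float,
           mean_temp_k: float) -> float:
    """Convert ZTD to precipitable water vapor (mm) via the wet delay and a conversion factor."""
    zwd = ztd_m - zenith_hydrostatic_delay(pressure_hpa, lat_deg, height_m)
    # Dimensionless conversion factor Pi; refractivity constants in SI units (K/Pa, K^2/Pa),
    # of the order published in the GNSS-meteorology literature (assumed typical values).
    k2_prime = 0.221       # K / Pa
    k3 = 3.739e3           # K^2 / Pa
    rv = 461.5             # J / (kg K), specific gas constant of water vapor
    rho_w = 1000.0         # kg / m^3, density of liquid water
    pi_factor = 1e6 / (rho_w * rv * (k3 / mean_temp_k + k2_prime))  # ~0.15
    return zwd * pi_factor * 1000.0   # metres of water -> millimetres

# Example: ZTD of 2.45 m at sea level, 1013 hPa, latitude 10 deg, mean temperature 275 K.
print(f"PWV = {pwv_mm(2.45, 1013.0, 10.0, 0.0, 275.0):.1f} mm")
```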
Procedia PDF Downloads 198
18782 Education and Learning in Indonesia to Refer to the Democratic and Humanistic Learning System in Finland
Authors: Nur Sofi Hidayah, Ratih Tri Purwatiningsih
Abstract:
Learning is a process by which a person attempts to obtain new behavioral changes as a whole, as a result of his or her own experience in interaction with the environment. Learning engages the brain in thinking, and the brain's capacity differs from one student to another. To obtain optimal learning results, study time must therefore be scheduled at hours when the load on the brain is not too heavy. Referring to the learning system in Finland, which applies 45 minutes of learning followed by a 15-minute break, the brain is expected to work better: with rest, the brain becomes more focused and lessons can be absorbed well. It can be concluded that, learning in this way, students study with a brain that is always fresh and make the best possible use of their time, without becoming saturated in a lesson.
Keywords: learning, working hours brain, time efficient learning, working hours in the brain receive stimulus
Procedia PDF Downloads 398
18781 Crow Search Algorithm-Based Task Offloading Strategies for Fog Computing Architectures
Authors: Aniket Ganvir, Ritarani Sahu, Suchismita Chinara
Abstract:
The rapid digitization of various aspects of life is leading to the creation of smart IoT ecosystems, where interconnected devices generate significant amounts of valuable data. However, these IoT devices face constraints such as limited computational resources and bandwidth. Cloud computing emerges as a solution by offering ample resources for offloading tasks efficiently, but it introduces latency issues, especially for time-sensitive applications. Fog computing (FC) addresses these latency concerns by bringing computation and storage closer to the network edge, minimizing data travel distance and enhancing efficiency. Offloading tasks to fog nodes or the cloud can conserve energy and extend IoT device lifespan. The offloading process is intricate, with tasks categorized as full or partial, and its optimization presents an NP-hard problem. Traditional greedy search methods struggle to address the complexity of task offloading efficiently. To overcome this, the efficient crow search algorithm (ECSA) is proposed as a meta-heuristic optimization algorithm. ECSA aims to optimize computation offloading effectively, providing solutions to this challenging problem.
Keywords: IoT, fog computing, task offloading, efficient crow search algorithm
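For orientation, the sketch below implements the basic crow search update rule (flight length fl, awareness probability AP) on a toy offloading-cost function; the cost model and parameter values are illustrative assumptions, not the ECSA variant proposed in the paper.

```python
import numpy as np

def crow_search(cost, dim, n_crows=20, iters=200, fl=2.0, ap=0.1, bounds=(0.0, 1.0)):
    """Basic crow search: each crow follows another crow's memory unless that crow is 'aware'."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_crows, dim))
    mem = pos.copy()                                   # best position found by each crow
    mem_cost = np.array([cost(x) for x in mem])
    for _ in range(iters):
        followed = rng.integers(n_crows, size=n_crows)  # crow j that crow i follows
        aware = rng.random(n_crows) < ap
        r = rng.random((n_crows, 1))
        new = np.where(aware[:, None],
                       rng.uniform(lo, hi, (n_crows, dim)),    # followed crow noticed: random move
                       pos + r * fl * (mem[followed] - pos))   # move toward followed crow's memory
        new = np.clip(new, lo, hi)
        new_cost = np.array([cost(x) for x in new])
        improved = new_cost < mem_cost
        mem[improved], mem_cost[improved] = new[improved], new_cost[improved]
        pos = new
    best = mem_cost.argmin()
    return mem[best], mem_cost[best]

# Toy cost: fraction x_i of each of 5 tasks offloaded; trades off local energy vs. network latency.
def offload_cost(x):
    return np.sum(0.6 * (1 - x) ** 2 + 0.4 * x)        # hypothetical energy + latency terms

solution, value = crow_search(offload_cost, dim=5)
print(solution.round(2), round(float(value), 3))
```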
Procedia PDF Downloads 58
18780 Transnational Educators in Japan, Russia, and America: Historical Trends in Global Education in the 1990’s and Early 2000’s
Authors: Peter J. Glinos
Abstract:
The Alternative Education Resource Organization (AERO), one of the largest international hubs for alternative educators, led by Jerry Mintz, has had a major impact on the global alternative education movement. The organization's publications, like the AERO-Gramme Newsletter and its successor, the Education Revolution Magazine, allowed members across the globe to discuss issues, share support, and submit writings on policies and reforms. This work uses these publications from 1989 to 2011, stored on AERO's online digital archive, to investigate the network's entanglements with America, Canada, Russia, Ukraine, Israel, Palestine, Japan, India, and Guatemala. Inspired by Reinhart Koselleck, this historical analysis traces AERO's entanglements within the United States, Japan, and Russia, contextualizing each of these multiple temporalities within the history of each nation's education system, the developments within AERO, and the global geo-political climate at the time of AERO's expansion. To help remedy the lack of attention paid by global historians to the role state organizations play in supporting global networks, as noted in What is Global History? by Sebastian Conrad, this work focuses on the relationship between AERO and state actors.
Keywords: global history, history of education, neoliberalism, transnational history, alternative education
Procedia PDF Downloads 28
18779 Combat Capability Improvement Using Sleep Analysis
Authors: Gabriela Kloudova, Miloslav Stehlik, Peter Sos
Abstract:
The quality of sleep can affect combat performance, where vigilance, accuracy and reaction time are decisive factors. In the present study, airborne and special units are measured on duty using an actigraphy fingerprint scoring algorithm and QEEG (quantitative EEG). The actigraphic variables of interest are: mean nightly sleep duration, mean napping duration, mean 24-h sleep duration, mean sleep latency, mean sleep maintenance efficiency, mean sleep fragmentation index, mean sleep onset time, mean sleep offset time and mean midpoint time. In an attempt to determine the individual somnotype of each subject, data such as sleep pattern, chronotype (morning and evening lateness), biological need for sleep (daytime and anytime sleepability) and trototype (daytime and anytime wakeability) will be extracted. Subsequently, a series of recommendations will be included in the training plan based on daily routine, timing of day and night activities, duration of sleep and the number of sleeping blocks in a defined time. The aim of these modifications to the training plan is to reduce daytime sleepiness; improve vigilance, attention, accuracy and the speed of the conducted tasks; and optimize energy supplies. Regular improvement of the training is expected to have long-term neurobiological consequences, including changes in neuronal activity measured by QEEG. Subsequently, that should enhance cognitive functioning in the subjects, assessed by digital cognitive test batteries, and improve their overall performance.
Keywords: sleep quality, combat performance, actigraph, somnotype
Procedia PDF Downloads 168
18778 Review of the Legislative and Policy Issues in Promoting Infrastructure Development to Promote Automation in Telecom Industry
Authors: Marvin Ricardo Awarab
Abstract:
There has never been a greater need for telecom services. The Internet of Things (IoT), 5G networking, and edge computing are the driving forces behind this increased demand, and the fierce demand offers communications service providers significant income opportunities. The telecom sector is centered on automation, and realizing a digital operation that functions as a real-time business will be crucial for the industry as a whole. Automation in telecom refers to the application of technology to create a more effective, quick, and scalable alternative to the conventional way of operating the telecom industry. With the promotion of 5G and the Internet of Things (IoT), telecom companies will continue to invest extensively in telecom automation technology. Automation offers benefits to the telecom industry, yet developing countries such as Namibia may not fully tap into such benefits because of the lack of funds and infrastructural resources to invest in automation. This paper fully investigates the benefits of automation in the telecom industry. Furthermore, the paper identifies the hurdles that developing countries such as Namibia face in their quest to fully introduce automation in the telecom industry. Additionally, the paper proposes possible avenues that Namibia, as a developing country, could adopt to invest in automation infrastructure with the aim of reaping the full benefits of automation in the telecom industry.
Keywords: automation, development, internet, internet of things, network, telecom, telecommunications policy, 5G
Procedia PDF Downloads 63
18777 Adversarial Attacks and Defenses on Deep Neural Networks
Authors: Jonathan Sohn
Abstract:
Deep neural networks (DNNs) have shown state-of-the-art performance for many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; these aim to alter the results of deep neural networks by slightly modifying their inputs. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying adversarial attacks and defenses on DNNs for image classification. Two types of adversarial attacks are studied: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories. An adversarial attack slightly alters the image to move it over the decision boundary, causing the DNN to misclassify the image. The FGSM attack obtains the gradient with respect to the image and updates the image once, based on the gradient, to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also another type of attack called the targeted attack, an adversarial attack designed to make the machine classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training; specifically, instead of training the neural network only with clean examples, we explicitly let the neural network learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively. If we utilize FGSM training as a defense method, the classification accuracy greatly improves from 39.50% to 92.31% for FGSM attacks and from 34.01% to 75.63% for PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use the stronger PGD training method. PGD training improves the accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that both FGSM and PGD training do not affect the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attacks and defences are overall significantly more effective than FGSM methods.
Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning
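A minimal PyTorch sketch of the two attacks described above follows; the ε and step values are placeholders, and the model is assumed to be any image classifier with inputs in [0, 1].

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """One-step FGSM: perturb x by eps in the direction of the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def pgd_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
               eps: float = 0.1, alpha: float = 0.01, steps: int = 20) -> torch.Tensor:
    """PGD: repeated small FGSM-like steps, projected back into the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)   # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv

# Adversarial (FGSM or PGD) training simply mixes such perturbed examples into each training batch.
```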
Procedia PDF Downloads 195
18776 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling
Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé
Abstract:
Large-size forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing process of the large blocks starts with ingot casting, followed by open die forging and a quench and temper heat treatment to achieve the desired mechanical properties, and numerical simulation is widely used nowadays to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging: the temperature inside the material before loading is not uniform, yet a constant temperature is commonly used in simulation because it is assumed that the temperature has homogenized after some holding time. Therefore, to be close to the experiment, the real temperature distribution through the specimen is needed before the mechanical loading. We present here a robust algorithm that allows the calculation of the temperature gradient within the specimen, thus representing a real temperature distribution within the specimen before deformation. Indeed, most numerical simulations consider a uniform temperature, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and on the type of deformation, such as upsetting or cogging. Indeed, upsetting and cogging are the stages where the greatest deformations are observed, and many microstructural phenomena, such as recrystallization, can be observed and require in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to increase the mechanical properties of the final product, so the identification of the conditions for the initiation of dynamic recrystallization remains relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, so developing a technique to predict the initiation of this recrystallization remains challenging. In this perspective, we propose here, in addition to the algorithm that provides the temperature distribution before the loading stage, an analytical model for determining the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines for comparison with a simulation where an isothermal temperature is imposed. The Artificial Neural Network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. From the simulation, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared to literature models.
Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation
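A common analytical form for the onset of dynamic recrystallization expresses the critical strain as a fraction of the peak strain, with the peak strain a power law of the Zener-Hollomon parameter; the sketch below illustrates that generic form with placeholder constants, which are not the values identified in this work.

```python
import math

R = 8.314  # J / (mol K), universal gas constant

def zener_hollomon(strain_rate: float, temp_k: float, q_act: float) -> float:
    """Zener-Hollomon parameter Z = strain_rate * exp(Q / (R * T))."""
    return strain_rate * math.exp(q_act / (R * temp_k))

def critical_strain(strain_rate: float, temp_k: float,
                    q_act: float = 300e3,   # apparent activation energy (J/mol), placeholder
                    a_coef: float = 5e-3,   # peak-strain coefficient, placeholder
                    m_exp: float = 0.15,    # peak-strain exponent, placeholder
                    ratio: float = 0.8) -> float:
    """Critical strain for DRX initiation: eps_c = ratio * eps_p, with eps_p = a * Z^m."""
    z = zener_hollomon(strain_rate, temp_k, q_act)
    eps_peak = a_coef * z ** m_exp
    return ratio * eps_peak

# Example: strain rate 1 s^-1 at 1100 degC (1373 K); the result depends entirely on the assumed constants.
print(f"eps_c = {critical_strain(1.0, 1373.0):.3f}")
```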
Procedia PDF Downloads 80