Search results for: MICS database

744 Material Analysis for Temple Painting Conservation in Taiwan

Authors: Chen-Fu Wang, Lin-Ya Kung

Abstract:

For traditional painting materials, artisans used to combine pigments with different binders to create colors. As time went by, the materials used for painting evolved from natural to chemical materials. The vast variety of ingredients used in chemical materials has complicated restoration work and makes conservation more difficult. Conservation work also becomes harder when the materials cannot be easily identified; therefore, it is essential to take a more scientific approach to assist conservation work. Painting materials are high-molecular-weight polymers, and their analysis is complicated; contamination such as smoke and dirt can also interfere with the analysis of the material. Current methods for the compositional analysis of painting materials include Fourier transform infrared spectroscopy (FT-IR), mass spectrometry, Raman spectroscopy, and X-ray diffraction (XRD), each of which has its own limitations. In this study, FT-IR was used to analyze the components of the paint coating. We took the most commonly seen materials as samples and artificially aged them. The aging information was then used to build a database for examining temple painting materials. By observing the FT-IR changes over time, we can tell that all of the painting materials deteriorate under UV light, differing only in the speed of degradation. In the deterioration experiment, acrylic resin resisted aging better than the other materials. After collecting the FT-IR aging information for the painting materials, we performed tests on paintings in the temples. It was found that most artisans used tung oil as a painting material, while some other paintings used chemical materials. This method now works successfully for identifying painting materials. However, the method is destructive and costly. In the future, we will work on how to identify painting materials more efficiently.
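
As a rough illustration of how an aged-reference FT-IR library can be queried, the following minimal Python sketch matches an unknown spectrum against reference spectra by cosine similarity; the material names and synthetic spectra are placeholders, not data from this study.

```python
# Minimal sketch (not from the paper): matching an unknown FT-IR spectrum against
# a small library of aged reference spectra by cosine similarity. The reference
# names and synthetic spectra below are illustrative placeholders only.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two absorbance vectors on the same wavenumber grid."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_material(unknown: np.ndarray, library: dict) -> tuple:
    """Return the reference material whose aged spectrum best matches the unknown sample."""
    scores = {name: cosine_similarity(unknown, ref) for name, ref in library.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grid = 600                                   # number of sampled wavenumbers
    library = {                                  # hypothetical aged reference spectra
        "tung oil (aged)": rng.random(grid),
        "acrylic resin (aged)": rng.random(grid),
        "animal glue (aged)": rng.random(grid),
    }
    sample = library["tung oil (aged)"] + 0.05 * rng.random(grid)   # noisy field sample
    print(identify_material(sample, library))
```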

Keywords: temple painting, painting material, conservation, FT-IR

Procedia PDF Downloads 168
743 A Longitudinal Study of Psychological Capital, Parent-Child Relationships, and Subjective Well-Beings in Economically Disadvantaged Adolescents

Authors: Chang Li-Yu

Abstract:

Purposes: The present research focuses on exploring the latent growth model of psychological capital in disadvantaged adolescents and assessing its relationship with subjective well-being. Methods: A longitudinal study design was utilized, and the data came from the Taiwan Database of Children and Youth in Poverty (TDCYP), using the student questionnaires from 2009, 2011, and 2013. Data analysis was conducted using both univariate and multivariate latent growth curve models. Results: This study finds that: (1) the initial state and growth rate of individual factors such as parent-child relationships, psychological capital, and subjective well-being in economically disadvantaged adolescents have a predictive impact; (2) there are positive interactive effects in the development among factors such as parent-child relationships, psychological capital, and subjective well-being in economically disadvantaged adolescents; and (3) the initial state and growth rate of parent-child relationships and psychological capital in economically disadvantaged adolescents positively affect the initial state and growth rate of their subjective well-being. Recommendations: Based on these findings, this study concretely discusses the significance of psychological capital and family cohesion for the mental health of economically disadvantaged youth and offers suggestions for counseling, psychological therapy, and future research.
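
For readers unfamiliar with the method, a generic univariate linear latent growth curve model over the three waves can be written as below; this is the standard specification, not necessarily the exact parameterization used by the author.

\[
y_{it} = \eta_{0i} + \lambda_t\,\eta_{1i} + \varepsilon_{it}, \qquad
\eta_{0i} = \mu_0 + \zeta_{0i}, \qquad
\eta_{1i} = \mu_1 + \zeta_{1i},
\]

where \(y_{it}\) is adolescent \(i\)'s score (e.g., psychological capital) at wave \(t\), \(\lambda_t = 0, 1, 2\) for the 2009, 2011, and 2013 waves, and \(\eta_{0i}\) and \(\eta_{1i}\) are the individual intercept (initial state) and slope (growth rate). The multivariate model additionally relates the intercepts and slopes of the three constructs to one another, which is what findings (2) and (3) refer to.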

Keywords: economically disadvantaged adolescents, psychological capital, parent-child relationships, subjective well-beings

Procedia PDF Downloads 34
742 Simultaneous Targeting of MYD88 and Nur77 as an Effective Approach for the Treatment of Inflammatory Diseases

Authors: Uzma Saqib, Mirza S. Baig

Abstract:

Myeloid differentiation primary response protein 88 (MYD88) has long been considered a central player in the inflammatory pathway. Recent studies clearly suggest that it is an important therapeutic target in inflammation. On the other hand, a recent study on the interaction between the orphan nuclear receptor (Nur77) and p38α, leading to an increased lipopolysaccharide-induced hyperinflammatory response, suggests this binary complex as a therapeutic target. In this study, we have designed inhibitors that can inhibit both MYD88 and Nur77 at the same time. Since both MYD88 and Nur77 are integral parts of the pathways involving lipopolysaccharide-induced activation of NF-κB-mediated inflammation, we tried to target both proteins with the same library in order to retrieve compounds having dual inhibitory properties. To perform this, we developed a homodimeric model of MYD88 and, along with the crystal structure of Nur77, screened a virtual library of compounds from the traditional Chinese medicine database containing ~61,000 compounds. We analyzed the resulting hits for their dual-binding efficacy and used them to develop a common pharmacophore model that could serve as a prototype to screen compound libraries as well as to guide combinatorial library design in the search for ideal dual-target inhibitors. Thus, our study explores the identification of novel leads with dual inhibitory effects through binding to both MYD88 and Nur77.

Keywords: drug design, Nur77, MYD88, inflammation

Procedia PDF Downloads 287
741 Multi-Source Data Fusion for Urban Comprehensive Management

Authors: Bolin Hua

Abstract:

In city governance, various data are involved, including city component data, demographic data, housing data, and all kinds of business data. These data reflect different aspects of people, events, and activities. Data generated by various systems differ in form, and their sources differ because they may come from different sectors. In order to reflect one or several facets of an event or rule, data from multiple sources need to be fused together. Data from different sources, collected in different ways, raise several issues that need to be resolved. Problems of data fusion include data update and synchronization, data exchange and sharing, file parsing and entry, duplicate data and its comparison, and resource catalogue construction. Governments adopt statistical analysis, time series analysis, extrapolation, monitoring analysis, value mining, and scenario prediction in order to achieve pattern discovery, law verification, root cause analysis, and public opinion monitoring. The result of multi-source data fusion is a uniform central database, which includes people data, location data, object data, institution data, business data, and spatial data. Metadata must be available to be referred to and read when an application needs to access, manipulate, and display the data. Uniform metadata management ensures the effectiveness and consistency of data in the process of data exchange, data modeling, data cleansing, data loading, data storing, data analysis, data search, and data delivery.
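
As a minimal illustration of what such a uniform metadata record might look like when exchanging data between source systems and the central database, the following Python sketch is offered; all field names are assumptions, not a schema from the paper.

```python
# Sketch of a uniform metadata record used when exchanging records between source
# systems and the central fused database; every field name here is illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MetadataRecord:
    source_system: str          # e.g. "housing", "demographics", "city-components"
    entity_type: str            # "person", "location", "object", "institution", ...
    entity_id: str              # identifier inside the source system
    schema_version: str         # schema the source used when exporting
    updated_at: datetime        # last synchronization timestamp
    tags: list = field(default_factory=list)

record = MetadataRecord(
    source_system="housing",
    entity_type="location",
    entity_id="H-2041",
    schema_version="1.2",
    updated_at=datetime.now(timezone.utc),
    tags=["residential"],
)
print(record)
```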

Keywords: multi-source data fusion, urban comprehensive management, information fusion, government data

Procedia PDF Downloads 366
740 Mining the Proteome of Fusobacterium nucleatum for Potential Therapeutics Discovery

Authors: Abdul Musaweer Habib, Habibul Hasan Mazumder, Saiful Islam, Sohel Sikder, Omar Faruk Sikder

Abstract:

The plethora of genome sequence information of bacteria in recent times has ushered in many novel strategies for antibacterial drug discovery and has helped medical science take up the challenge of the increasing resistance of pathogenic bacteria to current antibiotics. In this study, we adopted a subtractive genomics approach to analyze the whole genome sequence of Fusobacterium nucleatum, a human oral pathogen associated with colorectal cancer. Our study revealed 1499 proteins of Fusobacterium nucleatum that have no homolog in the human genome. These proteins were further screened using the Database of Essential Genes (DEG), which resulted in the identification of 32 vitally important proteins for the bacterium. Subsequent analysis of the identified pivotal proteins using the KEGG Automated Annotation Server (KAAS) identified 3 key enzymes of F. nucleatum that may be good candidates as potential drug targets, since they are unique to the bacterium and absent in humans. In addition, we modeled the 3-D structures of these three proteins. Finally, the ligand-binding sites of the key proteins were determined, and functional inhibitors that best fitted these sites were screened to discover effective novel therapeutic compounds against Fusobacterium nucleatum.
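
As an illustration of the first subtractive step (discarding proteins that have human homologs), the following Python sketch drives an NCBI BLAST+ search; it assumes BLAST+ is installed and a pre-built human proteome database exists, and the file names and e-value cutoff are illustrative choices, not the study's exact settings.

```python
# Sketch of the first subtractive-genomics step: discard F. nucleatum proteins with a
# human homolog. Assumes NCBI BLAST+ is installed and a "human_proteome" BLAST database
# has been built; file names and the e-value cutoff are illustrative.
import subprocess

subprocess.run(
    [
        "blastp",
        "-query", "fusobacterium_proteome.faa",
        "-db", "human_proteome",
        "-evalue", "1e-3",                 # hits below this e-value count as homologs
        "-outfmt", "6 qseqid sseqid evalue",
        "-out", "hits_vs_human.tsv",
    ],
    check=True,
)

with open("hits_vs_human.tsv") as handle:
    with_human_homolog = {line.split("\t")[0] for line in handle}

all_proteins = set()
with open("fusobacterium_proteome.faa") as handle:
    for line in handle:
        if line.startswith(">"):
            all_proteins.add(line[1:].split()[0])

non_homologous = all_proteins - with_human_homolog   # candidates for the DEG screen
print(f"{len(non_homologous)} proteins have no human homolog")
```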

Keywords: colorectal cancer, drug target, Fusobacterium nucleatum, homology modeling, ligands

Procedia PDF Downloads 365
739 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour

Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani

Abstract:

In this paper, we present a new method for tracking flying targets in color video sequences based on contour and kernel information. The aim of this work is to overcome the problem of losing the target under changing illumination, large displacement, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region with a neural network, and finding the exact contour with a greedy snake algorithm. In the proposed method, we use both region and contour information to create the target candidate model, and this model is dynamically updated during tracking. To avoid the accumulation of errors when updating, the target region is given to a perceptron neural network that separates the target from the background. Its output is then used for the exact calculation of the size and center of the target and also serves as the initial contour for the greedy snake algorithm to find the exact target edge. The proposed algorithm has been tested on a database that contains many challenges, such as the high speed and agility of aircraft, background clutter, occlusions, camera movement, and so on. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.
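
To make the first step concrete, here is a minimal bootstrap particle filter for estimating a 2-D target position; the motion and measurement noise values and the Gaussian likelihood are illustrative assumptions, not the authors' settings.

```python
# Minimal bootstrap particle filter for a 2-D target position, sketching the first
# step of the method above; noise values and the likelihood model are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_particles = 500
particles = rng.uniform(0, 100, size=(n_particles, 2))   # initial guesses over the frame
weights = np.full(n_particles, 1.0 / n_particles)

def step(particles, weights, measurement, motion_std=3.0, meas_std=5.0):
    # 1) predict: diffuse particles with a random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # 2) update: weight by Gaussian likelihood of the observed target centre
    dist2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * dist2 / meas_std**2)
    weights /= weights.sum()
    # 3) resample: draw particles proportionally to their weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

for measured_centre in [(40, 42), (45, 44), (51, 47)]:     # e.g. centres from the NN segmentation
    particles, weights = step(particles, weights, np.asarray(measured_centre, float))
    print(particles.mean(axis=0))                           # estimated target location
```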

Keywords: video tracking, particle filter, greedy snake, neural network

Procedia PDF Downloads 324
738 The Nature and the Structure of Scientific and Innovative Collaboration Networks

Authors: Afshin Moazami, Andrea Schiffauerova

Abstract:

The objective of this work is to investigate the development and the role of collaboration networks in the creation of knowledge and innovations in the US and Canada, with a special focus on Quebec. In order to create the scientific networks, data on journal articles were extracted from SCOPUS, and the networks were built based on the co-authorship of the journal papers. For the innovation networks, the USPTO database was used, and the networks were built on patent co-inventorship. Various indicators characterizing the evolution of the network structure and the positions of the researchers and inventors in the networks were calculated. A comparison between the United States, Canada, and Quebec was then carried out. The preliminary results show that the nature of scientific collaboration networks differs from that of innovation networks. Scientists work in bigger teams and are mostly interconnected within one giant network component, whereas the innovation network is much more clustered and fragmented; the inventors work more repetitively with the same partners, often in smaller isolated groups. In both Canada and the US, an increasing tendency towards collaboration was observed, and it was found that networks are getting bigger and more centralized with time. Moreover, a declining share of knowledge transfers per scientist was detected, suggesting an increasing specialization of science. The US collaboration networks tend to be more centralized than the Canadian ones. Quebec shares many features with the Canadian network, but some differences were observed; for example, Quebec inventors rely more on knowledge transmission through intermediaries.
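
As a sketch of how such networks and indicators are typically built from co-authorship or co-inventorship records, the following Python example uses networkx on toy author lists standing in for the SCOPUS/USPTO data.

```python
# Sketch: build a co-authorship network from (paper -> author list) records and compute
# the kinds of structural indicators discussed above; the records are toy placeholders.
import itertools
import networkx as nx

papers = [
    ["Tremblay", "Nguyen", "Roy"],        # hypothetical author lists
    ["Nguyen", "Roy"],
    ["Smith", "Jones"],
]

G = nx.Graph()
for authors in papers:
    for a, b in itertools.combinations(authors, 2):   # every co-author pair becomes an edge
        G.add_edge(a, b)

giant = max(nx.connected_components(G), key=len)
print("share of nodes in giant component:", len(giant) / G.number_of_nodes())
print("average clustering coefficient:", nx.average_clustering(G))
print("degree centrality:", nx.degree_centrality(G))
```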

Keywords: Canada, collaboration, innovation network, scientific network, Quebec, United States

Procedia PDF Downloads 180
737 An Adaptive Distributed Incremental Association Rule Mining System

Authors: Adewale O. Ogunde, Olusegun Folorunso, Adesina S. Sodiya

Abstract:

Most existing Distributed Association Rule Mining (DARM) systems still face several challenges. One such challenge, which has received little attention from researchers, is the inability of existing systems to adapt to constantly changing databases and mining environments. In this work, an Adaptive Incremental Mining Algorithm (AIMA) is therefore proposed to address these problems. AIMA employs multiple mobile agents for the entire mining process. AIMA was designed to adapt to changes in the distributed databases by mining only the incremental database updates and using these to update the existing rules, in order to improve the overall response time of the DARM system. In AIMA, global association rules are integrated incrementally from one data site to another through Results Integration Coordinating Agents. The mining agents in AIMA were made adaptive by defining mining goals with reasoning and behavioral capabilities and protocols that enable them to either maintain or change their goals. AIMA employed the Java Agent Development Environment Extension for designing the internal agents’ architecture. Results from experiments conducted on real datasets showed that the adaptive system, AIMA, performed better than the non-adaptive systems, with lower communication costs and higher task completion rates.
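
The incremental idea, scanning only the new transactions and merging their counts into previously mined itemset supports, can be sketched in a few lines of Python; the thresholds and toy transactions below are illustrative only.

```python
# Sketch of incremental mining: only the new transactions are scanned and their counts
# are merged into the existing itemset supports, avoiding a full re-mine of the database.
from collections import Counter
from itertools import combinations

def count_itemsets(transactions, max_size=2):
    counts = Counter()
    for t in transactions:
        for size in range(1, max_size + 1):
            for itemset in combinations(sorted(set(t)), size):
                counts[itemset] += 1
    return counts

old_counts = count_itemsets([{"bread", "milk"}, {"bread", "butter"}])   # mined earlier
old_total = 2

new_transactions = [{"milk", "butter"}, {"bread", "milk"}]              # incremental update only
new_counts = count_itemsets(new_transactions)

merged = old_counts + new_counts
total = old_total + len(new_transactions)
min_support = 0.5
frequent = {k: v / total for k, v in merged.items() if v / total >= min_support}
print(frequent)
```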

Keywords: adaptivity, data mining, distributed association rule mining, incremental mining, mobile agents

Procedia PDF Downloads 375
736 Regular or Irregular: An Investigation of Medicine Consumption Pattern with Poisson Mixture Model

Authors: Lichung Jen, Yi Chun Liu, Kuan-Wei Lee

Abstract:

Abundant data have been accumulated in databases nowadays and are commonly used to support decision-making. In the healthcare industry, for instance, ordering pharmacy inventory is one of a hospital's key decisions. With a large drug inventory, current costs increase, and expiration dates might lead to future issues such as drug disposal and recycling. In contrast, underestimating demand for the pharmacy inventory, particularly standing drugs, affects medical treatment and possibly the hospital's reputation. The prescription behaviour of hospital physicians is one of the critical factors influencing this decision, particularly irregular prescription behaviour. If a drug's monthly usage is irregular and less than the regular usage, it may lead to subsequent stockpiling. On the contrary, if a drug has been prescribed more often than expected, it may result in insufficient inventory. We propose a hierarchical Bayesian mixture model with two components to identify physicians' regular/irregular prescription patterns with probabilities. Heterogeneity across hospitals is considered in our proposed hierarchical Bayes model. The results suggest that modeling the prescription patterns of physicians is beneficial for estimating the order quantity of medication and for the pharmacy inventory management of the hospital. Managerial implications and future research are discussed.
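
A generic form of such a two-component Poisson mixture (written here as a standard specification, not necessarily the authors' exact parameterization) is:

\[
P(y_{ij} = k) \;=\; \pi_j \,\frac{e^{-\lambda_{1j}}\lambda_{1j}^{\,k}}{k!} \;+\; (1-\pi_j)\,\frac{e^{-\lambda_{2j}}\lambda_{2j}^{\,k}}{k!},
\]

where \(y_{ij}\) is the monthly prescription count of physician \(i\) in hospital \(j\), \(\pi_j\) is the probability of the regular component, \(\lambda_{1j}\) and \(\lambda_{2j}\) are the regular and irregular prescription rates, and hospital-level priors on \((\pi_j, \lambda_{1j}, \lambda_{2j})\) capture the heterogeneity across hospitals.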

Keywords: hierarchical Bayesian model, Poisson mixture model, medicine prescription behavior, irregular behavior

Procedia PDF Downloads 113
735 Bitcoin, Blockchain and Smart Contract: Attacks and Mitigations

Authors: Mohamed Rasslan, Doaa Abdelrahman, Mahmoud M. Nasreldin, Ghada Farouk, Heba K. Aslan

Abstract:

Blockchain is a distributed database that provides transparency, while Bitcoin is a decentralized cryptocurrency (electronic cash) that provides anonymity and is powered by blockchain technology. Smart contracts are programs stored on a blockchain that are executed when predetermined conditions are fulfilled. Smart contracts automate the execution of an agreement so that all participants can be immediately certain of the outcome, without any intermediary's involvement or loss of time. Currently, the Bitcoin market is worth billions of dollars. Bitcoin can be transferred from one purchaser to another without the need for an intermediary bank. Network nodes verify Bitcoin transactions through cryptography, and the transactions are registered in a public ledger called the blockchain. Bitcoin can be exchanged for other coins, merchandise, and services. The rapid growth of the Bitcoin market value encourages adversaries to make use of its weaknesses and exploit vulnerabilities for profit. Moreover, it motivates scientists to define known vulnerabilities, offer countermeasures, and predict future threats. In this paper, we study blockchain technology and Bitcoin from the attacker's point of view. Furthermore, mitigations for the attacks are suggested, and contemporary security solutions are discussed. Finally, research directions toward strict security and privacy protocols are elaborated.

Keywords: cryptocurrencies, blockchain, Bitcoin, smart contracts, peer-to-peer network, security issues, privacy techniques

Procedia PDF Downloads 61
734 A Comparative Study of Global Power Grids and Global Fossil Energy Pipelines Using GIS Technology

Authors: Wenhao Wang, Xinzhi Xu, Limin Feng, Wei Cong

Abstract:

This paper comprehensively investigates the current development status of global power grids and fossil energy pipelines (oil and natural gas) and proposes a standard visual platform for global power and fossil energy based on Geographic Information System (GIS) technology. In this visual platform, a series of systematic visual models is proposed using global spatial data and systematic energy and power parameters. Under this visual platform, the current Global Power Grids Map and Global Fossil Energy Pipelines Map are plotted, covering more than 140 countries and regions across the world. Using multi-scale fusion data processing and modeling methods, a basic database for the world's fossil energy pipelines and power grids information system is established, which provides important data support for global fossil energy and electricity research. Finally, through a systematic and comparative study of global fossil energy pipelines and global power grids, the general status of global fossil energy and electricity development is reviewed, and the energy transition in key areas is evaluated and analyzed. Through the comparative analysis of fossil energy and clean energy, directions for relevant research on clean development and energy transition are pointed out.

Keywords: energy transition, geographic information system, fossil energy, power systems

Procedia PDF Downloads 129
733 Bridging the Gap between Obstetric and Colorectal Services after Obstetric Anal Sphincter Injuries

Authors: Shachi Joshi

Abstract:

Purpose: The primary aim of this study was to determine the prevalence of pelvic dysfunction symptoms following OASI. The secondary aim was to assess the scope of a dedicated perineal trauma clinic in identifying and investigating women who have experienced faecal incontinence after OASI, and to determine whether a transitional clinic arrangement with colorectal surgeons would be useful. Methods: The clinical database was used to identify and obtain information about 118 women who sustained an OASI (3rd/4th degree tear) between August 2016 and July 2017. A questionnaire was designed to assess symptoms of pelvic dysfunction; this was sent via post in November 2018. Results: The questionnaire was completed by 45 women (38%). Faecal incontinence was experienced by 42% (N=19), flatus incontinence by 47% (N=21), urinary incontinence by 76% (N=34), dyspareunia by 49% (N=22), and pelvic pain by 33% (N=15). Of the questionnaire respondents, only 62% (N=28) had attended a perineal trauma clinic appointment. 46% (N=13) of these women reported in the questionnaire having experienced difficulty controlling flatus or faeces; however, only 23% (N=3) of these reported ongoing symptoms at the time of clinic attendance and underwent an endoanal ultrasound scan. Conclusion: Pelvic dysfunction symptoms are highly prevalent following an OASI. Perineal trauma clinic attendance alone is not sufficient for the identification and follow-up of symptoms. Transitional care is needed between the obstetric and colorectal teams to recognize and treat women with ongoing faecal incontinence.

Keywords: incontinence, obstetric anal sphincter, injury, repair

Procedia PDF Downloads 93
732 Distribution of HLA-DQA1 and HLA-DQB1 Alleles in Thais: Genetics Database Insight for COVID-19 Severity

Authors: Jinu Phonamontham

Abstract:

Coronavirus disease 2019 (COVID-19) is caused by the SARS-CoV-2 virus. The pandemic caused over 10 million cases and 500,000 deaths worldwide through the end of June 2020. In a previous study, the HLA-DQA1*01:02 allele was associated with COVID-19 (p-value = 0.0121). Furthermore, there was a statistically significant association between HLA-DQB1*06:02 and COVID-19 in the Italian population after Bonferroni correction (p-value = 0.0016). Nevertheless, there are no data describing the distribution of HLA alleles as a valid marker for the prediction of COVID-19 in the Thai population. We aimed to investigate the prevalence of the HLA-DQA1*01:02 and HLA-DQB1*06:02 alleles, which are associated with severe COVID-19, in the Thai population. In this study, we recruited 200 healthy Thai individuals. Genomic DNA samples were isolated from EDTA blood using a Genomic DNA Mini Kit. HLA genotyping was conducted using the Lifecodes HLA SSO typing kits (Immucor, West Avenue, Stamford, USA). The most frequent HLA-DQA1 alleles in the Thai population were HLA-DQA1*01:01 (27.75%), HLA-DQA1*01:02 (24.50%), HLA-DQA1*03:03 (13.00%), HLA-DQA1*06:01 (10.25%), and HLA-DQA1*02:01 (6.75%). Furthermore, the most frequent HLA-DQB1 alleles were HLA-DQB1*05:02 (21.50%), HLA-DQB1*03:01 (15.75%), HLA-DQB1*05:01 (14.50%), HLA-DQB1*03:03 (11.00%), and HLA-DQB1*02:02 (8.25%). In particular, the HLA-DQA1*01:02 allele had the highest frequency (29.00%) in the northeastern group, but there was no significant difference compared with the other regions of Thailand (p-value = 0.4202). The HLA-DQB1*06:02 allele was similarly distributed in the Thai population, and there was no significant difference between Thais and the Chinese (3.8%), South Korean (6.4%), or Japanese (8.2%) populations (p-value > 0.05), whereas the South African population (15.7%) differed significantly from Thais (p-value = 0.0013). This study supports specific genotyping of the HLA-DQA1*01:02 and HLA-DQB1*06:02 alleles to screen for severe COVID-19 in the Thai and other populations.

Keywords: HLA-DQA1*01:02, HLA-DQB1*06:02, Asian, Thai population

Procedia PDF Downloads 75
731 Hand Gesture Interpretation Using Sensing Glove Integrated with Machine Learning Algorithms

Authors: Aqsa Ali, Aleem Mushtaq, Attaullah Memon, Monna

Abstract:

In this paper, we present a low-cost design for a smart glove that can perform sign language recognition to assist speech-impaired people. Specifically, we have designed and developed an Assistive Hand Gesture Interpreter that recognizes hand movements relevant to American Sign Language (ASL) and translates them into text for display on a Thin-Film-Transistor Liquid Crystal Display (TFT LCD) screen as well as into synthetic speech. Linear Bayes classifiers and multilayer neural networks have been used to classify the 11 feature values obtained from the sensors on the glove into one of 27 ASL alphabet signs or a predefined gesture for space. Three types of features are used: bending, using six bend sensors; orientation in three dimensions, using accelerometers; and contacts at vital points, using contact sensors. To gauge the performance of the presented design, the training database was prepared using five volunteers. The accuracy of the current version on the prepared dataset was found to be up to 99.3% for the target user. The solution combines electronics, e-textile technology, sensor technology, embedded systems, and machine learning techniques to build a low-cost wearable glove that is scrupulous, elegant, and portable.
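
As a sketch of the classification stage, the following Python example trains a naive Bayes classifier and a small multilayer perceptron on synthetic 11-feature glove readings; the feature split (6 bend, 3 accelerometer, 2 contact) and all data are assumptions for illustration, not the authors' dataset.

```python
# Sketch of the classification stage: 11-dimensional sensor feature vectors classified
# into 28 gesture classes (27 ASL signs + space). All data here are synthetic.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((280, 11))                       # 11 features per glove reading
y = rng.integers(0, 28, size=280)               # 27 alphabet signs + space gesture

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for model in (GaussianNB(), MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)):
    model.fit(X_train, y_train)
    print(type(model).__name__, "accuracy:", model.score(X_test, y_test))
```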

Keywords: American sign language, assistive hand gesture interpreter, human-machine interface, machine learning, sensing glove

Procedia PDF Downloads 273
730 Serological and Molecular Detection of Alfalfa Mosaic Virus in the Major Potato Growing Areas of Saudi Arabia

Authors: Khalid Alhudaib

Abstract:

Potato is considered one of the most important vegetable crops with high potential in Saudi Arabia. Alfalfa mosaic virus (AMV), genus Alfamovirus, family Bromoviridae, is among the most widespread viruses in potato. During the spring and fall potato growing seasons of 2015 and 2016, several field visits were conducted in the four major areas of potato cultivation (Riyadh, Qaseem, Hail, and Hard). The presence of AMV was detected in samples using ELISA, dot blot hybridization, and/or RT-PCR. The highest occurrence of AMV was observed in Qaseem (18.6%), followed by Riyadh (15.2%), while the lowest infection rates were recorded in Hard and Hail (8.3 and 10.4%, respectively). The sequences of seven AMV isolates obtained in this study were determined and aligned with the other sequences available in the GenBank database. The analyses confirmed the low variability among the AMV isolates in this study, which suggests that all AMV isolates may originate from the same source. Due to the high incidence of AMV in potato crops, other economically important susceptible crops may also become affected. This requires accurate examination of potato seed tubers to prevent the spread of the virus in Saudi Arabia. The results indicate that hybridization and ELISA are suitable techniques for the routine detection of AMV in a large number of samples, while RT-PCR is more sensitive and essential for the molecular characterization of AMV.

Keywords: Alfamovirus, AMV, Alfalfa mosaic virus, PCR, potato

Procedia PDF Downloads 149
729 Tensor Deep Stacking Neural Networks and Bilinear Mapping Based Speech Emotion Classification Using Facial Electromyography

Authors: P. S. Jagadeesh Kumar, Yang Yung, Wenli Hu

Abstract:

Speech emotion classification is a dominant research field seeking a robust and efficient classifier appropriate for different real-life applications. This work focuses on classifying different emotions from the speech signal using features related to pitch, formants, energy contours, jitter, shimmer, and spectral, perceptual, and temporal characteristics. Tensor deep stacking neural networks were employed to examine the factors that influence the classification success rate. Facial electromyography signals were collected under several forms of focus in a controlled environment by means of audio-visual stimuli. The facial electromyography signals were pre-processed using a moving average filter, and a set of arithmetic features was extracted. The extracted features were mapped onto consistent emotions using bilinear mapping. With facial electromyography signals, a database comprising diverse emotions can be built with suitable fine-tuning of features and training data. A success rate of 92% can be attained without increasing the system complexity or the computation time for sorting diverse emotional states.

Keywords: speech emotion classification, tensor deep stacking neural networks, facial electromyography, bilinear mapping, audio-visual stimuli

Procedia PDF Downloads 229
728 Seismic Behaviour of Bi-Symmetric Buildings

Authors: Yogendra Singh, Mayur Pisode

Abstract:

It is often observed that in multi-storeyed buildings the dynamic properties in the two directions are similar, which may lead to coupling between the two orthogonal modes of the building. This is particularly observed in bi-symmetric buildings (buildings with structural properties and periods approximately equal in the two directions). There is a swapping of vibrational energy between the modes in the two orthogonal directions. To avoid this coupling, the draft revision of IS:1893 proposes a minimum separation of more than 15% between the frequencies of the fundamental modes in the two directions. This study explores the seismic behaviour of bi-symmetric buildings under uniaxial and biaxial ground motions. For this purpose, three different types of 8-storey buildings, symmetric in plan, are modelled. The first building has square columns, resulting in identical periods in the two directions. The second building, with rectangular columns, has a difference of 20% between the periods in the orthogonal directions, and the third building has half of the rectangular columns aligned in one direction and the other half aligned in the other direction. The numerical analysis of the seismic response of these three buildings is performed using a set of 22 ground motions from the PEER NGA database, scaled as per FEMA P695 guidelines to represent the same level of intensity, corresponding to the Design Basis Earthquake. The results are analyzed in terms of the displacement-time response of the buildings at roof level and the corresponding maximum inter-storey drift ratios.

Keywords: bi-symmetric buildings, design code, dynamic coupling, multi-storey buildings, seismic response

Procedia PDF Downloads 227
727 Seismic Performance of Slopes Subjected to Earthquake Mainshock Aftershock Sequences

Authors: Alisha Khanal, Gokhan Saygili

Abstract:

It is commonly observed that aftershocks follow the mainshock. Aftershocks continue over a period of time with decreasing frequency, and typically there is not sufficient time for repair and retrofit within a mainshock-aftershock sequence. Usually, aftershocks are smaller in magnitude; however, aftershock ground motion characteristics such as intensity and duration can be greater than those of the mainshock due to changes in the earthquake mechanism and location with respect to the site. The seismic performance of slopes is typically evaluated based on the sliding displacement predicted to occur along a critical sliding surface. Various empirical models are available that predict sliding displacement as a function of seismic loading parameters, ground motion parameters, and site parameters, but these models do not include aftershocks. The seismic risks associated with post-mainshock slopes ('damaged slopes') subjected to aftershocks are significant. This paper extends the empirical sliding displacement models for flexible slopes subjected to earthquake mainshock-aftershock sequences (a multi-hazard approach). A dataset was developed using 144 pairs of as-recorded mainshock-aftershock sequences from the Pacific Earthquake Engineering Research Center (PEER) database. The results reveal that the combination of mainshock and aftershock increases the seismic demand on slopes relative to the mainshock alone; thus, seismic risks are underestimated if aftershocks are neglected.

Keywords: seismic slope stability, mainshock, aftershock, landslide, earthquake, flexible slopes

Procedia PDF Downloads 129
726 Compliance with the Health and Safety Standards/Regulations in the South African Mining Industry: A Literature Review

Authors: Livhuwani Muthelo, Tebogo Maria Mothiba, Rambelani Nancy Malema

Abstract:

Background: Despite occupational legislation and standards being in place in the industry, there are many reported health and safety incidents, including both occupational injuries and illnesses, in the South African mining industry. Purpose: This systematic literature review aimed to describe and identify the existing gaps in health and safety compliance within the South African mining industry and to propose future research areas. Methodology: A systematic literature review was conducted using the key concepts of health and safety, compliance, standards, and mining. A total of 102 papers published from 1994 to April 2020 were extracted from an online database search, which included a combination of South African and international government OHS legislation documents, policies, standards, reports from the mineral departments and the International Labour Office, qualitative and quantitative journal articles, dissertations, seminars, and conference proceedings. Results: The literature review revealed that, though there are laws, regulations, and standards to guide the industry on health and safety issues in South Africa, the main challenge is compliance with the existing health and safety systems, which are often not implemented. Conclusion: Gaps between research, policy, and implementation in occupational health practice in the South African mining industry were also identified.

Keywords: circumstances, non-compliance, health and safety, standards, mining industry

Procedia PDF Downloads 248
725 Supplier Carbon Footprint Methodology Development for Automotive Original Equipment Manufacturers

Authors: Nur A. Özdemir, Sude Erkin, Hatice K. Güney, Cemre S. Atılgan, Enes Huylu, Hüseyin Y. Altıntaş, Aysemin Top, Özak Durmuş

Abstract:

Carbon emissions produced during a product's life cycle, from the extraction of raw materials up to waste disposal and market consumption activities, are major contributors to global warming. In light of the science-based targets (SBT) leading the way to a zero-carbon economy for the sustainable growth of companies, carbon footprint reporting of purchased goods has become critical for identifying hotspots and best practices for emission reduction opportunities. In line with Ford Otosan's corporate sustainability strategy, research was conducted to evaluate the carbon footprint of purchased products in accordance with Scope 3 of the Greenhouse Gas (GHG) Protocol. The purpose of this paper is to develop a systematic and transparent methodology to calculate the carbon footprint of the products produced by automotive OEMs (Original Equipment Manufacturers) within the context of automotive supply chain management. To begin with, primary material data were collected through IMDS (International Material Data System) for the company's three distinct types of vehicles: the Light Commercial Vehicle (Courier), the Medium Commercial Vehicle (Transit and Transit Custom), and the Heavy Commercial Vehicle (F-MAX). The obtained material data were classified as metals, plastics, liquids, electronics, and others to gain insight into the overall material distribution of the produced vehicles, and were matched to the SimaPro Ecoinvent 3 database, one of the most extensive databases for modelling material data related to the product life cycle. The product life cycle analysis was carried out within the framework of the ISO 14040-14044 standards by addressing their requirements and procedures. A comprehensive literature review and cooperation with suppliers were undertaken to identify the production methods of the parts used in the vehicles and to find out the amount of scrap generated during part production. Cumulative weight and material information, together with the related production processes of the components, were listed and multiplied by current sales figures. The results of the study provide key modelling of the carbon footprint of products and processes based on a scientific approach, driving sustainable growth by setting straightforward, science-based emission reduction targets. Hence, this study aims to identify the hotspots and, correspondingly, to show how carbon footprint estimates can be integrated into the company's supply chain management by defining suitable actions in line with climate science. According to the emission values arising from the production phase, including raw material extraction and material processing, for the Ford Otosan vehicles examined in this study, GHG emissions from the production of the metals used in the HCV, MCV, and LCV account for more than half of the carbon footprint of vehicle production. Correspondingly, aluminum and steel have the largest share among all material types; achieving carbon neutrality in the steel and aluminum industry is of great significance to the world and will also have an immense impact on the automobile industry. A strategic product sustainability plan that includes the use of secondary materials, conversion to green energy, and low-energy process design is required to reduce the emissions of steel, aluminum, and plastics in view of the projected increase in total volume by 2030.
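
The core cradle-to-gate arithmetic described above (part mass multiplied by a material-specific emission factor, scaled by sales volume) can be sketched as follows; the masses and emission factors are placeholders, not Ecoinvent values from the study.

```python
# Minimal sketch of the cradle-to-gate calculation: material mass per vehicle times an
# emission factor, scaled by sales volume. All numbers are illustrative placeholders.
parts = [
    # (material class, mass per vehicle in kg)
    ("steel",    950.0),
    ("aluminum", 120.0),
    ("plastics", 180.0),
]
emission_factors = {"steel": 2.0, "aluminum": 8.0, "plastics": 3.0}   # kg CO2e per kg (illustrative)
vehicles_sold = 10_000

per_vehicle = sum(mass * emission_factors[mat] for mat, mass in parts)
print(f"production footprint per vehicle: {per_vehicle:.0f} kg CO2e")
print(f"fleet total: {per_vehicle * vehicles_sold / 1e6:.1f} kt CO2e")
```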

Keywords: automotive, carbon footprint, IMDS, scope 3, SimaPro, sustainability

Procedia PDF Downloads 89
724 Automatic Motion Trajectory Analysis for Dual Human Interaction Using Video Sequences

Authors: Yuan-Hsiang Chang, Pin-Chi Lin, Li-Der Jeng

Abstract:

Advances in image and video processing techniques have enabled the development of intelligent video surveillance systems. This study aimed to automatically detect moving human objects and to analyze events of dual human interaction in a surveillance scene. Our system was developed in four major steps: image preprocessing, human object detection, human object tracking, and motion trajectory analysis. Adaptive background subtraction and image processing techniques were used to detect and track moving human objects. To solve the occlusion problem during interaction, a Kalman filter was used to retain a complete trajectory for each human object. Finally, motion trajectory analysis was developed to distinguish between interaction and non-interaction events based on derivatives of the trajectories related to the speed of the moving objects. Using a database of 60 video sequences, our system achieved a classification accuracy of 80% for interaction events and 95% for non-interaction events, respectively. In summary, we have explored a system for the automatic classification of interaction and non-interaction events using surveillance cameras. Ultimately, this system could be incorporated into an intelligent surveillance system for the detection and/or classification of abnormal or criminal events (e.g., theft, snatching, fighting, etc.).
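
As a sketch of the occlusion-handling step, a constant-velocity Kalman filter that keeps predicting a person's position when detections drop out can be written as below; the noise magnitudes and measurements are illustrative, not values from the paper.

```python
# Sketch of occlusion handling: a constant-velocity Kalman filter keeps predicting the
# (x, y) position when detections drop out, so each trajectory stays complete.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],     # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)
H = np.array([[1, 0, 0, 0],      # only position is measured
              [0, 1, 0, 0]], float)
Q = np.eye(4) * 0.01             # process noise (illustrative)
R = np.eye(2) * 1.0              # measurement noise (illustrative)

x = np.zeros(4)
P = np.eye(4)

def kalman_step(x, P, z=None):
    x, P = F @ x, F @ P @ F.T + Q                     # predict
    if z is not None:                                 # update only when a detection exists
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([1.0, 1.0]), np.array([2.1, 1.9]), None, None, np.array([5.2, 4.8])]:
    x, P = kalman_step(x, P, z)                       # None = occluded frame
    print("estimated position:", x[:2])
```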

Keywords: motion detection, motion tracking, trajectory analysis, video surveillance

Procedia PDF Downloads 524
723 Three Tier Indoor Localization System for Digital Forensics

Authors: Dennis L. Owuor, Okuthe P. Kogeda, Johnson I. Agbinya

Abstract:

Mobile localization has attracted a great deal of attention recently due to the introduction of wireless networks. Although several localization algorithms and systems have been implemented and discussed in the literature, very few researchers have exploited the gap that exists between indoor localization, tracking, external storage of location information, and outdoor localization for the purpose of digital forensics during and after a disaster. The contribution of this paper lies in the implementation of a robust system that is capable of locating and tracking mobile device users and of storing location information in the cloud for both indoor and partially outdoor environments. The system can be used during a disaster to track and locate mobile phone users. The developed system is a mobile application built with Android, Hypertext Preprocessor (PHP), Cascading Style Sheets (CSS), JavaScript, and MATLAB for Android mobile users. Using the Waterfall model of software development, we have implemented a three-level system that is able to track, locate, and store mobile device information in a secure cloud database on an almost real-time basis. The outcome of the study showed that the developed system is efficient with regard to tracking and locating mobile devices. The system is also flexible, i.e., it can be used in any building with few adjustments. Finally, the system is accurate both indoors and outdoors in terms of locating and tracking mobile devices.
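
As an illustration of the fingerprinting approach named in the keywords, the following minimal Python sketch matches a live vector of Wi-Fi RSSI readings against a survey database of labelled fingerprints using a nearest-neighbour classifier; the RSSI values and room labels are invented for the example.

```python
# Sketch of Wi-Fi fingerprinting: match a live RSSI vector against a survey database
# of labelled fingerprints with k-nearest neighbours. All values are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# offline phase: fingerprints collected at known locations (one row per survey point)
fingerprints = np.array([
    [-40, -70, -80],     # RSSI from three access points
    [-42, -68, -83],
    [-75, -45, -60],
    [-78, -44, -58],
])
locations = ["room_A", "room_A", "room_B", "room_B"]

model = KNeighborsClassifier(n_neighbors=3).fit(fingerprints, locations)

# online phase: locate a device from a fresh RSSI reading
print(model.predict([[-41, -69, -81]]))   # -> ['room_A']
```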

Keywords: indoor localization, digital forensics, fingerprinting, tracking and cloud

Procedia PDF Downloads 313
722 Simulation IDM for Schedule Generation of Slip-Form Operations

Authors: Hesham A. Khalek, Shafik S. Khoury, Remon F. Aziz, Mohamed A. Hakam

Abstract:

The linearity of slip-form operations is a source of planning complications, and the operation can be subject to bottlenecks at any point, so careful planning is required in order to achieve success. On the other hand, discrete-event simulation (DES) concepts can be applied to simulate and analyze construction operations and to efficiently support construction scheduling. Nevertheless, the preparation of input data for construction simulation is very challenging, time-consuming, and prone to human error. Therefore, to enhance the benefits of using DES in construction scheduling, this study proposes an integrated module that establishes a framework for automating the generation of time schedules and decision support for slip-form construction projects, particularly during the project feasibility study phase, by using data exchange between project data stored in an intermediate database, the DES model, and scheduling software. Using the stored information, the proposed system creates construction task attributes (e.g., activity durations, material quantities, and resource amounts); the DES model then uses all the given information to create a proposal for the construction schedule automatically. This research demonstrates a flexible slip-form project modeling, rapid scenario-based planning, and schedule generation approach that may be of interest to both practitioners and researchers.

Keywords: discrete-event simulation, modeling, construction planning, data exchange, scheduling generation, EZstrobe

Procedia PDF Downloads 358
721 Correlation between Speech Emotion Recognition Deep Learning Models and Noises

Authors: Leah Lee

Abstract:

This paper examines the correlation between deep learning models and emotions under added noise, to see whether or not noise masks emotions. The deep learning models used are a plain convolutional neural network (CNN), an auto-encoder, long short-term memory (LSTM), and Visual Geometry Group-16 (VGG-16). The emotion datasets used are the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D), the Toronto Emotional Speech Set (TESS), and Surrey Audio-Visual Expressed Emotion (SAVEE). To make the data four times bigger, offset, stretch, and pitch augmentations of the audio files are utilized. From the augmented datasets, five different features are extracted as inputs to the models. There are eight different emotions to be classified. The noise variations are white noise, dog barking, and cough sounds. The signal-to-noise ratio (SNR) is varied over 0, 20, and 40. In summary, for each deep learning model, nine different sets with noise and SNR variations, as well as the augmented audio files without any noise, are used in the experiment. To compare the results of the deep learning models, the accuracy and the receiver operating characteristic (ROC) are examined.
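
The noise-mixing step implied above (adding a noise clip to a speech clip at a chosen SNR before feature extraction) can be sketched as follows; the signals are synthetic stand-ins for the actual corpus audio.

```python
# Sketch of mixing a noise clip into a speech clip at a target SNR before feature
# extraction; signals here are synthetic stand-ins for RAVDESS/CREMA-D audio.
import numpy as np

def add_noise_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then mix."""
    noise = np.resize(noise, speech.shape)               # repeat/trim noise to match length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))   # placeholder 1-second tone
white_noise = rng.normal(size=8000)

for snr in (0, 20, 40):                                       # the three SNR settings used above
    noisy = add_noise_at_snr(speech, white_noise, snr)
    print(snr, "->", float(np.mean(noisy ** 2)))
```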

Keywords: auto-encoder, convolutional neural networks, long short-term memory, speech emotion recognition, visual geometry group-16

Procedia PDF Downloads 55
720 Classification of Multiple Cancer Types with Deep Convolutional Neural Network

Authors: Nan Deng, Zhenqiu Liu

Abstract:

Thousands of patients with metastatic tumors are diagnosed with cancers of unknown primary sites each year. The inability to identify the primary cancer site may lead to inappropriate treatment and unexpected prognosis. Nowadays, a large amount of genomic and transcriptomic cancer data has been generated by next-generation sequencing (NGS) technologies, and The Cancer Genome Atlas (TCGA) database has accrued thousands of human cancer tumors and healthy controls, which provides an abundant resource for differentiating cancer types. Meanwhile, deep convolutional neural networks (CNNs) have shown high accuracy in classification among a large number of image object categories. Here, we utilize 25 primary cancer tumor types and 3 normal tissues from TCGA, convert their RNA-Seq gene expression profiles to color images, and train, validate, and test a CNN classifier directly on these images. The performance results show that our CNN classifier can achieve >80% test accuracy on most of the tumors and normal tissues. Since the gene expression pattern of distant metastases is similar to that of their primary tumors, the CNN classifier may provide a potential computational strategy for identifying the unknown primary origin of metastatic cancer in order to plan appropriate treatment for patients.

Keywords: bioinformatics, cancer, convolutional neural network, deep leaning, gene expression pattern

Procedia PDF Downloads 283
719 Genome-Wide Functional Analysis of Phosphatase in Cryptococcus neoformans

Authors: Jae-Hyung Jin, Kyung-Tae Lee, Yee-Seul So, Eunji Jeong, Yeonseon Lee, Dongpil Lee, Dong-Gi Lee, Yong-Sun Bahn

Abstract:

Cryptococcus neoformans causes cryptococcal meningoencephalitis mainly in immunocompromised patients, but also in immunocompetent people, and therapeutic options for treating cryptococcosis are limited. Some signaling pathways, including the cyclic AMP pathway, the MAPK pathway, and the calcineurin pathway, play a central role in the regulation of the growth, differentiation, and virulence of C. neoformans. To understand the signaling networks regulating the virulence of C. neoformans, we selected 114 putative phosphatase genes, one of the major components of signaling networks, in the genome of C. neoformans. We identified the putative phosphatases based on annotations in the C. neoformans var. grubii genome database provided by the Broad Institute and the National Center for Biotechnology Information (NCBI) and performed a BLAST search of the phosphatases of Saccharomyces cerevisiae, Aspergillus nidulans, Candida albicans, and Fusarium graminearum against Cryptococcus neoformans. We classified the putative phosphatases into 14 groups based on InterPro phosphatase domain annotation. Here, we constructed 170 signature-tagged gene-deletion strains for 91 putative phosphatases through homologous recombination methods. We examined their phenotypic traits under 30 different in vitro conditions, covering growth, differentiation, stress response, antifungal resistance, and virulence-factor production.

Keywords: human fungal pathogen, phosphatase, deletion library, functional genomics

Procedia PDF Downloads 342
718 Semantic Differences between Bug Labeling of Different Repositories via Machine Learning

Authors: Pooja Khanal, Huaming Zhang

Abstract:

Labeling of issues/bugs, also known as bug classification, plays a vital role in software engineering. Some known labels/classes of bugs are 'User Interface', 'Security', and 'API'. Most of the time, when a reporter reports a bug, they try to assign some predefined label to it. Those issues are reported for a project, and each project is a repository in GitHub/GitLab which contains multiple issues. There are many software project repositories, ranging from individual projects to commercial projects. The labels assigned in different repositories may depend on various factors such as human instinct, the generalization of labels, the label assignment policy followed by the reporter, etc. While the reporter of an issue may instinctively give that issue a label, another person reporting the same issue may label it differently. Consequently, it is not known mathematically whether a label in one repository is similar to or different from the label in another repository. Hence, the primary goal of this research is to find the semantic differences between the bug labeling of different repositories via machine learning. Independent optimal classifiers for individual repositories are first built using text features from the reported issues. The optimal classifiers may include a combination of multiple classifiers stacked together. Then, those classifiers are used to cross-test other repositories, which allows the result to be deduced mathematically. The products of this ongoing research include a formalized open-source GitHub issues database that is used to deduce the similarity of the labels pertaining to the different repositories.
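
The cross-testing idea can be illustrated with a few lines of scikit-learn: train a text classifier on one repository's labelled issues and score it on another repository's issues to measure how transferable a label is. The issue texts and labels below are toy examples, not drawn from the study's database.

```python
# Sketch of cross-testing: train on repository A's issues, evaluate on repository B's
# issues to quantify how similarly the two communities use a label.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

repo_a_texts = ["button misaligned on settings page", "token leaked in debug log",
                "dropdown overlaps footer", "api key exposed in response"]
repo_a_labels = ["User Interface", "Security", "User Interface", "Security"]

repo_b_texts = ["style broken on mobile layout", "password stored in plain text"]
repo_b_labels = ["User Interface", "Security"]

clf_a = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf_a.fit(repo_a_texts, repo_a_labels)

# cross-test: how well does repository A's notion of each label transfer to repository B?
print("cross-repository agreement:", clf_a.score(repo_b_texts, repo_b_labels))
```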

Keywords: bug classification, bug labels, GitHub issues, semantic differences

Procedia PDF Downloads 179
717 Post Pandemic Mobility Analysis through Indexing and Sharding in MongoDB: Performance Optimization and Insights

Authors: Karan Vishavjit, Aakash Lakra, Shafaq Khan

Abstract:

The COVID-19 pandemic has pushed healthcare professionals to use big data analytics as a vital tool for tracking and evaluating the effects of contagious viruses. To effectively analyze huge datasets, efficient NoSQL databases are needed. The analysis of post-COVID-19 health and well-being outcomes and the evaluation of the effectiveness of government efforts during the pandemic are made possible by this research's integration of several datasets, which cuts down on query processing time and creates predictive visual artifacts. We recommend applying sharding and indexing technologies to improve query effectiveness and scalability as the dataset expands. Effective data retrieval and analysis are made possible by distributing the datasets across a sharded database and indexing the individual shards. Analyzing the connections between governmental activities, poverty levels, and post-pandemic well-being is the key goal. We aim to evaluate the effectiveness of governmental initiatives to improve health and lower poverty levels by utilising advanced data analysis and visualisations. The findings provide relevant data that supports the advancement of UN sustainability objectives, future pandemic preparation, and evidence-based decision-making. This study shows how big data and NoSQL databases may be used to address problems in global health.
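
A minimal PyMongo sketch of the indexing and sharding set-up described above is shown below; it assumes an already-running sharded MongoDB cluster (mongos router and config servers), and the database, collection, and field names are illustrative.

```python
# Sketch of the indexing and sharding set-up, using PyMongo against a MongoDB cluster
# that already has a mongos router and config servers running. Names are illustrative.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")   # mongos router of the sharded cluster
db = client["pandemic"]

# index the fields used by the analytical queries
db.outcomes.create_index([("country", ASCENDING), ("reported_date", ASCENDING)])

# enable sharding on the database and shard the collection on a hashed key
client.admin.command("enableSharding", "pandemic")
client.admin.command(
    "shardCollection", "pandemic.outcomes",
    key={"country": "hashed"},
)
```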

Keywords: big data, COVID-19, health, indexing, NoSQL, sharding, scalability, well being

Procedia PDF Downloads 52
716 Changing Trends in the Use of Induction Agents for General Anesthesia for Cesarean Section

Authors: Mahmoud Hassanin, Amita Gupta

Abstract:

Background: In current practice, thiopentone is not cost-effective; it adds resource wastage, a risk of drug error with antibiotics, a short shelf life, infection risk, and a risk of delay during preparation for a category one caesarean section. In current use, there is no significant difference between thiopentone and the alternative, and no preference for it. Aims and objectives: Patient safety, cost-effective use of trust resources, problem awareness, and consideration of improving current practice. Methods: A local departmental survey was conducted, and its results were considered in conjunction with the many studies that support the change. Results: More than 50% (15 of 29) of respondents are already using propofol, and more than 75% of participants are willing to shift to propofol if it becomes standard. The cost analysis revealed that thiopentone costs £60 per 10 × 500 compared with £5.20 per 10 × 200 of propofol, giving a thiopentone cost of about £2,190 per year. With approximately 35-40 general anaesthetics per year, propofol would cost approximately £20, given that it is a well-established practice. We would not only save money but also be environmentally friendly by avoiding additional carbon footprint. Recommendation: Thiopentone is rarely used as an induction agent for category one caesarean sections in our obstetric emergency theatres; most obstetric anaesthetists are using propofol. Both propofol and thiopentone (as powder, not drawn up) should be kept ready in the category one caesarean section emergency drug tray until the department completely changes the practice protocol. A further retrospective study is required to compare the outcomes for these induction agents through the local database.

Keywords: thiopentone, propofol, category 1 caesarean, induction agents

Procedia PDF Downloads 121
715 Application of Bim Model Data to Estimate ROI for Robots and Automation in Construction Projects

Authors: Brian Romansky

Abstract:

There are many practical, commercially available robots and semi-autonomous systems that are currently available for use in a wide variety of construction tasks. Adoption of these technologies has the potential to reduce the time and cost to deliver a project, reduce variability and risk in delivery time, increase quality, and improve safety on the job site. These benefits come with a cost for equipment rental or contract fees, access to specialists to configure the system, and time needed for set-up and support of the machines while in use. Calculation of the net ROI (Return on Investment) requires detailed information about the geometry of the site, the volume of work to be done, the overall project schedule, as well as data on the capabilities and past performance of available robotic systems. Assembling the required data and comparing the ROI for several options is complex and tedious. Many project managers will only consider the use of a robot in targeted applications where the benefits are obvious, resulting in low levels of adoption of automation in the construction industry. This work demonstrates how data already resident in many BIM (Building Information Model) projects can be used to automate ROI estimation for a sample set of commercially available construction robots. Calculations account for set-up and operating time along with scheduling support tasks required while the automated technology is in use. Configuration parameters allow for prioritization of time, cost, or safety as the primary benefit of the technology. A path toward integration and use of automatic ROI calculation with a database of available robots in a BIM platform is described.
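
As a sketch of the net-ROI comparison the paper describes, the following Python function compares the estimated cost of doing a quantity of work manually against doing it with a rented robot, including set-up and support time; every number and parameter name is an illustrative assumption, not data from the BIM integration.

```python
# Minimal sketch of a net-ROI comparison: savings from quantities taken out of a BIM
# model versus the cost of deploying a robot. All numbers are illustrative assumptions.
def robot_roi(work_quantity, manual_rate, robot_rate, labor_cost_per_hour,
              rental_per_day, setup_hours, support_hours_per_day, hours_per_day=8.0):
    manual_hours = work_quantity / manual_rate
    robot_hours = work_quantity / robot_rate + setup_hours
    robot_days = robot_hours / hours_per_day
    cost_manual = manual_hours * labor_cost_per_hour
    cost_robot = (robot_days * rental_per_day
                  + (setup_hours + support_hours_per_day * robot_days) * labor_cost_per_hour)
    return (cost_manual - cost_robot) / cost_robot     # net return per unit of cost invested

# e.g. 1,200 m^2 of layout marking taken from the BIM model (hypothetical values)
print(f"ROI: {robot_roi(1200, manual_rate=15, robot_rate=60, labor_cost_per_hour=55, rental_per_day=900, setup_hours=4, support_hours_per_day=1):.2f}")
```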

Keywords: automation, BIM, robot, ROI

Procedia PDF Downloads 71