Search results for: analysis and real time information about liquefaction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 46815


44835 Study on Disaster Prevention Plan for an Electronic Industry in Thailand

Authors: S. Pullteap, M. Pathomsuriyaporn

Abstract:

In this article, a study of employees’ opinions on the factors affecting the flood prevention and corrective action plans of an electronics plant, Sharp Manufacturing (Thailand) Co., Ltd., is presented. Survey data from 175 workers and supervisors were selected for analysis. The results show that employees place high importance (77.8%) on the need for a subsidy at the time of a disaster, while the plan focusing on flood protection of rehabilitation equipment is rated at an intermediate level (79.8%). Hypothesis testing found that education level affects the needs factor at the time of a flood disaster. Moreover, most respondents give priority to the flood disaster risk management factor. Consequently, we found that the flood prevention plan is rated at a high level, especially for information monitoring, at 93.4% for the supervisor group. Most respondents (up to 80%) expect flooding to affect the industry, so the emphasis placed on flood management plans is considerable.

Keywords: flood prevention plan, flood event, electronic industrial plant, disaster, risk management

Procedia PDF Downloads 327
44834 An Integrated Real-Time Hydrodynamic and Coastal Risk Assessment Model

Authors: M. Reza Hashemi, Chris Small, Scott Hayward

Abstract:

The Northeast Coast of the US faces damaging effects of coastal flooding and winds due to Atlantic tropical and extratropical storms each year. Historically, several large storm events have produced substantial damage in the region, most notable of which were the Great Atlantic Hurricane of 1938, Hurricane Carol, Hurricane Bob, and recently Hurricane Sandy (2012). The objective of this study was to develop an integrated modeling system that could be used as a forecasting/hindcasting tool to evaluate and communicate the risk coastal communities face from these coastal storms. This modeling system utilizes the ADvanced CIRCulation (ADCIRC) model for storm surge predictions and the Simulating Waves Nearshore (SWAN) model for the wave environment. These models were coupled, passing information to each other and computing over the same unstructured domain, allowing for the most accurate representation of the physical storm processes. The coupled SWAN-ADCIRC model was validated and has been set up to perform real-time forecast simulations (as well as hindcasts). Modeled storm parameters were then passed to a coastal risk assessment tool. This tool, which is generic and universally applicable, generates spatial structural damage estimate maps on an individual-structure basis for an area of interest. The required inputs for the coastal risk model include detailed information about the individual structures, inundation levels, and wave heights for the selected region. Additionally, the calculation of wind damage to structures was incorporated. The integrated coastal risk assessment system was then tested and applied to Charlestown, a small, vulnerable coastal town along the southern shore of Rhode Island. The modeling system was applied to Hurricane Sandy and a synthetic storm. In both storm cases, the effect of natural dunes on coastal risk was investigated.
The resulting damage maps for the area (Charlestown) clearly showed that the dune-eroded scenarios affected more structures and increased the estimated damage. The system was also tested in forecast mode for a large Nor'easter, Stella (March 2017). The results showed good performance of the coupled model in forecast mode when compared to observations. Finally, the nearshore model XBeach was nested within this regional grid (ADCIRC-SWAN) to simulate nearshore sediment transport processes and coastal erosion. Hurricane Irene (2011) was used to validate XBeach on the basis of a unique beach profile dataset for the region. XBeach showed relatively good performance, being able to estimate eroded volumes along the beach transects with a mean error of 16%. The validated model was then used to analyze the effectiveness of several erosion mitigation methods recommended in a recent study of coastal erosion in New England: beach nourishment, a coastal bank (engineered core), and a submerged breakwater, as well as an artificial surfing reef. It was shown that beach nourishment and coastal banks perform best in mitigating shoreline retreat and coastal erosion.
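The per-structure damage mapping described above can be illustrated with a toy depth-damage calculation. This is a hedged sketch only: the damage curve, structure records, and surge value below are invented for illustration and are not the authors' actual risk model.

```python
# Hypothetical per-structure flood damage estimate from inundation depth.
# All names, curves, and values are illustrative assumptions.

def flood_damage_fraction(inundation_m: float, first_floor_elev_m: float) -> float:
    """Piecewise-linear depth-damage curve: no damage below the first
    floor, rising linearly to full loss at 3 m of water above it."""
    depth = inundation_m - first_floor_elev_m
    if depth <= 0.0:
        return 0.0
    return min(depth / 3.0, 1.0)

# Hypothetical structure inventory for the area of interest.
structures = [
    {"id": "A1", "value": 250_000, "first_floor_elev_m": 1.0},
    {"id": "B2", "value": 400_000, "first_floor_elev_m": 2.5},
]

surge_m = 2.8  # modelled inundation level at these structures (illustrative)
for s in structures:
    frac = flood_damage_fraction(surge_m, s["first_floor_elev_m"])
    print(s["id"], round(frac * s["value"], 1))
```

In a real system of this kind, the inundation and wave fields would come from the coupled SWAN-ADCIRC output and the damage curves from engineering fragility data rather than the linear placeholder above.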

Keywords: ADCIRC, coastal flooding, storm surge, coastal risk assessment, living shorelines

Procedia PDF Downloads 116
44833 Curbing Cybercrime by Application of Internet Users’ Identification System (IUIS) in Nigeria

Authors: K. Alese Boniface, K. Adu Michael

Abstract:

Cybercrime, in addition to traditional crime, is becoming a major challenge in Nigeria. The inability to identify perpetrators is one of the reasons for the growing menace. This paper proposes a design for monitoring internet users’ activities in order to curb cybercrime. It requires redefining the operations of Internet Service Providers (ISPs), which would mandate users to be authenticated before accessing the internet. In implementing this work, which can be adapted to a larger scale, a virtual router application was developed and configured to mimic a real router device. A sign-up portal was developed to allow users to register with the ISP. The portal asks for identification information, including bio-data and government-issued identification data such as the National Identity Card number. A unique username and password are chosen by the user to enable access to the internet; these are used to link the user to the Internet Protocol (IP) address of any system he uses on the internet, thereby associating him with any criminal act related to that IP address at that particular time. Questions such as “What happens when another user knows the password and uses it to commit a crime?” and other pertinent issues are addressed.
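The authenticate-then-associate workflow the proposal relies on can be sketched as follows. The names, the simple hashing scheme, and the in-memory log are illustrative assumptions; a real ISP deployment would need hardened credential storage and a persistent, tamper-evident session log.

```python
# Minimal sketch (hypothetical names) of mapping an authenticated ISP
# account to the IP address it uses, so the IP can later be traced back.

import hashlib
from datetime import datetime, timezone

session_log = []  # would be a persistent, auditable store in practice

def hash_password(password: str, salt: str) -> str:
    # Illustrative only; production systems should use a slow KDF.
    return hashlib.sha256((salt + password).encode()).hexdigest()

users = {"kunle": hash_password("s3cret", salt="kunle")}

def authenticate_and_log(username: str, password: str, ip: str) -> bool:
    """Grant internet access only after authentication, recording the
    user/IP/time association the proposal depends on."""
    ok = users.get(username) == hash_password(password, salt=username)
    if ok:
        session_log.append({
            "user": username,
            "ip": ip,
            "start": datetime.now(timezone.utc),
        })
    return ok

def who_used(ip: str):
    """Trace an IP address back to the authenticated account(s)."""
    return [e["user"] for e in session_log if e["ip"] == ip]

authenticate_and_log("kunle", "s3cret", "197.210.52.7")
print(who_used("197.210.52.7"))  # ['kunle']
```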

Keywords: cybercrime, sign-up portal, internet service provider (ISP), internet protocol address (IP address)

Procedia PDF Downloads 279
44832 Condition Based Assessment of Power Transformer with Modern Techniques

Authors: Piush Verma, Y. R. Sood

Abstract:

This paper provides information on diagnostic techniques for condition monitoring of power transformers (PT). It deals with the practical importance of transformer diagnostics in the electrical engineering field. The life of a transformer depends upon its insulation, i.e., paper and oil. The major testing techniques applied to transformer oil and paper are dissolved gas analysis, furfural analysis, radio interference, acoustic emission, infrared emission, frequency response analysis, power factor, polarization spectrum, magnetizing currents, and turns and winding ratio. A review of modern developments in this practical technology is presented.

Keywords: temperature, condition monitoring, diagnostics methods, paper analysis techniques, oil analysis techniques

Procedia PDF Downloads 433
44831 Influence of Wind Induced Fatigue Damage in the Reliability of Wind Turbines

Authors: Emilio A. Berny-Brandt, Sonia E. Ruiz

Abstract:

Steel tubular towers serving as support structures for large wind turbines are subject to several hundred million stress cycles arising from the turbulent nature of the wind. This causes high-cycle fatigue, which can govern tower design. The practice of maintaining the support structure after wind turbines reach their typical 20-year design life has become common, but often without quantifying the changes in the reliability of the tower. There are several studies on this topic, but most are based on the S-N curve approach using Miner's rule for damage summation, the de facto standard in the wind industry. However, the qualitative nature of Miner's method makes the use of fracture mechanics desirable for measuring the effects of fatigue on the capacity curve of the structure, which is important in order to evaluate the integrity and reliability of these towers. Temporally and spatially varying wind speed time histories are simulated based on power spectral density and coherence functions. The simulations are then applied to a SAP2000 finite element model, and step-by-step analysis is used to obtain the stress time histories for a range of representative wind speeds expected during service conditions of the wind turbine. The rainflow method is then used to obtain cycle and stress range information for each of these time histories, and a statistical analysis is performed to obtain the distribution parameters of each variable. Monte Carlo simulation is used to evaluate crack growth over time at the tower base using the Paris-Erdogan equation. A nonlinear static pushover analysis is performed to assess the capacity curve of the structure after a number of years in service. The capacity curves are then used to evaluate the changes in reliability of a steel tower located in Oaxaca, Mexico, where wind energy facilities are expected to grow in the near future.
Results show that fatigue on the tower base can have significant effects on the structural capacity of the wind turbine, especially after the 20-year design life when the crack growth curve starts behaving exponentially.
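The crack-growth step of the method above can be sketched with the Paris-Erdogan law, da/dN = C(ΔK)^m with ΔK = Δσ·Y·√(πa). The material constants, geometry factor, and stress-range distribution below are illustrative placeholders, not the paper's fitted values, and each run uses a single sampled stress range rather than a full rainflow spectrum.

```python
# Illustrative Monte Carlo of fatigue crack growth via the Paris-Erdogan
# law. All parameter values are assumptions for demonstration only.

import math
import random

def grow_crack(a0_m, delta_sigma_mpa, cycles, C=1e-12, m=3.0, Y=1.12,
               step=10_000):
    """Integrate Paris' law in blocks of `step` cycles; returns the
    final crack size in metres."""
    a = a0_m
    for _ in range(0, cycles, step):
        dK = delta_sigma_mpa * Y * math.sqrt(math.pi * a)  # MPa*sqrt(m)
        a += C * dK**m * step
    return a

random.seed(0)
# Stress ranges would normally be sampled from a distribution fitted to
# rainflow-counted cycles; a normal distribution stands in for it here.
finals = [grow_crack(a0_m=1e-3, delta_sigma_mpa=random.gauss(40, 5),
                     cycles=10**7) for _ in range(200)]
print(f"mean final crack size: {sum(finals) / len(finals) * 1e3:.3f} mm")
```

The accelerating growth the authors report after the design life follows from the feedback in the loop: as `a` increases, so does ΔK, and hence the per-cycle growth.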

Keywords: crack growth, fatigue, Monte Carlo simulation, structural reliability, wind turbines

Procedia PDF Downloads 517
44830 A Probability Analysis of Construction Project Schedule Using Risk Management Tool

Authors: A. L. Agarwal, D. A. Mahajan

Abstract:

The construction industry tumbled along with other sectors during the recent economic crash. The construction business has not recovered since and is still passing through a slowdown phase, with the result that many real estate and infrastructure projects have not been completed on schedule or within budget. There are many theories, tools, and techniques, with software packages, available in the market to analyze construction schedules. This study focuses on the construction project schedule and the uncertainties associated with construction activities. An infrastructure construction project was considered for the analysis of uncertainty in the project activities affecting project duration, and the analysis was done using @RISK software. Simulation results arising from three probability distribution functions are compiled to help construction project managers plan more realistic schedules for individual construction activities as well as overall project completion, to be documented in the contract, and to avoid compensation or claims arising from missing the planned schedule.
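The kind of simulation a risk tool such as @RISK performs can be sketched in a few lines: sample each activity duration from a probability distribution and accumulate the project completion time over many runs. The activities, durations, and choice of a triangular distribution below are illustrative assumptions, not the study's inputs.

```python
# Sketch of Monte Carlo schedule analysis with sampled activity durations.
import random

random.seed(42)

# (activity, optimistic, most likely, pessimistic) durations in days
activities = [
    ("earthworks", 20, 30, 50),
    ("foundation", 25, 35, 60),
    ("superstructure", 60, 80, 120),
]

def simulate_once():
    # A triangular distribution is one of the candidates a planner might
    # compare (e.g. against PERT-beta or uniform) before fixing dates.
    return sum(random.triangular(lo, hi, mode)
               for _, lo, mode, hi in activities)

runs = sorted(simulate_once() for _ in range(10_000))
p50, p90 = runs[5_000], runs[9_000]
print(f"P50 completion: {p50:.0f} days, P90: {p90:.0f} days")
```

Quoting a high percentile such as P90 in the contract, rather than the deterministic sum of most-likely durations, is what protects against the claims the abstract mentions.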

Keywords: construction project, distributions, project schedule, uncertainty

Procedia PDF Downloads 350
44829 Empirical Investigation of Gender Differences in Information Processing Style, Tinkering, and Self-Efficacy for Robot Tele-Operation

Authors: Dilruba Showkat, Cindy Grimm

Abstract:

As robots become more ubiquitous, it is important to understand how different groups of people respond to possible ways of interacting with a robot. In this study, we focused on gender differences while users tele-operated a humanoid robot that was physically co-located with them. We investigated three factors during the human-robot interaction: (1) information processing strategy, (2) self-efficacy, and (3) tinkering or exploratory behavior. The experimental results show that information on how to use the robot was processed comprehensively by the female participants, whereas males processed it selectively (p < 0.001). Males were more confident when using the robot than females (p = 0.0002). Males tinkered more with the robot than females (p = 0.0021). We found that tinkering was positively correlated (p = 0.0068) with task success and negatively correlated (p = 0.0032) with task completion time. Tinkering might therefore have resulted in greater task success and lower task completion time for males. Our results show the importance of accounting for gender differences when developing interfaces for interacting with robots; findings from this research can inform robot design decisions and open new research directions.

Keywords: humanoid robots, tele-operation, gender differences, human-robot interaction

Procedia PDF Downloads 167
44828 Personalized Learning: An Analysis Using Item Response Theory

Authors: A. Yacob, N. Hj. Ali, M. H. Yusoff, M. Y. MohdSaman, W. M. A. F. W. Hamzah

Abstract:

Personalized learning is becoming increasingly popular, as it is not restricted by time, place, or other barriers. This study proposes an analysis of personalized learning using Item Response Theory (IRT), which considers course material difficulty and learner ability. The study investigates twenty undergraduate students at TATI University College who are taking a programming subject. Using IRT, it was found that finding the most appropriate problem level for each student, by including high- and low-level test items together, is not a problem. Thus, student abilities can be assessed more accurately and fairly. Learners who experience more anxiety carry a heavier cognitive load and receive lower test scores. Instructors are encouraged to provide a supportive learning environment to enhance learning effectiveness, because Cognitive Load Theory concerns the limited capacity of the brain to absorb new information.
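The IRT machinery referred to above can be illustrated with the one-parameter (Rasch) model, where the probability of a correct response depends only on the gap between learner ability θ and item difficulty b. The ability and difficulty values below are illustrative, not estimates from the study's data.

```python
# Rasch (1-parameter IRT) model: P(correct) = 1 / (1 + exp(-(theta - b))).
import math

def p_correct(theta: float, b: float) -> float:
    """Probability that a learner of ability theta answers an item of
    difficulty b correctly, under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A fairly able student (theta = 1.0) against items of rising difficulty:
for b in (-1.0, 0.0, 1.0, 2.0):
    print(f"difficulty {b:+.1f}: P(correct) = {p_correct(1.0, b):.2f}")
```

Matching item difficulty to estimated ability (θ ≈ b, so P ≈ 0.5) yields the most informative items per student, which is what makes the mixed high/low-level item pools mentioned above workable.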

Keywords: assessment, item response theory, cognitive load theory, learning, motivation, performance

Procedia PDF Downloads 317
44827 Development of a Methodology for Surgery Planning and Control: A Management Approach to Handle the Conflict of High Utilization and Low Overtime

Authors: Timo Miebach, Kirsten Hoeper, Carolin Felix

Abstract:

In times of competitive pressure and demographic change, hospitals have to reconsider their strategies as companies. Since operations are one of the main sources of income and, at the same time, one of the primary cost drivers, a process-oriented approach and an efficient use of resources seem to be the right way to secure a consistent market position. Thus, efficient operating room occupancy planning is an important determinant of the success and continued existence of these institutions. A high utilization of resources is essential. This means a very high, but nevertheless sensible, capacity-oriented utilization of working systems, which can be realized by avoiding downtimes and by thoughtful occupancy planning. This engineering approach should help hospitals to reach their break-even point. The first aim is to establish a strategy point, which can be used to generate a planned throughput time. The second is to facilitate accurate surgery planning and control through the generation of time modules. More than 100,000 data records of the Hannover Medical School were analyzed. The data records contain information about the type of operation conducted, the duration of the individual process steps, and other organization-specific data such as the operating room used. Based on this database, a generally valid model was developed to define a strategy point that takes the conflict between capacity utilization and low overtime into account. Furthermore, time modules were generated, which allow simplified and flexible surgery planning and control for the operating manager. The time modules make it possible to reduce the high average idle time of the operating rooms and to minimize the spread of idle times.
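One plausible reading of the strategy-point idea, sketched here purely under stated assumptions: plan each procedure with a quantile of its historical duration distribution, trading utilization against overtime risk. The durations and the 80% quantile choice below are invented for illustration and are not derived from the Hannover Medical School data.

```python
# Hypothetical "time module": plan a procedure's slot length from a
# quantile (the "strategy point") of historical durations.
from statistics import quantiles

durations_min = {  # illustrative historical durations per procedure (minutes)
    "appendectomy": [45, 50, 55, 60, 62, 70, 75, 80, 90, 110],
    "hip_replacement": [100, 110, 115, 120, 125, 130, 140, 150, 160, 180],
}

def planned_duration(procedure: str, q: float = 0.8) -> float:
    """Plan with the q-quantile: a higher q lowers overtime risk but
    also lowers room utilization (the trade-off named in the abstract)."""
    data = sorted(durations_min[procedure])
    cuts = quantiles(data, n=100)  # 99 percentile cut points
    return cuts[int(q * 100) - 1]

print(planned_duration("appendectomy"))  # 88.0 minutes at the 80% point
```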

Keywords: capacity, operating room, surgery planning and control, utilization

Procedia PDF Downloads 252
44826 Building Information Modelling Implementation in the Lifecycle of Sustainable Buildings

Authors: Scarlet Alejandra Romano, Joni Kareco

Abstract:

The three pillars of sustainability (social, economic, and environmental) are relevant concepts for the Architecture, Engineering, and Construction (AEC) industry because of the increase in international agreements and guidelines related to this topic in recent years. Considering these three pillars, the AEC industry faces important challenges, for instance, to decrease carbon emissions (environmental challenge), design sustainable spaces for people (social challenge), and improve the technology of the field to reduce costs and environmental problems (economic and environmental challenge). One alternative for overcoming these challenges is Building Information Modelling (BIM), because, according to several authors, this technology improves the performance of sustainable buildings across all their lifecycle phases. The main objective of this paper is to explore and analyse the current advantages and disadvantages of BIM implementation in the lifecycle of sustainable buildings, considering the three pillars of sustainability as analysis parameters. The methodology established to achieve this objective is exploratory-descriptive, using the literature review technique. The partial results illustrate that, despite BIM's disadvantages and the lack of information about its social sustainability advantages, this software represents a significant opportunity to improve the three sustainability pillars of sustainable buildings.

Keywords: building information modelling, building lifecycle analysis, sustainability, sustainable buildings

Procedia PDF Downloads 186
44825 A Model to Assist Military Mission Planners in Identifying and Assessing Variables Impacting Food Security

Authors: Lynndee Kemmet

Abstract:

The U.S. military plays an increasing role in supporting political stability efforts, and this includes efforts to prevent the food insecurity that can trigger political and social instability. This paper presents a model that assists military commanders in identifying variables that impact food production and distribution in their areas of operation (AO), in identifying connections between variables and in assessing the impacts of those variables on food production and distribution. Through use of the model, military units can better target their data collection efforts and can categorize and analyze data within the data categorization framework most widely-used by military forces—PMESII-PT (Political, Military, Economic, Infrastructure, Information, Physical Environment and Time). The model provides flexibility of analysis in that commanders can target analysis to be highly focused on a specific PMESII-PT domain or variable or conduct analysis across multiple PMESII-PT domains. The model is also designed to assist commanders in mapping food systems in their AOs and then identifying components of those systems that must be strengthened or protected.

Keywords: food security, food system model, political stability, US Military

Procedia PDF Downloads 196
44824 Design of Effective Decoupling Point in Build-To-Order Systems: Focusing on Trade-Off Relation between Order-To-Delivery Lead Time and Work in Progress

Authors: Zhiyong Li, Hiroshi Katayama

Abstract:

Since the 1990s, e-commerce and internet business have grown steadily around the world, and customers tend to express their demand attributes in terms of specification requirements on parts, components, product structure, etc. This paper deals with designing effective decoupling points for build-to-order (BTO) systems in an e-commerce environment, which can be realized through trade-off analysis between two major criteria: customer order lead time and the value of work in progress. These KPIs are critical for a successful BTO business, namely time-based service effectiveness in coping with customer requirements for the first, and cost effectiveness with risk-averse operations for the second. The approach of this paper consists of an investigation of successful businesses operating under the BTO scheme, development of a manufacturing model of this scheme, quantitative evaluation of the proposed models by calculating the two KPI values under various decoupling point distributions, and discussion of the results obtained from the patterns of decoupling point distribution, where some cases provide Pareto-optimal performance. To extract the relevant trade-off relation between the considered KPIs in the two-dimensional performance space, logic developed in former research work, i.e., Katayama and Fonseca, is applied. The obtained characteristics are evaluated as useful information for managing BTO manufacturing businesses.

Keywords: build-to-order (BTO), decoupling point, e-commerce, order-to-delivery lead time (ODLT), work in progress (WIP)

Procedia PDF Downloads 325
44823 Evaluation of Polymerisation Shrinkage of Randomly Oriented Micro-Sized Fibre Reinforced Dental Composites Using Fibre-Bragg Grating Sensors and Their Correlation with Degree of Conversion

Authors: Sonam Behl, Raju, Ginu Rajan, Paul Farrar, B. Gangadhara Prusty

Abstract:

Reinforcing dental composites with micro-sized fibres can significantly improve the physio-mechanical properties of dental composites. Short fibres can be oriented randomly within dental composites, thus providing quasi-isotropic reinforcing efficiency, unlike unidirectional/bidirectional fibre reinforced composites, which enhance anisotropic properties. Thus, short fibre reinforced dental composites are becoming popular among practitioners. However, despite their popularity, resin-based dental composites are prone to failure on account of shrinkage during photo-polymerisation. The shrinkage in the structure may lead to marginal gap formation, causing secondary caries and thus ultimately inducing failure of the restoration. The traditional methods of evaluating polymerisation shrinkage, using strain gauges, density-based measurements, a dilatometer, or the bonded-disk technique, focus on the average value of volumetric shrinkage. Moreover, the results obtained from traditional methods are sensitive to the specimen geometry. The present research aims to evaluate the real-time shrinkage strain at selected locations in the material with the help of optical fibre Bragg grating (FBG) sensors. Due to their miniature size (diameter 250 µm), FBG sensors can be easily embedded into small samples of dental composites. Furthermore, an FBG array in the system can map the real-time shrinkage strain at different regions of the composite. The evaluation of real-time shrinkage values may help to optimise the physio-mechanical properties of composites. Previously, FBG sensors have successfully measured polymerisation strains of anisotropic (unidirectional or bidirectional) reinforced dental composites. However, very few studies exist to establish the validity of FBG-based sensors for evaluating volumetric shrinkage of composites reinforced with randomly oriented fibres.
The present study aims to fill this research gap and is focussed on establishing the use of FBG-based sensors for evaluating the shrinkage of dental composites reinforced with randomly oriented fibres. Three groups of specimens were prepared by mixing the resin (80% UDMA/20% TEGDMA) with 55% silane-treated BaAlSiO₂ particulate fillers, or by adding 5% micro-sized fibres of diameter 5 µm and length 250/350 µm along with 50% silane-treated BaAlSiO₂ particulate fillers into the resin. For the measurement of polymerisation shrinkage strain, an array of three fibre Bragg grating sensors was embedded at a depth of 1 mm into a circular Teflon mould of diameter 15 mm and depth 2 mm. The results obtained are compared with those of the traditional density-based method for evaluating volumetric shrinkage. The degree of conversion was measured using FTIR spectroscopy (Spotlight 400 FT-IR from PerkinElmer). It is expected that the average polymerisation shrinkage strain values for dental composites reinforced with micro-sized fibres correlate directly with the measured degree of conversion values, implying that greater conversion of C=C double bonds to C-C single bonds also leads to higher shrinkage strain within the composite. Moreover, it could be established that the photonics approach can help assess the shrinkage at any point of interest in the material, suggesting that fibre Bragg grating sensors are a suitable means of measuring real-time polymerisation shrinkage strain for randomly fibre-reinforced dental composites as well.
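The conversion from an FBG reading to strain that this kind of study relies on can be sketched as follows: at constant temperature, strain ≈ Δλ / (λ_B · (1 - p_e)), where p_e is the photo-elastic coefficient. The coefficient value and the example wavelength shift below are typical textbook figures for silica fibre, not the study's measurements.

```python
# Convert a Bragg wavelength shift into strain (temperature held constant).

def strain_from_bragg_shift(delta_lambda_nm: float,
                            lambda_b_nm: float = 1550.0,
                            p_e: float = 0.22) -> float:
    """Return strain (dimensionless) from a Bragg wavelength shift.
    p_e ~ 0.22 is a commonly quoted value for silica fibre."""
    return delta_lambda_nm / (lambda_b_nm * (1.0 - p_e))

# A -1.2 nm shift during curing corresponds to compressive shrinkage strain,
# roughly -990 microstrain with these assumed constants:
eps = strain_from_bragg_shift(-1.2)
print(f"{eps * 1e6:.0f} microstrain")
```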

Keywords: dental composite, glass fibre, polymerisation shrinkage strain, fibre-Bragg grating sensors

Procedia PDF Downloads 154
44822 Determination of Surface Deformations with Global Navigation Satellite System Time Series

Authors: Ibrahim Tiryakioglu, Mehmet Ali Ugur, Caglar Ozkaymak

Abstract:

The development of GNSS technology has led to increasingly widespread and successful applications of GNSS surveys for monitoring crustal movements. However, multi-period GPS survey solutions have not been applied to monitoring vertical surface deformation. This study uses the long-term GNSS time series that are required to determine vertical deformations. In recent years, surface deformations parallel and semi-parallel to the Bolvadin fault have occurred in Western Anatolia. These surface deformations have continued within the Bolvadin settlement area, which is located mostly on alluvial ground. Due to these surface deformations, a number of cracks in buildings in the residential areas and breaks in underground water and sewage systems have been observed. In order to determine the amount of vertical surface deformation, two continuous GNSS stations were established in the region. The stations have been operating since 2015 and 2017, respectively. In this study, GNSS observations from these two stations were processed with the GAMIT/GLOBK (GNSS Analysis at Massachusetts Institute of Technology/GLOBal Kalman filter) software package to create coordinate time series. With the time series analyses, the GNSS stations' behavior models (linear, periodic, etc.), the causes of these behaviors, and the corresponding mathematical models were determined. The results of the time series analysis of the two GNSS stations show approximately 50-80 mm/yr of vertical movement.
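The behavior models mentioned above (a linear trend plus periodic terms) can be sketched as a least-squares fit to a coordinate time series. The subsidence rate, seasonal amplitude, and noise level below are synthetic illustrations, not the Bolvadin station data.

```python
# Fit position = offset + rate*t + annual sine/cosine to a synthetic
# daily vertical time series, in the spirit of GNSS time-series analysis.
import numpy as np

t = np.arange(0, 3, 1 / 365.25)  # 3 years of daily epochs, in years
truth_rate = -65.0               # assumed subsidence rate, mm/yr
u = (2.0 + truth_rate * t + 3.0 * np.sin(2 * np.pi * t)
     + np.random.default_rng(1).normal(0.0, 2.0, t.size))  # vertical (mm)

# Design matrix: [1, t, sin(2*pi*t), cos(2*pi*t)]
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coeffs, *_ = np.linalg.lstsq(A, u, rcond=None)
print(f"estimated vertical rate: {coeffs[1]:.1f} mm/yr")
```

The same design-matrix approach extends naturally to semi-annual terms and offsets at known equipment changes, which dedicated packages handle for real station data.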

Keywords: Bolvadin fault, GAMIT, GNSS time series, surface deformations

Procedia PDF Downloads 165
44821 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo

Abstract:

Conventional methods for soil nutrient mapping are based on laboratory tests of samples that are obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques for spatially interpolating values at unobserved locations from observations at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Network (ANN) scheme was used to predict macronutrient values at unsampled points. ANN has become a popular tool for prediction as it handles certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus, and potassium values in the soil of the study area. A limited number of samples was used in the training, validation, and testing phases of the ANN (pattern recognition structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, the Tanjung Karang rice irrigation project at Selangor, Malaysia, were used. Soil maps were produced by the kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural network predicted values).
For each macronutrient element, three types of maps were generated, with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element, a base map using 236 actual samples and test maps using 118, 59, and 30 actual samples, respectively, were produced by the kriging method. A set of parameters was defined to measure the similarity of the maps generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding test maps produced from a smaller number of real samples. For example, nitrogen maps produced from 118, 59, and 30 real samples have 78%, 62%, and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased the similarity to 87%, 77%, and 71%, respectively. Hence, this method can reduce the number of real samples and substitute ANN-predicted samples to achieve the specified level of accuracy.
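A toy version of the back-propagation feed-forward idea is sketched below: one hidden layer mapping normalized site coordinates to a nutrient value, trained on a synthetic field. The architecture, data, and hyperparameters are illustrative assumptions, not the study's trained network.

```python
# Minimal one-hidden-layer regression network trained with plain batch
# back-propagation; predicts a synthetic "nutrient" surface from (x, y).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 2))          # normalized site coordinates
y = (np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]).reshape(-1, 1)  # synthetic field

W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)             # forward pass
    pred = h @ W2 + b2
    err = pred - y
    dW2 = h.T @ err / len(X); db2 = err.mean(0)      # backward pass
    dh = (err @ W2.T) * (1 - h**2)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

mse = float((err**2).mean())
print(f"training MSE: {mse:.4f}")
```

Once trained on real measured samples, such a network can emit predictions at unsampled points, the "virtual values" that the study then mixes with actual samples for kriging.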

Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping

Procedia PDF Downloads 70
44820 Health Information Technology in Developing Countries: A Structured Literature Review with Reference to the Case of Libya

Authors: Haythem A. Nakkas, Philip J. Scott, Jim S. Briggs

Abstract:

This paper reports a structured literature review of the application of Health Information Technology (HIT) in developing countries, defined as the World Bank categories of Low-income, Lower-middle-income, and Upper-middle-income countries. The aim was to identify and classify the various applications of health information technology in order to assess its current state in developing countries and explore potential areas of research. We offer a specific analysis of the application of HIT in Libya as one of the developing countries. Method: A structured literature review was conducted using the following online databases: IEEE, Science Direct, PubMed, and Google Scholar. Publication dates were set to 2000-2013. For the PubMed search, publications in English, French, and Arabic were specified. Using a content analysis approach, 159 papers were analyzed, and a total of 26 factors affecting the adoption of health information technology were identified. Results: Of the 2681 retrieved articles, 159 met the inclusion criteria and were carefully analyzed and classified. Conclusion: The implementation of health information technology across developing countries is varied. Whilst it was initially expected that financial constraints would severely limit health information technology implementation, some developing countries, such as India, have nevertheless dominated the literature and taken the lead in conducting scientific research. Comparing the number of studies to the number of countries in each category, we found that more studies had been carried out in Low-income and Lower-middle-income countries than in Upper-middle-income countries. However, whilst IT has been used in various sectors of the economy, the healthcare sector in developing countries is still failing to benefit fully from the potential advantages that IT can offer.

Keywords: developing countries, developed countries, factors, failure, health information technology, implementation, Libya, success

Procedia PDF Downloads 361
44819 Sensing of Cancer DNA Using Resonance Frequency

Authors: Sungsoo Na, Chanho Park

Abstract:

Lung cancer is one of the most common severe diseases leading to human death. Lung cancer can be divided into small-cell lung cancer (SCLC) and non-SCLC (NSCLC), and about 80% of lung cancers are NSCLC. Several studies have investigated the correlation between the epidermal growth factor receptor (EGFR) and NSCLCs. Therefore, EGFR inhibitor drugs such as gefitinib and erlotinib have been used as lung cancer treatments. However, these treatments showed low response rates (10-20%) in clinical trials due to EGFR mutations that cause drug resistance. Patients with resistance to EGFR inhibitor drugs are usually positive for KRAS mutations. Therefore, assessment of EGFR and KRAS mutations is essential for targeted therapies of NSCLC patients. In order to overcome the limitation of conventional therapies, overall EGFR and KRAS mutations have to be monitored. In this work, only the detection of EGFR will be presented. A variety of techniques has been presented for the detection of EGFR mutations. The standard detection method for EGFR mutations in ctDNA relies on real-time polymerase chain reaction (PCR). The real-time PCR method provides highly sensitive detection performance; however, as the amplification step increases, cost and complexity increase as well. Other technologies such as BEAMing, next-generation sequencing (NGS), electrochemical sensors, and silicon nanowire field-effect transistors have been presented. However, those technologies have limitations of low sensitivity, high cost, and complexity of data analysis. In this report, we propose a label-free and highly sensitive detection method for lung cancer using a quartz crystal microbalance based platform. The proposed platform is able to sense lung cancer mutant DNA with a limit of detection of 1 nM.

Keywords: cancer DNA, resonance frequency, quartz crystal microbalance, lung cancer

Procedia PDF Downloads 233
44818 Analysis of Matching Pursuit Features of EEG Signal for Mental Tasks Classification

Authors: Zin Mar Lwin

Abstract:

Brain Computer Interface (BCI) systems have been developed for people who suffer from severe motor disabilities and find it challenging to communicate with their environment. BCI allows them to communicate in a non-muscular way. For communication between human and computer, BCI uses a type of signal called the electroencephalogram (EEG) signal, which is recorded from the human brain by means of electrodes. The EEG signal is an important information source about brain processes for the non-invasive BCI. To translate a human's thoughts, the acquired EEG signal needs to be classified accurately. This paper proposes a typical EEG signal classification system which experiments on a dataset from Purdue University. Artifacts caused by eye blinks are removed with the Independent Component Analysis (ICA) method via EEGLAB tools. For feature extraction, the time and frequency features of the non-stationary EEG signals are extracted by the Matching Pursuit (MP) algorithm. The classification of one of five mental tasks is performed by a multi-class Support Vector Machine (SVM). For the SVMs, comparisons have been carried out for both 1-against-1 and 1-against-all methods.
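The abstract's full pipeline cannot be reproduced from its description alone, but the core Matching Pursuit step it relies on can be sketched: greedily project the signal onto the best-matching dictionary atom, subtract, and repeat. The following minimal numpy illustration is hypothetical; the Gabor-like cosine dictionary and the synthetic two-atom signal are invented for the demo and are not taken from the paper.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: decompose `signal` over unit-norm dictionary atoms.

    `dictionary` has shape (n_samples, n_dict_atoms) with unit-norm columns.
    Returns the selected atom indices, their coefficients, and the residual.
    """
    residual = signal.astype(float).copy()
    indices, coeffs = [], []
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual      # inner product with every atom
        k = int(np.argmax(np.abs(correlations)))    # best-matching atom
        c = correlations[k]
        residual = residual - c * dictionary[:, k]  # remove its contribution
        indices.append(k)
        coeffs.append(c)
    return indices, coeffs, residual

# Toy demo: a dictionary of Hann-windowed cosines at 40 frequencies.
n = 256
t = np.arange(n)
freqs = np.linspace(0.01, 0.2, 40)
atoms = np.cos(2 * np.pi * np.outer(t, freqs)) * np.hanning(n)[:, None]
atoms /= np.linalg.norm(atoms, axis=0)              # unit-norm columns

signal = 3.0 * atoms[:, 5] + 1.5 * atoms[:, 20]     # two known components
idx, c, res = matching_pursuit(signal, atoms, n_atoms=2)
```

The selected indices and coefficients (time-frequency atoms and their energies) would then serve as the feature vector handed to the SVM classifier.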

Keywords: BCI, EEG, ICA, SVM

Procedia PDF Downloads 278
44817 Multiscale Entropy Analysis of Electroencephalogram (EEG) of Alcoholic and Control Subjects

Authors: Lal Hussain, Wajid Aziz, Imtiaz Ahmed Awan, Sharjeel Saeed

Abstract:

Multiscale entropy (MSE) analysis is a useful technique recently developed to quantify the dynamics of physiological signals at different time scales. This study is aimed at investigating electroencephalogram (EEG) signals to analyze the background activity of alcoholic and control subjects by inspecting various coarse-grained sequences formed at different time scales. EEG recordings of alcoholic and control subjects were taken from the publicly available machine learning repository of the University of California, Irvine (UCI), acquired using 64 electrodes. The MSE analysis was performed on the EEG data acquired from all the electrodes of alcoholic and control subjects. The Mann-Whitney rank test was used to find significant differences between the groups, and results were considered statistically significant for p-values < 0.05. The area under the receiver operator curve was computed to find the degree of separation between the groups. The mean ranks of MSE values at all time scales for all electrodes were higher for control subjects than for alcoholic subjects. Higher mean ranks represent higher complexity and vice versa. The findings indicated that EEG signals acquired through electrodes C3, C4, F3, F7, F8, O1, O2, P3, and T7 showed significant differences between alcoholic and control subjects at time scales 1 to 5. Moreover, all electrodes exhibited significance at different time scales. Likewise, the highest accuracy and separation were obtained at the central region (C3 and C4) and the front polar regions (P3, O1, F3, F7, F8 and T8), while other electrodes such as Fp1, Fp2, P4 and F4 showed no significant results.
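The two ingredients the abstract names, coarse-graining and entropy at each scale, can be sketched as follows. This is a simplified illustration following the standard MSE definition (non-overlapping averaging plus sample entropy); the parameter choices m = 2 and r = 0.15 times the standard deviation are conventional defaults assumed here, not values taken from the paper.

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (standard MSE coarse-graining)."""
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.15):
    """SampEn(m, r) = -ln(A/B): A and B count template matches of
    length m+1 and m, respectively, within tolerance r (Chebyshev distance)."""
    x = np.asarray(x, dtype=float)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (np.sum(d <= r) - len(templates)) / 2   # exclude self-matches

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=5, m=2):
    x = np.asarray(x, dtype=float)
    r = 0.15 * x.std()   # tolerance fixed from the original series, as is conventional
    return [sample_entropy(coarse_grain(x, s), m=m, r=r)
            for s in range(1, max_scale + 1)]
```

For a real comparison, the resulting per-scale entropy values of the two groups would be fed to the Mann-Whitney rank test exactly as the abstract describes.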

Keywords: electroencephalogram (EEG), multiscale sample entropy (MSE), Mann-Whitney test (MMT), Receiver Operator Curve (ROC), complexity analysis

Procedia PDF Downloads 376
44816 Organisational Disclosure: Threats to Individuals' Privacy

Authors: N. A. Badrul

Abstract:

People are concerned that they are vulnerable as a result of what is exposed about them on the internet. Users are increasingly aware of their privacy and are making various efforts to protect their personal information. However, besides individuals themselves, organisations also expose personal information of their staff to the general public by publishing it on their official websites. This practice may put individuals at risk and leave them particularly vulnerable to threats. This preliminary study explicitly explores the amount and types of personal information disclosed on organisational websites. Threats and risks related to the disclosures are discussed. In general, all the examined organisational websites disclose personal information with varying degrees of identifiability.

Keywords: personal information, privacy, e-government, information disclosure

Procedia PDF Downloads 318
44815 Enhanced Acquisition Time of a Quantum Holography Scheme within a Nonlinear Interferometer

Authors: Sergio Tovar-Pérez, Sebastian Töpfer, Markus Gräfe

Abstract:

The work proposes a technique that decreases the detection acquisition time of quantum holography schemes down to one-third, which opens the possibility of imaging moving objects. Since its invention, quantum holography with undetected photons has gained interest in the scientific community, mainly due to its ability to tailor the detected wavelengths according to the needs of the scheme implementation. While this wavelength flexibility grants the scheme a wide range of possible applications, an important matter had yet to be addressed. Since the scheme uses digital phase-shifting techniques to retrieve the information of the object out of the interference pattern, it is necessary to acquire a set of at least four images of the interference pattern with well-defined phase steps to recover the full object information. Hence, the imaging method requires longer acquisition times to produce well-resolved images. As a consequence, the measurement of moving objects remained out of the reach of the imaging scheme. This work presents the use and implementation of a spatial light modulator along with a digital holographic technique called quasi-parallel phase shifting. This technique uses the spatial light modulator to build a structured phase image consisting of a chessboard pattern containing the different phase steps for digitally calculating the object information. Through the reduction in the number of needed frames, the acquisition time decreases by a significant factor. This technique opens the door to the implementation of the scheme for moving objects. In particular, the application of this scheme to imaging live specimens comes one step closer.
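The idea of multiplexing the four phase steps into one frame via a chessboard pattern can be sketched numerically. The following is a hypothetical illustration, not the authors' implementation: a repeating 2x2 unit cell assigns each pixel one of the four phase steps, and the standard four-step formula is applied to the four extracted sub-images. It implicitly assumes the object phase varies slowly over each 2x2 cell.

```python
import numpy as np

steps = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])  # four phase steps

def chessboard_mask(shape):
    """Per-pixel phase-step index: a repeating 2x2 cell [[0, 1], [2, 3]]."""
    rows, cols = np.indices(shape)
    return (rows % 2) * 2 + (cols % 2)

def simulate_interferogram(phi_obj, a=1.0, b=0.5):
    """One frame whose local phase step is set by the chessboard mask:
    I = a + b*cos(phi_obj + step)."""
    mask = chessboard_mask(phi_obj.shape)
    return a + b * np.cos(phi_obj + steps[mask])

def recover_phase(frame):
    """Split the frame into its four phase-step sub-images and apply the
    four-step formula phi = atan2(I3 - I1, I0 - I2)."""
    mask = chessboard_mask(frame.shape)
    I = [frame[mask == k].reshape(frame.shape[0] // 2, -1) for k in range(4)]
    return np.arctan2(I[3] - I[1], I[0] - I[2])
```

Because all four phase steps are captured in a single exposure, a sequence of such frames yields one phase image per frame instead of one per four frames, which is the acquisition-time gain the abstract refers to.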

Keywords: quasi-parallel phase shifting, quantum imaging, quantum holography, quantum metrology

Procedia PDF Downloads 114
44814 Financial Information Transparency on Investor Behavior in the Private Company in Dusit Area

Authors: Yosapon Kidsuntad

Abstract:

The purpose of this dissertation was to explore the relationship between financial transparency and investor behavior. In carrying out this inquiry, the researcher utilized a questionnaire to collect data. Statistics utilized in this research included frequency, percentage, mean, standard deviation, and multiple regression analysis. The results revealed that there are significant differences in investor perceptions of the different dimensions of financial information transparency. These differences correspond to demographic variables, with the exception of the educational level variable. It was also found that there are relationships between investor perceptions of the dimensions of financial information transparency and investor behavior in private companies in the Dusit area. Finally, the researcher also found that there are differences in investor behavior corresponding to different categories of investor experience.

Keywords: financial information transparency, investor behavior, private company, Dusit Area

Procedia PDF Downloads 331
44813 Trends, Status, and Future Directions of Artificial Intelligence in Human Resources Disciplines: A Bibliometric Analysis

Authors: Gertrude I. Hewapathirana, Loi A. Nguyen, Mohammed M. Mostafa

Abstract:

Artificial intelligence (AI) technologies and tools are swiftly integrating into many functions of all organizations as a competitive drive to enhance innovation, productivity, efficiency, and faster and more precise decision making, to keep up with rapid changes in the global business arena. Despite increasing research on AI technologies in production, manufacturing, and information management, AI in human resource disciplines is still lagging. Though there are a few research studies on HR informatics, recruitment, and HRM in general, how to integrate AI into other HR functional disciplines (e.g., compensation, training, mentoring and coaching, employee motivation) is rarely researched. Many inconsistencies in the research hinder the development of up-to-date knowledge on AI in HR disciplines. Therefore, exploring eight research questions using bibliometric network analysis combined with a meta-analysis of the published research literature, the authors attempt to generate knowledge on the role of AI in improving the efficiency of HR functional disciplines. To advance this knowledge for the benefit of researchers, academics, policymakers, and practitioners, the study highlights the types of AI innovations and outcomes, trends, gaps, themes and topics, fast-moving disciplines, key players, and future directions. AI in HR informatics in high-tech firms is the dominant theme in many research publications. While there is increasing attention from researchers and practitioners, there are many gaps between the promise, the potential, and real AI applications in HR disciplines. This knowledge gap raises many unanswered questions regarding the legal, ethical, and moral aspects of AI in HR disciplines, as well as the potential contributions of AI in HR disciplines, that may guide future research directions. Though the study provides the most current knowledge, it is limited to peer-reviewed empirical, theoretical, and conceptual research publications stored in the WoS database. The implications for theory, practice, and future research are discussed.

Keywords: artificial intelligence, human resources, bibliometric analysis, research directions

Procedia PDF Downloads 97
44812 Fine-Scale Modeling the Influencing Factors of Multi-Time Dimensions of Transit Ridership at Station Level: The Study of Guangzhou City

Authors: Dijiang Lyu, Shaoying Li, Zhangzhi Tan, Zhifeng Wu, Feng Gao

Abstract:

China is currently experiencing some of the most rapid urban rail transit expansion in the world. The purpose of this study is to finely model the factors influencing transit ridership at multiple time dimensions within transit stations' pedestrian catchment areas (PCA) in Guangzhou, China. This study was based on multi-source spatial data, including smart card data, high-spatial-resolution images, points of interest (POIs), real-estate online data, and building height data. Eight multiple linear regression models were created at the station level using the backward stepwise method and a Geographic Information System (GIS). According to the Chinese code for classification of urban land use and planning standards of development land, residential land use was divided into three categories: first-level (e.g., villas), second-level (e.g., communities), and third-level (e.g., urban villages). Finally, it was concluded that: (1) Four factors (a CBD dummy, the number of feeder bus routes, the number of entrances or exits, and the years of station operation) proved to be positively correlated with transit ridership, while the areas of green and water land use were negatively correlated instead. (2) The area of education land use and the second-level and third-level residential land use were found to be highly connected to the average value of morning peak boarding and evening peak alighting ridership, whereas the area of commercial land use and the average height of buildings were significantly positively associated with the average value of morning peak alighting and evening peak boarding ridership. (3) The area of second-level residential land use was rarely correlated with ridership in the other regression models. This is likely because private car ownership is still high in Guangzhou: some residents living in communities around the stations commute by transit at peak times, but others are much more willing to drive their own cars at non-peak times. The area of third-level residential land use, such as urban villages, was highly positively correlated with ridership in all models, indicating that residents who live in third-level residential land use are the main passenger source of the Guangzhou Metro. (4) The diversity of land use was found to have a significant impact on passenger flow on weekends but was unrelated to weekday flow. The findings can be useful for station planning, management, and policymaking.
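The backward stepwise procedure behind the eight models can be sketched in a dependency-free way. Note one substitution: the paper presumably eliminates variables by statistical significance, whereas this sketch drops a predictor whenever doing so improves adjusted R-squared, so the criterion (and the variable names, which are invented) are assumptions for illustration only.

```python
import numpy as np

def fit_r2_adj(X, y):
    """OLS fit with intercept; returns adjusted R-squared."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    n, p = len(y), X.shape[1]
    return 1 - (ss_res / (n - p - 1)) / (ss_tot / (n - 1))

def backward_stepwise(X, y, names):
    """Drop one predictor at a time while doing so improves adjusted R-squared."""
    keep = list(range(X.shape[1]))
    best = fit_r2_adj(X[:, keep], y)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for j in list(keep):
            trial = [k for k in keep if k != j]
            score = fit_r2_adj(X[:, trial], y)
            if score > best:                     # removal helped: accept it
                best, keep, improved = score, trial, True
                break
    return [names[k] for k in keep], best
```

In the study's setting, the columns of X would be the station-level candidate factors (feeder bus routes, entrances, land-use areas, and so on) extracted within each PCA via GIS, and y the ridership for one time dimension.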

Keywords: fine-scale modeling, Guangzhou city, multi-time dimensions, multi-sources spatial data, transit ridership

Procedia PDF Downloads 142
44811 16S rRNA-Based Metagenomic Analysis of Palm Sap Samples from Bangladesh

Authors: Ágota Ábrahám, Md Nurul Islam, Karimane Zeghbib, Gábor Kemenesi, Sazeda Akter

Abstract:

Collecting palm sap as a food source is an everyday practice in some parts of the world. However, the consumption of palm juice has been associated with regular infections and epidemics in parts of Bangladesh. This is attributed to fruit-eating bats and other vertebrates or invertebrates native to the area contaminating the food with their body secretions during the collection process. The frequent intake of palm juice, whether as a processed food product or in its unprocessed form, is a common phenomenon over large areas. The range of pathogens capable of infecting humans as a result of this practice is not yet fully understood. Additionally, the high sugar content of the liquid makes it an ideal culture medium for certain bacteria, which can easily propagate and potentially harm consumers. Rapid diagnostics, especially in remote locations, could mitigate the health risks associated with palm juice consumption. The primary objective of this research is the rapid genomic detection and risk assessment of bacteria that may cause infections in humans through the consumption of palm juice. Utilizing state-of-the-art third-generation Nanopore metagenomic sequencing based on the 16S rRNA gene, we identified bacteria primarily involved in fermentation processes. The swift metagenomic analysis, coupled with the widespread availability and portability of Nanopore products (including real-time analysis options), proves advantageous for detecting harmful pathogens in food sources without relying on extensive industry resources and testing.

Keywords: raw date palm sap, NGS, metabarcoding, food safety

Procedia PDF Downloads 55
44810 Emotion Oriented Students' Opinioned Topic Detection for Course Reviews in Massive Open Online Course

Authors: Zhi Liu, Xian Peng, Monika Domanska, Lingyun Kang, Sannyuya Liu

Abstract:

Massive open online education has become increasingly popular among learners worldwide. An increasing number of course reviews are generated on Massive Open Online Course (MOOC) platforms, which offer an interactive feedback channel for learners to express opinions and feelings about their learning. These reviews typically contain subjective emotion and topic information about the courses. However, it is time-consuming to detect these opinions manually. In this paper, we propose an emotion-oriented topic detection model to automatically detect the students' opinioned aspects in course reviews. The known overall emotion orientation and the emotional words in each review are used to guide the joint probabilistic modeling of emotions and aspects in reviews. Through experiments on real-life review data, it is verified that the course-emotion-aspect distribution can be calculated to capture the most significant opinioned topics in each course unit. The proposed technique helps in conducting intelligent learning analytics for teachers to improve pedagogies and for developers to promote user experiences.

Keywords: Massive Open Online Course (MOOC), course reviews, topic model, emotion recognition, topical aspects

Procedia PDF Downloads 262
44809 Traffic Analysis and Prediction Using Closed-Circuit Television Systems

Authors: Aragorn Joaquin Pineda Dela Cruz

Abstract:

Road traffic congestion is continually deteriorating in Hong Kong. The largest contributing factor is the increase in vehicle fleet size, resulting in higher competition over the utilisation of road space. This study proposes a project that can process closed-circuit television images and videos to provide real-time traffic detection and prediction capabilities. Specifically, it comprises a deep-learning model involving computer vision techniques for video- and image-based vehicle counting, followed by a separate model to detect and predict traffic congestion levels based on that data. State-of-the-art object detection models such as You Only Look Once and Faster Region-based Convolutional Neural Networks are tested and compared on closed-circuit television data from various major roads in Hong Kong. The output is then used to train long short-term memory networks to predict traffic conditions in the near future, in an effort to provide more precise and quicker overviews of current and future traffic conditions relative to current solutions such as navigation apps.
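The deep models named above (YOLO, Faster R-CNN, LSTM) require their respective frameworks, so as a dependency-free illustration, here is only the glue step the pipeline implies: turning the detector's per-interval vehicle counts into supervised training pairs for an LSTM-style forecaster, plus the naive persistence baseline such a forecaster must beat. The counts below are invented for the demo.

```python
import numpy as np

def make_windows(counts, lookback, horizon=1):
    """Turn a 1-D series of per-interval vehicle counts into supervised pairs:
    X[i] holds `lookback` consecutive counts, y[i] the count `horizon` steps later."""
    X, y = [], []
    for i in range(len(counts) - lookback - horizon + 1):
        X.append(counts[i:i + lookback])
        y.append(counts[i + lookback + horizon - 1])
    return np.array(X), np.array(y)

def persistence_forecast(X):
    """Baseline: predict that the next count equals the last observed count."""
    return X[:, -1]

# Hypothetical per-interval vehicle counts from one camera.
counts = np.array([10, 12, 15, 18, 20, 19, 17, 14], dtype=float)
X, y = make_windows(counts, lookback=3)
```

Each row of X would be one input sequence to the recurrent network, with y as the target; comparing its error against `persistence_forecast` gives a quick sanity check on whether the learned model adds value.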

Keywords: intelligent transportation system, vehicle detection, traffic analysis, deep learning, machine learning, computer vision, traffic prediction

Procedia PDF Downloads 102
44808 A Comparative Assessment of Membrane Bioscrubber and Classical Bioscrubber for Biogas Purification

Authors: Ebrahim Tilahun, Erkan Sahinkaya, Bariş Calli̇

Abstract:

Raw biogas is a valuable renewable energy source; however, it usually needs removal of impurities. The presence of hydrogen sulfide (H2S) in biogas has detrimental corrosion effects on cogeneration units. Removal of H2S from biogas can therefore significantly improve biogas quality. In this work, a conventional bioscrubber (CBS) and a dense membrane bioscrubber (DMBS) were comparatively evaluated in terms of H2S removal efficiency (RE), CH4 enrichment, and alkaline consumption at gas residence times ranging from 5 to 20 min. Both bioscrubbers were fed with a synthetic biogas containing H2S (1%), CO2 (39%), and CH4 (60%). The results show that a high H2S RE (98%) was obtained in the DMBS when the gas residence time was 20 min, whereas a slightly lower CO2 RE was observed. In the CBS system, the outlet H2S concentration was always lower than 250 ppmv, and its H2S RE remained higher than 98% regardless of the gas residence time, although high alkaline consumption and frequent absorbent replacement limited its cost-effectiveness. The results also indicate that in the DMBS, when the gas residence time increased to 20 min, the CH4 content in the treated biogas was enriched up to 80%, whereas in the CBS unit the CH4 content of the raw biogas (60%) decreased threefold. The lower CH4 content in the CBS was probably caused by extreme dilution of the biogas with air (N2 and O2). According to the results obtained here, the DMBS system is a robust and effective biotechnology in comparison with the CBS and thus has better potential for real-scale applications.

Keywords: biogas, bioscrubber, desulfurization, PDMS membrane

Procedia PDF Downloads 226
44807 Quantitative Analysis of Orphan Nuclear Receptors in Insulin Resistant C2C12 Skeletal Muscle Cells

Authors: Masocorro Gawned, Stephen Myers, Guat Siew Chew

Abstract:

Nuclear receptors (NRs) are a superfamily of transcription factors that play a major role in lipid and glucose metabolism in skeletal muscle. Recently, pharmacological evidence has supported the view that stimulation of nuclear receptors alleviates Type 2 Diabetes (T2D). The orphan nuclear receptors (ONRs) are members of the NR superfamily whose ligands and physiological functions remain unknown. To date, no systematic studies have been carried out to screen for ONRs expressed in insulin-resistant (IR) skeletal muscle cells. Therefore, in this study, we established a model of IR by treating C2C12 skeletal muscle cells with insulin (10 nM) for 48 hours. Western blot analysis of phosphorylated AKT confirmed IR. Real-time quantitative polymerase chain reaction (qPCR) results highlighted key ONRs, including NUR77 (NR4A1), NURR1 (NR4A2), and NOR1 (NR4A3), which have been associated with the regulation of fatty acid oxidation and glucose homeostasis. Increased mRNA expression levels of the estrogen-related receptors (ERRs), REV-ERBα, NUR77, NURR1, and NOR1 in insulin-resistant C2C12 skeletal muscle cells indicated that these ONRs could play a pivotal regulatory role in insulin secretion and lipid metabolism. Taken together, this study contributes a complete analysis of ONRs in IR and fills an important void in the study and treatment of T2D.
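Quantitative mRNA expression from qPCR is conventionally reported as a fold change via the 2^-ΔΔCt method, which the comparison of treated and control cells above implies. A minimal sketch follows; the Ct values in the demo are invented, not taken from the paper, and the calculation assumes roughly 100% amplification efficiency for both genes.

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative mRNA expression by the 2^-ΔΔCt method.

    Ct values are qPCR cycle thresholds; `ref` is a housekeeping gene used
    for normalization. Fold change > 1 means up-regulation in the treated cells.
    """
    d_ct_treated = ct_target_treated - ct_ref_treated    # normalize treated sample
    d_ct_control = ct_target_control - ct_ref_control    # normalize control sample
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical numbers: a target ONR in insulin-treated vs. control C2C12
# cells, each normalized to a housekeeping gene.
fold = ddct_fold_change(22.0, 18.0, 24.0, 18.0)   # ΔΔCt = 4 - 6 = -2
```

A ΔΔCt of -2 corresponds to a four-fold increase in expression, the kind of result summarized as "increased mRNA expression levels" in the abstract.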

Keywords: type 2 diabetes, orphan nuclear receptors, transcription receptors, quantitative mRNA expression

Procedia PDF Downloads 427
44806 A Study on Characteristics of Runoff Analysis Methods at the Time of Rainfall in Rural Area, Okinawa Prefecture Part 2: A Case of Kohatu River in the South Central Part of Okinawa Prefecture

Authors: Kazuki Kohama, Hiroko Ono

Abstract:

Rainfall in Japan is gradually increasing every year, according to the Japan Meteorological Agency and the Intergovernmental Panel on Climate Change Fifth Assessment Report. This means that the difference in rainfall between the rainy season and the rest of the year is increasing. In addition, an increasing trend of strong rain over short periods clearly appears. In recent years, natural disasters have caused enormous human casualties in various parts of Japan. Regarding water disasters, local heavy rain and floods of large rivers occur frequently, and a policy was adopted to promote both structural and non-structural emergency disaster prevention measures under a vision of rebuilding social awareness of water disaster prevention. Okinawa Prefecture, in a subtropical region, experiences torrential rain and water disasters such as river floods several times a year; these floods occur in specific rivers among all 97 rivers in the prefecture. The limited capacity and narrow width characteristic of rivers in Okinawa easily cause flooding in heavy rain. This study focuses on the Kohatu River, one of these specific rivers. In fact, its water level rises above the river levee almost once a year without damaging the surrounding buildings, but in some cases the water level reaches the ground floor height of houses, which has happened nine times to date. The purpose of this research is to figure out the relationship between precipitation, surface outflow, and the total treated water quantity of the Kohatu River. For this purpose, we perform hydrological analysis, which is complicated and requires detailed data, mainly using Geographic Information System (GIS) software and an outflow analysis system. First, we extract the watershed and divide it into 23 catchment areas to understand how much surface outflow reaches the runoff point in each 10-minute interval. Second, we create a unit hydrograph indicating the surface outflow as a function of flow area and time. This index shows the maximum amount of surface outflow at 2400 to 3000 seconds. Lastly, we compare the value estimated from the unit hydrograph to the measured value; the measured value is usually lower than the estimated value because of evaporation and transpiration. In this study, hydrograph analysis was performed using GIS software and an outflow analysis system. Based on these, we could clarify the flood timing and the amount of surface outflow.
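The unit-hydrograph step described above amounts to a discrete convolution of the effective rainfall sequence with the unit-hydrograph ordinates. The sketch below is a generic illustration of that step; the ordinates (chosen so the peak falls around the 2400-3000 s range the abstract mentions, with 600 s intervals) and the rainfall values are invented, not taken from the study.

```python
import numpy as np

def direct_runoff(rain_excess, unit_hydrograph):
    """Direct-runoff hydrograph at the outlet: discrete convolution of
    effective rainfall per time step with the unit-hydrograph ordinates."""
    return np.convolve(rain_excess, unit_hydrograph)

# Hypothetical unit-hydrograph ordinates at 600 s (10 min) intervals,
# peaking at the 4th-5th ordinate, i.e. roughly 2400-3000 s after the burst.
uh = np.array([0.0, 0.1, 0.3, 0.6, 1.0, 0.7, 0.4, 0.2, 0.1, 0.0])
rain = np.array([2.0, 5.0, 1.0])   # effective rainfall per 10-min step
q = direct_runoff(rain, uh)        # runoff ordinates, one per 10-min step
```

Because convolution conserves volume (the total runoff equals total rainfall times the unit-hydrograph volume), any measured hydrograph falling below this estimate reflects losses such as evaporation and transpiration, which is the discrepancy the study observed.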

Keywords: disaster prevention, water disaster, river flood, GIS software

Procedia PDF Downloads 137