Search results for: Long Short Term Memory

9365 Rapid Start-Up and Efficient Long-Term Nitritation of Low Strength Ammonium Wastewater with a Sequencing Batch Reactor Containing Immobilized Cells

Authors: Hammad Khan, Wookeun Bae

Abstract:

Major concerns regarding nitritation of low-strength ammonium wastewaters include low ammonium loading rates (usually below 0.2 kg/m³-d) and uncertainty about the long-term stability of the process. The purpose of this study was to test a sequencing batch reactor (SBR) filled with cell-immobilized polyethylene glycol (PEG) pellets to see if it could achieve efficient and stable nitritation under various environmental conditions. The SBR was fed with synthetic ammonium wastewater of 30±2 mg-N/L at pH 8±0.05, maintaining a dissolved oxygen (DO) concentration of 1.7±0.2 mg/L and a temperature of 30±1°C. The reactor was easily converted to partial nitrification mode within a month by feeding a relatively high ammonium substrate (~100 mg-N/L) in the beginning. We observed stable nitritation over 300 days with high ammonium loading rates (as high as ~1.1 kg-N/m³-d), nitrite accumulation rates (mostly over 97%) and ammonium removal rates (mostly over 95%). DO was a major limiting substrate when the DO concentration was below ~4 mg/L and the NH4+-N concentration was above 5 mg/L, giving an almost linear increase in the ammonium oxidation rate with increasing bulk DO. Low temperatures mainly affected the reaction rate, which could be compensated for by increasing the pellet volume (i.e., biomass). Our results demonstrated that an SBR filled with small cell-immobilized PEG pellets can achieve very efficient and stable nitritation of a low-strength ammonium wastewater.

Keywords: ammonium loading rate (ALR), cell-immobilization, long-term nitritation, sequencing batch reactor (SBR), sewage treatment

Procedia PDF Downloads 251
9364 Possible Endocrinal and Liver Enzymes Toxicities Associated with Long Term Exposure to Benzene in Saudi Arabia

Authors: Faizah Asiri, Mohammed Fathy, Saeed Alghamdi, Nahlah Ayoub, Faisal Asiri

Abstract:

Background: This study examines the toxic effects of long-term inhalation of benzene on hormones and liver enzymes and various related parameters. The following search terms were used: benzene, hepatotoxic, benzene metabolism, hormones, testosterone, hemotoxic, and prolonged exposure. A systematic strategy was designed to search the literature linking benzene with the various types of intoxication and the diseases relevant to benzene exposure. Evidence suggests that inhaled benzene is eliminated mainly by exhalation, while absorbed benzene is metabolized to phenol and muconic acid, followed by urinary excretion of sulfate and glucuronide conjugates. Materials and Methods: This work was conducted in the Al-Khadra laboratory in Taif in 2020/2021 and aimed to measure some of the possible endocrine and liver toxicities associated with long-term exposure to benzene in Saudi Arabia among station workers, who are considered the category most exposed to gasoline. One hundred ten station workers were included in this study. They were divided into four groups according to the duration of chronic exposure to benzene: Group 1 (control group), Group 2 (exposed for less than 1 year), Group 3 (exposed for 1-5 years), and Group 4 (exposed for more than 5 years). Blood parameters (ALT, FSH, testosterone, and TSH) were compared across the groups. Blood samples were drawn from the participants, and statistical tests were performed. Significance (p≤0.05) was examined relative to the control group. Workers' exposure to benzene led to significant changes in hematological, hormonal, and hepatic factors compared to the control group. Results: The results showed a relationship between long-term exposure to benzene and a decrease in the levels of testosterone and FSH, indicating that benzene poses a toxic risk in the long term (p≤0.05) when compared to the control. We also found no significant correlation between years of exposure and TSH level when compared to the control. Conclusion: We conclude that some hormones and liver enzymes are affected by chronic inhaled doses of benzene in gas station workers, the group most exposed to benzene.

Keywords: toxicities, benzene, hormones, station workers

Procedia PDF Downloads 63
9363 Comparison of Stereotactic Craniotomy for Brain Metastasis, as Compared to Stereotactic Radiosurgery

Authors: Mostafa El Khashab

Abstract:

We report our experience with 50 patients with metastatic tumors located in different regions of the brain, treated by stereotactic-guided craniotomy and total microsurgical resection. Patients ranged in age from 36 to 73 years. There were 28 women and 22 men. Thirty-four patients presented with hemiparesis, six with aphasia, and the remainder with psychological manifestations and memory issues. Gross total resection was accomplished in all cases, with postoperative imaging confirmation of complete removal. Forty patients were subjected to whole-brain irradiation. One patient developed a stroke postoperatively and another had a flap infection. Four patients developed other postoperative but unrelated morbidities, including pneumonia and DVT. No mortality was encountered. We believe that with the assistance of stereotactic localization, metastases in vital regions of the brain can be removed with very low neurologic morbidity and that, in comparison to other modalities, these patients fare better regarding their long-term outcome.

Keywords: stereotactic, craniotomy, radiosurgery, patient

Procedia PDF Downloads 67
9362 Synthesis of Filtering in Stochastic Systems on Continuous-Time Memory Observations in the Presence of Anomalous Noises

Authors: S. Rozhkova, O. Rozhkova, A. Harlova, V. Lasukov

Abstract:

We present the optimal synthesis of a root-mean-square objective filter to estimate the state vector in the case where, within an observation channel with memory, anomalous noises with unknown mathematical expectation are present in addition to the regular noises. The synthesis is carried out for linear continuous-time stochastic systems.
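
As a hedged illustration of the class of models involved, and not the authors' exact formulation, a linear continuous-time system with a memory observation channel contaminated by both regular and anomalous noises can be sketched as follows; the notation (F, H_0, H_1, tau, q, r, theta) is assumed here, not taken from the paper.

```latex
% Illustrative sketch only: state equation and observation channel with memory.
\begin{aligned}
\dot{x}(t) &= F(t)\,x(t) + q(t), \\
z(t)       &= H_{0}(t)\,x(t) + H_{1}(t)\,x(t-\tau) + r(t) + \theta(t),
\end{aligned}
```

where x(t) is the state vector, z(t) is the observation that depends on the delayed state x(t-τ) (the memory), q(t) and r(t) are the regular noises, and θ(t) is the anomalous noise with unknown mathematical expectation; the root-mean-square objective filter chooses the estimate that minimizes the mean-squared estimation error.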

Keywords: mathematical expectation, filtration, anomalous noise, memory

Procedia PDF Downloads 216
9361 Downside Risk Analysis of the Nigerian Stock Market: A Value at Risk Approach

Authors: Godwin Chigozie Okpara

Abstract:

Using standard GARCH, EGARCH, and TARCH models on a day-of-the-week return series (246 days) from the Nigerian stock market, this paper estimates the Value at Risk (VaR) of the model variants. The asymmetric return distribution and fat-tail phenomenon in financial time series were accounted for by estimating the models with normal, Student's t and generalized error distributions. The analysis, based on the Akaike Information Criterion, suggests that the EGARCH model with Student's t innovation distribution furnishes the most accurate estimate of VaR. In light of this, we apply the likelihood ratio test of proportional failure rates to the VaR derived from the EGARCH model in order to determine the short- and long-position VaR performance. The results show that, as alpha ranges from 0.05 to 0.005, the failure rate for short positions significantly exceeds the prescribed quantiles, whereas there is no significant difference between the failure rate and the prescribed quantiles for long positions. This suggests that investors and portfolio managers in the Nigerian stock market can take long trading positions, i.e., buy assets, while paying attention to when asset prices will fall. Precisely, the VaR estimates for the long position range from -4.7% at the 95 percent confidence level to -10.3% at the 99.5 percent confidence level.
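
For readers who want to reproduce the general workflow, the hedged sketch below fits an EGARCH(1,1) model with Student's t innovations using the `arch` package and converts the forecast volatility into a one-day VaR for a long position. The file and column names are hypothetical, and this is an illustration of the standard procedure rather than the authors' code.

```python
import numpy as np
import pandas as pd
from arch import arch_model
from scipy import stats

# Daily index returns in percent; the file and column names are placeholders.
returns = pd.read_csv("ngse_returns.csv", index_col=0, parse_dates=True)["return"]

# EGARCH(1,1) with Student's t innovations, the specification favoured by the AIC above.
model = arch_model(returns, vol="EGARCH", p=1, o=1, q=1, dist="t")
result = model.fit(disp="off")

forecast = result.forecast(horizon=1)
mu = forecast.mean.iloc[-1, 0]
sigma = np.sqrt(forecast.variance.iloc[-1, 0])
nu = result.params["nu"]                      # degrees of freedom of the t distribution

for alpha in (0.05, 0.005):                   # 95% and 99.5% confidence levels
    # Quantile of a standardised (unit-variance) Student's t for the long position.
    q = stats.t.ppf(alpha, nu) * np.sqrt((nu - 2) / nu)
    print(f"1-day VaR (long, {1 - alpha:.1%} confidence): {mu + q * sigma:.2f}%")
```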

Keywords: downside risk, value-at-risk, failure rate, kupiec LR tests, GARCH models

Procedia PDF Downloads 420
9360 Conventional Four Steps Travel Demand Modeling for Kabul New City

Authors: Ahmad Mansoor Stanikzai, Yoshitaka Kajita

Abstract:

This research is essential for the transportation planning of Kabul New City. In this research, the travel demand of the Kabul metropolitan area (the existing city and Kabul New City) is evaluated for three different target years (2015, current; 2025, mid-term; 2040, long-term). The outcome of this study indicates that, though the vehicle volume is currently less than the capacity of the existing road network, Kabul city suffers from daily traffic congestion. This is mainly due to a lack of transportation management, the absence of proper policies, an improper public transportation system and violation of traffic rules and regulations by inhabitants. On the other hand, the observed results indicate that the current vehicle-to-capacity ratio (VCR), the most widely used index to judge traffic status in a city, is around 0.79. This indicates the unfavourable traffic conditions of the city. Moreover, with the growth of population in the mid-term (2025) and long-term (2040), and in the case of no development of the road network and transportation system, the VCR value will dramatically increase to 1.40 (2025) and 2.5 (2040). This can be a critical situation for an urban area from an urban transportation perspective. Thus, by introducing a high-capacity public transportation system, developing the road network in Kabul New City, and integrating these links with the existing city road network, significant improvements were observed in the value of VCR.
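
As a quick illustration of the index discussed above, the vehicle-to-capacity ratio is simply the assigned traffic volume divided by the link (or network) capacity. The sketch below is hedged: the capacity and volume figures are made-up numbers chosen only so that the resulting VCR values match those quoted in the abstract.

```python
def vcr(volume_pcu_per_hr: float, capacity_pcu_per_hr: float) -> float:
    """Vehicle-to-capacity ratio; values approaching or exceeding 1.0 indicate saturation."""
    return volume_pcu_per_hr / capacity_pcu_per_hr

# Hypothetical aggregate capacity of 4000 PCU/h, with volumes scaled to reproduce
# the abstract's VCR figures for 2015 (current), 2025 (mid-term) and 2040 (long-term).
capacity = 4000
for year, volume in [(2015, 3160), (2025, 5600), (2040, 10000)]:
    print(year, round(vcr(volume, capacity), 2))   # 0.79, 1.4, 2.5
```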

Keywords: Afghanistan, Kabul new city, planning, policy, urban transportation

Procedia PDF Downloads 305
9359 Enhanced Test Scheme based on Programmable Write Time for Future Computer Memories

Authors: Nor Zaidi Haron, Fauziyah Salehuddin, Norsuhaidah Arshad, Sani Irwan Salim

Abstract:

Resistive random access memories (RRAMs) are one of the main candidates for future computer memories. However, due to their tiny size and immature device technology, the quality of the outgoing RRAM chips is seen as a serious issue. Defective RRAM cells might behave differently than existing semiconductor memories (dynamic RAM, static RAM, and Flash), meaning that they are difficult to detect using existing test schemes. This paper presents an enhanced test scheme, referred to as Programmable Short Write Time (PSWT), that is able to improve the detection of faulty RRAM cells. It is developed by applying multiple weak write operations, each with a different time duration. The test circuit embedded in the RRAM chip is made programmable in order to supply different weak write times during testing. The RRAM electrical model is described in the Verilog-AMS language and simulated using HSPICE simulation tools. Simulation results show that the proposed test scheme offers better open-resistive fault detection compared to existing test schemes.

Keywords: memory fault, memory test, design-for-testability, resistive random access memory

Procedia PDF Downloads 358
9358 A Panel Cointegration Analysis for Macroeconomic Determinants of International Housing Market

Authors: Mei-Se Chien, Chien-Chiang Lee, Sin-Jie Cai

Abstract:

The main purpose of this paper is to investigate the long-run equilibrium and short-run dynamics of international housing prices when macroeconomic variables change. We apply Pedroni's panel cointegration tests, using an unbalanced panel of 33 countries over the period from 1980Q1 to 2013Q1, to examine the relationships between house prices and macroeconomic variables. Our empirical results of the panel cointegration tests support the existence of cointegration among these macroeconomic variables and house prices. In addition, the panel DOLS results show that a 1% increase in economic activity, long-term interest rates, and construction costs causes house prices to change by 2.16%, -0.04%, and 0.22%, respectively, in the long run. Furthermore, increases in economic activity and construction costs have a stronger impact on house prices in lower-income countries than in higher-income countries. The results lead to the conclusion that, for lower-income countries, policies supporting house price growth can be regarded as supporting economic growth. Finally, in the Americas region, the coefficient on economic activity is the highest, which shows that increasing economic activity causes a faster rise in house prices there than in other regions. There are some special cases in which the coefficients on interest rates are significantly positive, namely in the Americas and Asia regions.

Keywords: house prices, macroeconomic variables, panel cointegration, dynamic OLS

Procedia PDF Downloads 365
9357 Transient and Persistent Efficiency Estimation for Electric Grid Utilities Based on Meta-Frontier: Comparative Analysis of China and Japan

Authors: Bai-Chen Xie, Biao Li

Abstract:

With the deepening of international exchange and investment, the international comparison of power grid firms has become a focus of regulatory authorities. Ignoring the differences in economic environment, resource endowment, technology, and other aspects of different countries or regions may lead to efficiency bias. Based on the meta-frontier model and using data from 2006 to 2020, this paper treats China and Japan as two separate groups. While preserving the differences between the two countries, it analyzes and compares the efficiency of their transmission and distribution industries. Combined with the four-component stochastic frontier model, efficiency is decomposed into transient and persistent efficiency. We found that there are obvious differences between the transmission and distribution sectors in China and Japan. On the one hand, the inefficiency of the two countries is mostly caused by long-term and structural problems, so the key to improving efficiency in both countries is to focus on solving these long-term and structural problems. On the other hand, the long-term and structural problems that cause the inefficiency of the two countries are not the same. Quality factors have different effects on the efficiency of the two countries, and this differing effect is captured by the common frontier model but is offset in the overall model. Based on these findings, this paper proposes some targeted policy recommendations.
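
For context, a hedged sketch of a standard four-component stochastic frontier specification underlying such a decomposition is given below; the notation is generic and assumed, not the paper's exact model.

```latex
% Generic four-component panel stochastic frontier (log form).
y_{it} = \alpha + \mathbf{x}_{it}'\boldsymbol{\beta} + \mu_i - \eta_i + v_{it} - u_{it}
```

where μ_i is a random firm effect, η_i ≥ 0 is the persistent (long-term, structural) inefficiency, v_it is random noise, and u_it ≥ 0 is the transient (short-term) inefficiency; persistent and transient efficiency are then exp(−η_i) and exp(−u_it), and the meta-frontier compares each group's frontier against a common envelope across the two countries.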

Keywords: transmission and distribution industries, transient efficiency, persistent efficiency, meta-frontier, international comparison

Procedia PDF Downloads 76
9356 Study of Deflection at Junction in the Precast on Cyclic Loading

Authors: Jongho Park, Ui-Cheol Shin, Jinwoong Choi, Sungnam Hong, Sun-Kyu Park

Abstract:

While the numerous structures built during industrialization are aging, maintenance efforts are being concentrated in many countries. However, traffic jams, environmental damage, and enormous maintenance costs have become problems. To solve this, the modular bridge has been studied. This bridge is a structure assembled from standard precast members. Through this, replacement of existing bridges and the advantage of easy maintenance can be achieved. However, the reliability of the long-term behavior is insufficient because of the junctions between modular precast members. Therefore, in this research, a cyclic loading experiment was performed on the junction, and the deflection of the modular slab connection under long-term service was analyzed. The deflection of the modular slab with a junction was mostly generated during the initial and final stages of the test.

Keywords: modular bridge, deflection, cyclic loading, junction

Procedia PDF Downloads 489
9355 A Case Report on Cognitive-Communication Intervention in Traumatic Brain Injury

Authors: Nikitha Francis, Anjana Hoode, Vinitha George, Jayashree S. Bhat

Abstract:

The interaction between cognition and language, referred to as cognitive-communication, is very intricate, involving several mental processes such as perception, memory, attention, lexical retrieval, decision making, motor planning, self-monitoring and knowledge. Cognitive-communication disorders are difficulties in communicative competencies that result from underlying cognitive impairments of attention, memory, organization, information processing, problem solving, and executive functions. Traumatic brain injury (TBI) is an acquired, non-progressive condition, resulting in distinct deficits of cognitive-communication abilities such as naming, word-finding, self-monitoring, auditory recognition, attention, perception and memory. Cognitive-communication intervention in TBI is individualized in order to enhance the person's ability to process and interpret information for better functioning in their family and community life. The present case report illustrates the cognitive-communicative behaviors and the intervention outcomes of an adult with TBI, who was brought to the Department of Audiology and Speech Language Pathology with cognitive and communicative disturbances consequent to a road traffic accident. On detailed assessment, she showed naming deficits along with perseverations and had severe difficulty in recalling the details of the accident, her house address, places she had visited earlier, names of people known to her, as well as the activities she did each day, leading to severe breakdowns in her communicative abilities. She had difficulty in initiating, maintaining and following a conversation. She also lacked orientation to time and place. On administration of the Manipal Manual of Cognitive Linguistic Abilities (MMCLA), she exhibited poor performance on tasks related to visual and auditory perception, short-term memory, working memory and executive functions. She attended 20 sessions of cognitive-communication intervention, which followed a domain-general, adaptive training paradigm with tasks relevant to everyday cognitive-communication skills. Compensatory strategies, such as maintaining a diary with reminders of her daily routine, names of people, date, time and place, were also recommended. The MMCLA was re-administered, and her performance on the tasks showed significant improvements. The occurrence of perseverations and word-retrieval difficulties reduced. She developed interest in initiating her day-to-day activities at home independently, as well as in involving herself in conversations with her family members. Though she lacked awareness of her deficits, she actively involved herself in all the therapy activities. Rehabilitation of patients with moderate to severe head injury can be done effectively through holistic cognitive retraining with a focus on different cognitive-linguistic domains. Selection of goals and activities should have relevance to the functional needs of each individual with TBI, as highlighted in the present case report.

Keywords: cognitive-communication, executive functions, memory, traumatic brain injury

Procedia PDF Downloads 323
9354 Long-Term Effects of Psychosocial Interventions for Adolescents on Depression and Anxiety: A Systematic Review and Meta-Analysis

Authors: Denis Duagi, Ben Carter, Maria Farrelly, Stephen Lisk, June S. L. Brown

Abstract:

Background: Adolescence represents a distinctive phase of development, and variables linked to this developmental period could affect the efficiency of prevention and treatment for depression and anxiety, as well as the long-term prognosis. The objectives of this study were to investigate the long-term effectiveness of psychosocial interventions for adolescents on depression and anxiety symptoms and to assess the influence of different intervention parameters on the long-term effects. Methods: Searches were carried out on 11 August 2022 using five databases (Cochrane Library, Embase, Medline, PsycINFO, Web of Science), as well as trial registers. Randomized controlled trials of psychosocial interventions targeting adolescents specifically were included if they assessed outcomes at 1 year post-intervention or more. The Cochrane Risk of Bias 2 quality assessment tool was used. The primary outcome was depression, and studies were pooled using a standardised mean difference (SMD) with an associated 95% confidence interval, p-value, and I². The study protocol was pre-registered (CRD42022348668). Findings: A total of 57 reports (n=46,678 participants) were included in the review. Psychosocial interventions led to small reductions in depressive symptoms, with an SMD at 1 year of -0.08 (95% CI -0.20, -0.03; p=0.002; I²=72%), at 18 months of -0.12 (95% CI -0.22, -0.01; p=0.03; I²=63%) and at 2 years of -0.12 (95% CI -0.20, -0.03; p=0.01; I²=68%). Sub-group analyses indicated that targeted interventions produced stronger effects, particularly when delivered by trained mental health professionals (k=18, SMD=-0.24, 95% CI -0.38, -0.10, p=0.001, I²=60%). No effects were detected for anxiety at any assessment point. Conclusion: Psychosocial interventions specifically targeting adolescents were shown to have small but positive effects on depression symptoms, but not anxiety symptoms, which were sustained for up to 2 years. These findings highlight the potential population-level preventive effects if such psychosocial interventions become widely implemented in accessible settings such as schools.
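
For readers unfamiliar with the effect measure, the generic formula for a standardised mean difference is recalled below as a hedged reference; it is the textbook definition, not necessarily the review's exact estimator, and the symbols (n₁, n₂, s₁, s₂) are standard notation rather than values from the paper.

```latex
\mathrm{SMD} = \frac{\bar{x}_{\mathrm{intervention}} - \bar{x}_{\mathrm{control}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```

Negative values indicate lower depression scores in the intervention arm; study-level SMDs are then pooled with inverse-variance weights in a random-effects model, and I² quantifies the share of variability attributable to between-study heterogeneity.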

Keywords: psychosocial, adolescent, interventions, depression, anxiety, meta-analysis, randomized controlled trial

Procedia PDF Downloads 45
9353 Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments

Authors: Tahani Aljohani, Jialin Yu, Alexandra I. Cristea

Abstract:

The more an educational system knows about a learner, the more personalised interaction it can provide, which leads to better learning. However, asking a learner directly is potentially disruptive, and often ignored by learners. Especially in the booming realm of Massive Open Online Course (MOOC) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners' demographic characteristics by proposing an approach using linguistically motivated deep learning architectures for learner profiling, particularly targeting gender prediction on the FutureLearn MOOC platform. Additionally, we tackle here the difficult problem of predicting the gender of learners based on their comments only – which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, considering sentences as sequences. However, human language also has structure. In this research, rather than considering sentences as plain sequences, we hypothesise that higher-level semantic and syntactic sentence processing based on linguistics will render a richer representation. We thus evaluate the traditional LSTM against other bleeding-edge models that take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN) and the Structure-Aware Tag Augmented model (SATA). Additionally, we explore using different word-level encoding functions. We have implemented these methods on our MOOC dataset, on which they perform best, as well as on a public sentiment analysis dataset that is further used to cross-examine the models' results.
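
As a point of reference for the sequence baseline in this comparison, a minimal sketch of an LSTM comment classifier for binary gender prediction is shown below. It is a hedged illustration in Keras, not the authors' implementation: the vocabulary size, sequence length, hyperparameters and the `comments`/`labels` variables are assumptions, and the tree-structured LSTM, SPINN and SATA variants are not reproduced.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN = 20_000, 200            # assumed values, not from the paper

# comments: list[str] of learner comments; labels: 0/1 gender labels (hypothetical names)
vectorizer = layers.TextVectorization(max_tokens=VOCAB_SIZE,
                                      output_sequence_length=MAX_LEN)
vectorizer.adapt(tf.constant(comments))

model = tf.keras.Sequential([
    vectorizer,                              # raw text -> integer sequence
    layers.Embedding(VOCAB_SIZE, 128, mask_zero=True),
    layers.LSTM(64),                         # sentence treated as a plain sequence
    layers.Dense(1, activation="sigmoid"),   # probability of one gender class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant(comments), tf.constant(labels), epochs=5, validation_split=0.1)
```

The syntax-aware models in the comparison replace the plain `LSTM(64)` encoder with structure-sensitive encoders while keeping the same classification head.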

Keywords: deep learning, data mining, gender prediction, MOOCs

Procedia PDF Downloads 117
9352 Short Term Tests on Performance Evaluation of Water-Washed and Dry-Washed Biodiesel from Used Cooking Oil

Authors: Shumani Ramuhaheli, Christopher C. Enweremadu, Hilary L. Rutto

Abstract:

In this study, biodiesel from used cooking oil was produced and purified either by washing with water (water wash) or with Amberlite (dry wash). The work presents the results of short-term tests on the performance characteristics of a diesel engine using both biodiesel fuel samples. In this investigation, the water-wash biodiesel, the dry-wash biodiesel and diesel were compared for performance using a four-cylinder diesel engine. The torque, brake power, specific fuel consumption and brake thermal efficiency were analyzed. The tests showed that, in all cases, dry-wash biodiesel performed marginally poorer than water-wash biodiesel. Except for brake thermal efficiency, diesel fuel had better engine performance characteristics than the biodiesel fuel samples. According to these results, dry washing of biodiesel has a marginal effect on engine performance.
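
For reference, the performance quantities compared above are conventionally related as follows; these are standard textbook definitions, not formulas reported in this study.

```latex
\mathrm{BSFC} = \frac{\dot{m}_{\mathrm{fuel}}}{P_{\mathrm{brake}}},
\qquad
\eta_{\mathrm{bt}} = \frac{P_{\mathrm{brake}}}{\dot{m}_{\mathrm{fuel}} \cdot \mathrm{LHV}}
= \frac{1}{\mathrm{BSFC} \cdot \mathrm{LHV}}
```

where ṁ_fuel is the fuel mass flow rate, P_brake the brake power, and LHV the lower heating value of the fuel; biodiesel's lower heating value raises its specific fuel consumption, which is one reason its brake thermal efficiency can still compare favourably with diesel.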

Keywords: biodiesel, engine performance, used cooking oil, water wash, dry wash

Procedia PDF Downloads 336
9351 Enhanced Disk-Based Databases towards Improved Hybrid in-Memory Systems

Authors: Samuel Kaspi, Sitalakshmi Venkatraman

Abstract:

In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks and distributed transaction costs, disk-based data stores still serve as the primary persistence layer. In addition, with the recent growth in multi-tenancy cloud applications and associated security concerns, many organisations consider the trade-offs and continue to require the fast and reliable transaction processing of disk-based database systems as an available choice. For these organisations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, which would help improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes a novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments conducted clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. The promising results of this work show that enhanced disk-based systems facilitate improvements in hybrid data management within the broader context of in-memory systems.

Keywords: in-memory database, disk-based system, hybrid database, concurrency control

Procedia PDF Downloads 393
9350 Mechanism to Optimize Landing Distance in Order to Minimize Tyre Wear during Braking

Authors: H. V. H. De Soysa, N. D. Hiripitiya, H. S. U. Thrimavithana, B. R. Epitawala, K. A. D. D. Kuruppu, D. J. K. Lokupathirage

Abstract:

This research was based on developing a mechanism to optimize the landing distance. Both short-distance braking and long-distance braking may cause several issues for the aircraft, including tyre wear, with the worst case occurring with short-distance landing. The issues related to short-distance landing were identified after conducting interviews with pilots, aeronautical engineers and technicians. A model was constructed in order to optimize the landing distance. The device starts to function at the point where the main wheels of the aircraft touch down on the runway. It was found that fitting this device to the aircraft helps to optimize the landing distance. This could lead to rectifying several issues that occur due to improper braking distances.

Keywords: aircraft, mechanism, optimize landing distance, runway

Procedia PDF Downloads 293
9349 An Ensemble Deep Learning Architecture for Imbalanced Classification of Thoracic Surgery Patients

Authors: Saba Ebrahimi, Saeed Ahmadian, Hedie Ashrafi

Abstract:

Selecting appropriate patients for surgery is one of the main issues in thoracic surgery (TS). Both the short-term and long-term risks and benefits of surgery must be considered in the patient selection criteria. There are some limitations in the existing datasets of TS patients because of missing attribute values and the imbalanced distribution of survival classes. In this study, a novel ensemble architecture of deep learning networks is proposed based on stacking different linear and non-linear layers to deal with imbalanced datasets. The categorical and numerical features are split across different layers with the ability to shrink the unnecessary features. Then, after extracting the insight from the raw features, a novel biased-kernel layer is applied to reinforce the gradient of the minority class and cause the network to be trained better compared to current methods. Finally, the performance and advantages of our proposed model over existing models are examined for predicting patient survival after thoracic surgery using real-life clinical data for lung cancer patients.
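
The biased-kernel layer is specific to this paper, but the general idea of reinforcing the gradient of the minority class can be approximated with class weighting, as in the hedged Keras sketch below. The split into numeric and categorical branches, the layer sizes, the class weights, and the `x_numeric`/`x_categorical`/`y` arrays are all assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_NUMERIC, NUM_CATEG = 3, 13          # assumed split of the feature set

num_in = layers.Input(shape=(NUM_NUMERIC,), name="numeric")
cat_in = layers.Input(shape=(NUM_CATEG,), name="categorical")   # assumed one-hot encoded

# Separate branches for the two feature types, able to shrink unnecessary features.
num_branch = layers.Dense(8, activation="relu")(num_in)
cat_branch = layers.Dense(16, activation="relu")(cat_in)
merged = layers.concatenate([num_branch, cat_branch])
hidden = layers.Dense(16, activation="relu")(merged)
output = layers.Dense(1, activation="sigmoid", name="survival")(hidden)

model = tf.keras.Model([num_in, cat_in], output)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Class weights stand in for the paper's biased-kernel layer: the minority class
# (here assumed to be label 1) contributes proportionally more to the gradient.
model.fit([x_numeric, x_categorical], y, epochs=30,
          class_weight={0: 1.0, 1: 5.0})   # weights are illustrative only
```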

Keywords: deep learning, ensemble models, imbalanced classification, lung cancer, TS patient selection

Procedia PDF Downloads 116
9348 Ternary Content Addressable Memory Cell with a Leakage Reduction Technique

Authors: Gagnesh Kumar, Nitin Gupta

Abstract:

Ternary content addressable memory (TCAM) cells are mainly popular in network routers for packet forwarding and packet classification, but they are also useful in a variety of other applications that require high-speed table look-up. The main TCAM design challenge is to decrease the power consumption associated with the large amount of parallel active circuitry, without compromising speed or memory density. Furthermore, as the channel length decreases, leakage power becomes more significant, and it can even dominate dynamic power at lower technology nodes. In this paper, we propose a TCAM design technique, called the Virtual Power Supply technique, that reduces the leakage by a substantial amount.

Keywords: match line (ML), search line (SL), ternary content addressable memory (TCAM), Leakage power (LP)

Procedia PDF Downloads 273
9347 Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach

Authors: Gong Zhilin, Jing Yang, Jian Yin

Abstract:

The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The proposed model encompasses five major phases: pre-processing, imbalanced-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) are pre-processed to enhance their quality through the alleviation of missing data, noisy data and null values. The pre-processed data are class-imbalanced in nature, and therefore they are handled effectively with the K-means clustering-based SMOTE model. From the class-balanced data, the most relevant features are extracted, such as improved Principal Component Analysis (PCA) features, statistical features (mean, median, standard deviation) and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal features are selected with the Self-improved Arithmetic Optimization Algorithm (SI-AOA). This SI-AOA model is a conceptual improvement of the standard Arithmetic Optimization Algorithm. The deep learning models used are Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and an optimized Quantum Deep Neural Network (QDNN). The LSTM and CNN are trained with the extracted optimal features. The outcomes from the LSTM and CNN enter as input to the optimized QDNN, which provides the final detection outcome. Since the QDNN is the ultimate detector, its weight function is fine-tuned with the Self-improved Arithmetic Optimization Algorithm (SI-AOA).
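
A hedged sketch of the class-balancing and feature-extraction front end is given below using scikit-learn and imbalanced-learn; the file name, column name and parameter values are assumptions, and the SI-AOA feature selector and the LSTM/CNN/QDNN ensemble are not reproduced here.

```python
import numpy as np
import pandas as pd
from imblearn.over_sampling import KMeansSMOTE
from scipy.stats import skew, kurtosis
from sklearn.decomposition import PCA

df = pd.read_csv("transactions.csv")            # hypothetical pre-processed dataset
X = df.drop(columns=["is_fraud"]).to_numpy()
y = df["is_fraud"].to_numpy()

# K-means clustering-based SMOTE to balance the fraud / legitimate classes.
X_bal, y_bal = KMeansSMOTE(random_state=0).fit_resample(X, y)

# Feature extraction: PCA components plus statistical and higher-order moments.
pca_feats = PCA(n_components=10).fit_transform(X_bal)
stats_feats = np.column_stack([
    X_bal.mean(axis=1), np.median(X_bal, axis=1), X_bal.std(axis=1),
    skew(X_bal, axis=1), kurtosis(X_bal, axis=1),
])
features = np.hstack([pca_feats, stats_feats])  # input to feature selection and the detectors
print(features.shape)
```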

Keywords: credit card, data mining, fraud detection, money transactions

Procedia PDF Downloads 104
9346 The Impacts of Export in Stimulating Economic Growth in Ethiopia: ARDL Model Analysis

Authors: Natnael Debalklie Teshome

Abstract:

The purpose of the study was to empirically investigate the impacts of export performance and its volatility on economic growth in the Ethiopian economy. To do so, time-series data for the sample period 1974/75 – 2017/18 were collected from the databases and annual reports of the IMF, WB, NBE, MoFED, UNCTAD, and EEA. The extended Cobb-Douglas production function of the neoclassical growth model, framed under endogenous growth theory, was used to consider both the performance and instability aspects of exports. First, unit root tests were conducted using the ADF and PP tests, and the data were found to be stationary with a mix of I(0) and I(1). Then, the bounds test and Wald test were employed, and the results showed that long-run co-integration exists among the study variables. All the diagnostic test results also reveal that the model fulfills the criteria of a best-fitted model. Therefore, the ARDL model and VECM were applied to estimate the long-run and short-run parameters, while the Granger causality test was used to test causality between the study variables. The empirical findings of the study reveal that only exports and the coefficient of variation had significant positive and negative impacts on RGDP in the long run, respectively, while other variables were found to have an insignificant impact on the economic growth of Ethiopia. In the short run, except for gross capital formation and the coefficient of variation, which have a highly significant positive impact, all other variables have a strongly significant negative impact on RGDP. This shows that exports had a strong, significant impact in both the short-run and long-run periods. However, their positive and statistically significant impact is observed only in the long run. Similarly, there was highly significant export fluctuation in both periods, while significant commodity concentration (CCI) was observed only in the short run. Moreover, the Granger causality test reveals that unidirectional causality runs from export performance to RGDP in the long run and from both exports and RGDP to CCI in the short run. Therefore, the export-led growth strategy should be sustained and strengthened. In addition, boosting the industrial sector is vital to bring about structural transformation. Hence, the government should provide various incentive schemes and supportive measures to exporters to extract the spillover effects of exports. Greater emphasis should also be given to price-oriented diversification and specialization in major primary products in which the country has a comparative advantage, in order to reduce value-based instability in the export earnings of the country. The government should also strive to increase capital formation and human capital development by enhancing investment in technology and the quality of education to accelerate the economic growth of the country.
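
For readers who wish to reproduce the bounds-testing workflow, a minimal sketch with statsmodels is shown below; the file name, variable names, lag orders and the bounds-test case are assumptions, not the study's actual specification.

```python
import pandas as pd
from statsmodels.tsa.ardl import UECM, ardl_select_order

data = pd.read_csv("ethiopia_macro.csv", index_col="year")   # hypothetical file

# Select lag orders for an ARDL of real GDP on exports and control variables.
sel = ardl_select_order(
    data["rgdp"], maxlag=2,
    exog=data[["export", "gcf", "cci", "cov"]], maxorder=2,
    trend="c", ic="aic",
)
ardl_res = sel.model.fit()
print(ardl_res.summary())          # short-run dynamics and long-run levels relationship

# The unrestricted error-correction form of the same ARDL gives the bounds test
# for long-run co-integration reported in the study.
uecm_res = UECM.from_ardl(sel.model).fit()
print(uecm_res.bounds_test(case=3))
```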

Keywords: export, economic growth, export diversification, instability, co-integration, granger causality, Ethiopian economy

Procedia PDF Downloads 42
9345 Work Ability Program Produces Short-Term Productivity Improvements

Authors: Jukka Surakka, Risto Tuominen, Jukka Piippo

Abstract:

The aim of this work was to study the development of sick leave and presenteeism during a work ability program. Productivity losses were determined for 70 employees from four organizations and for 42 controls. The number of sick leave days (SLD) was collected from employers' records for the three months before the program started and for each subsequent three-month period during one year after its initiation. Presenteeism was determined for four weeks before the program and after one year of implementation. In the first three months of implementation, SLD decreased among project members by 55% and increased by 27% among controls (p<0.001). However, during the last two measurement periods, the project subjects had more SLD than they had before the program started (p<0.001), and also more than the controls (p<0.001). Overall, during the one-year implementation, the program subjects had on average a 23% increase in SLD, whereas the controls had a 35% decrease in their SLD (p<0.001). Program participants experienced 3.6 hours more presenteeism per month after the one-year implementation, and among the controls presenteeism increased by 2.5 hours. The work ability program produced short-term productivity benefits, but with longer program duration the benefits disappeared.

Keywords: work ability, absenteeism, presenteeism, productivity, sick leave

Procedia PDF Downloads 269
9344 Novel Liposomal Nanocarriers For Long-term Tumor Imaging

Authors: Mohamad Ahrari, Kayvan Sadri, Mahmoud Reza Jafari

Abstract:

PEGylated liposomes have a smaller volume of distribution and decreased clearance; consequently, owing to their prolonged presence in the bloodstream and their maintained stability during this period, these liposomes can be applied for imaging tumoral sites. The purpose of this study is to develop an appropriate radiopharmaceutical agent for long-term imaging for the improved diagnosis and evaluation of tumors. In this study, liposomal formulations encapsulating albumin were synthesized by the solvent evaporation method with homogenization, and their characteristics were assessed. These liposomes were then labeled by the Philips method, and the stability of the labeled liposomes in serum and, ultimately, their biodistribution and gamma scintigraphy in C26 colon carcinoma tumor-bearing mice were studied. The characterization results showed that the liposomes are capable of accumulating in tumor sites based on the EPR phenomenon. These liposomes also have high stability, maintaining the encapsulated albumin over a long time. In the biodistribution study in mice, the liposomes accumulated mostly in the kidney, liver, spleen, and tumor sites and, even after clearance of the formulations from the bloodstream, remained at high levels in these organs for up to 96 hours. In gamma scintigraphy as well, organs with high activity accumulation were visible as hot spots from the early hours up to 96 hours. We conclude that a PEGylated liposomal formulation encapsulating albumin can be labeled with 111In-oxine to obtain a stable formulation for long-term imaging, which provides more favorable conditions for the evaluation of tumors and will enable earlier diagnosis of tumors.

Keywords: nano liposome, 111In-oxine, imaging, biodistribution, tumor

Procedia PDF Downloads 87
9343 Reducing Flood Risk in a Megacity: Using Mobile Application and Value Capture for Flood Risk Prevention and Risk Reduction Financing

Authors: Dedjo Yao Simon, Takahiro Saito, Norikazu Inuzuka, Ikuo Sugiyama

Abstract:

The megacity of Abidjan is a coastal urban area where the number of reported floods and the associated impacts are increasing rapidly due to climate change, uncontrolled urbanization, rapid population increase, a lack of flood disaster mitigation and low citizen awareness. The objective of this research is to reduce, in the short and long term, the human and socio-economic impacts of flooding. Hydrological simulation is applied to free-of-charge global spatial data (digital elevation model, satellite-based rainfall estimates, land use) to identify flood-prone areas and to map the flood risk. Direct interviews with a sample of residents are used to validate the simulation results. A mobile application (Flood Locator) is then prototyped to disseminate the risk information to citizens. In addition, a value capture strategy is proposed to mobilize financial resources for a disaster risk reduction fund (DRRf) to reduce the impact of floods. The town of Cocody in Abidjan is selected as a case study area to implement this research. The mapping of the flood risk reveals that the population living in the study area is highly vulnerable. For a 5-year flood, more than 60% of the floodplain is affected by a water depth of at least 0.5 meters, and more than 1000 ha with at least 5000 buildings are directly exposed. The risk becomes higher for 50- and 100-year floods. The interviews also reveal that the majority of citizens are not aware of the risk and severity of flooding in their community. This shortage of information is overcome by Flood Locator and by an urban flood database we prototyped to accumulate flood data. The Flood Locator app allows users to view floodplain extent and depth on a digital map; the user can activate the GPS sensor of the mobile to visualize his or her location on the map. Additional important features allow citizen users to capture flood event and damage information that they can send remotely to the database. Furthermore, the disclosure of the risk information could result in a decrease (-14%) in the value of properties located inside the floodplain and an increase (+19%) in the value of properties in the suburb area. The tax increment arising from the higher property values in the safer area should be captured to constitute the DRRf. The fund should be allocated to the reduction of flood risk for the benefit of people living in flood-prone areas. The flood prevention system discussed in this research will minimize, in the short and long term, the direct damages in the risky area through effective citizen awareness and the availability of the DRRf. It will also contribute to the growth of the urban area in the safer zone and reduce human settlement in the risky area in the long term. Data accumulated in the urban flood database through the warning app will contribute to regenerating Abidjan into a more resilient city by means of risk-avoiding land use in the master plan.

Keywords: Abidjan, database, flood, geospatial techniques, risk communication, smartphone, value capture

Procedia PDF Downloads 259
9342 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people's lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as they do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. Due to these challenges, IoT applications are limited. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site survey required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
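
As a hedged illustration of the fingerprint feature-extraction step only (not the S-DCGAN radio map construction), the sketch below embeds hybrid WLAN/LTE received-signal-strength fingerprints with t-SNE using scikit-learn; the array shape and parameter values are assumptions, and synthetic data stand in for real measurements.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

# fingerprints: (n_reference_points, n_WLAN_APs + n_LTE_cells) RSS matrix in dBm.
rng = np.random.default_rng(0)
fingerprints = rng.uniform(-100, -40, size=(500, 60))   # synthetic stand-in data

scaled = StandardScaler().fit_transform(fingerprints)

# t-SNE gives a low-dimensional, noise-reduced representation of each fingerprint,
# which can then feed the localization model.
embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(scaled)
print(embedded.shape)   # (500, 2)
```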

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 16
9341 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption and GDP for Israel: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of CO2 emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, electricity), carbon dioxide (CO2) emissions and gross domestic product (GDP) for Israel, using time series analysis for the years 1980-2010. To investigate the relationships between the variables, this paper employs the Phillips-Perron (PP) test for stationarity, the Johansen maximum likelihood method for cointegration and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables. The long-run equilibrium in the VECM suggests significant positive impacts of coal and natural gas consumption on GDP in Israel. In the short run, GDP positively affects coal consumption. While there exists a positive unidirectional causality running from coal consumption to the consumption of petroleum products and the direct combustion of crude oil, there exists a negative unidirectional causality running from natural gas consumption to the consumption of petroleum products and the direct combustion of crude oil in the short run. Overall, the results support arguments that there are relationships among environmental quality, energy use and economic output, but the associations can differ by the source of energy in the case of Israel over the period 1980-2010.
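
A hedged sketch of the estimation steps (Johansen cointegration followed by a VECM) with statsmodels is shown below; the file name, column names, lag order and deterministic terms are assumptions, not the paper's exact specification.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

# Annual series 1980-2010: GDP, CO2 and disaggregated energy use (hypothetical file).
data = pd.read_csv("israel_1980_2010.csv", index_col="year")
cols = ["gdp", "co2", "oil", "coal", "gas", "electricity"]

# Johansen maximum-likelihood cointegration test (constant term, one lagged difference).
jres = coint_johansen(data[cols], det_order=0, k_ar_diff=1)
print(jres.lr1)   # trace statistics
print(jres.cvt)   # 90/95/99% critical values

# VECM with one cointegrating relation; the long-run equilibrium comes from the
# cointegrating vector, and short-run causality can be tested on the estimated system.
vecm_res = VECM(data[cols], k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(vecm_res.summary())
```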

Keywords: CO2 emissions, energy consumption, GDP, Israel, time series analysis

Procedia PDF Downloads 628
9340 The Ontological Memory in Bergson as a Conceptual Tool for the Analysis of the Digital Conjuncture

Authors: Douglas Rossi Ramos

Abstract:

The current digital conjuncture, called by some authors the 'Internet of Things' (IoT), 'Web 2.0' or even 'Web 3.0', consists of a network that encompasses any communication of objects and entities, such as data, information, technologies, and people. At this juncture, especially characterized by an "object socialization," communication can no longer be represented as a simple informational flow of messages from a sender, crossing a channel or medium, and reaching a receiver. The idea of communication must, therefore, be thought of more broadly, in a way that makes it possible to analyze the communicative process through interactions between humans and nonhumans. To think about this complexity, that is, a communicative process that encompasses both humans and other communicating beings or entities (objects and things), it is necessary to constitute a new epistemology of communication and to rethink concepts and notions commonly attributed to humans, such as 'memory'. This research aims to contribute to this epistemological constitution through a discussion of the notion of memory according to the complex ontology of Henri Bergson. Among the results (the notion of memory in Bergson presents itself as a conceptual tool for the analysis of posthumanism and the anthropomorphic conjuncture of the new advent of the digital), there emerged the need to think of an ontological memory, analyzed as a being in itself (the being-in-itself of memory), as a strategy for understanding the forms of interaction and communication that constitute the new digital conjuncture, in which communicating beings or entities tend to interact with each other. Rethinking the idea of communication beyond the dimension of transmission in informative sequences paves the way for an ecological perspective on the condition of dwelling in the digital.

Keywords: communication, digital, Henri Bergson, memory

Procedia PDF Downloads 134
9339 Assessment of Influence of Short-Lasting Whole-Body Vibration on Joint Position Sense and Body Balance–A Randomised Masked Study

Authors: Anna Slupik, Anna Mosiolek, Sebastian Wojtowicz, Dariusz Bialoszewski

Abstract:

Introduction: Whole-body vibration (WBV) uses high frequency mechanical stimuli generated by a vibration plate and transmitted through bone, muscle and connective tissues to the whole body. Research has shown that long-term vibration-plate training improves neuromuscular facilitation, especially in afferent neural pathways, responsible for the conduction of vibration and proprioceptive stimuli, muscle function, balance and proprioception. Some researchers suggest that the vibration stimulus briefly inhibits the conduction of afferent signals from proprioceptors and can interfere with the maintenance of body balance. The aim of this study was to evaluate the influence of a single set of exercises associated with whole-body vibration on the joint position sense and body balance. Material and methods: The study enrolled 55 people aged 19-24 years. These individuals were randomly divided into a test group (30 persons) and a control group (25 persons). Both groups performed the same set of exercises on a vibration plate. The following vibration parameters: frequency of 20Hz and amplitude of 3mm, were used in the test group. The control group performed exercises on the vibration plate while it was off. All participants were instructed to perform six dynamic exercises lasting 30 seconds each with a 60-second period of rest between them. The exercises involved large muscle groups of the trunk, pelvis and lower limbs. Measurements were carried out before and immediately after exercise. Joint position sense (JPS) was measured in the knee joint for the starting position at 45° in an open kinematic chain. JPS error was measured using a digital inclinometer. Balance was assessed in a standing position with both feet on the ground with the eyes open and closed (each test lasting 30 sec). Balance was assessed using Matscan with FootMat 7.0 SAM software. The surface of the ellipse of confidence and front-back as well as right-left swing were measured to assess balance. Statistical analysis was performed using Statistica 10.0 PL software. Results: There were no significant differences between the groups, both before and after the exercise (p> 0.05). JPS did not change in both the test (10.7° vs. 8.4°) and control groups (9.0° vs. 8.4°). No significant differences were shown in any of the test parameters during balance tests with the eyes open or closed in both the test and control groups (p> 0.05). Conclusions. 1. Deterioration in proprioception or balance was not observed immediately after the vibration stimulus. This suggests that vibration-induced blockage of proprioceptive stimuli conduction can have only a short-lasting effect that occurs only as long as a vibration stimulus is present. 2. Short-term use of vibration in treatment does not impair proprioception and seems to be safe for patients with proprioceptive impairment. 3. These results need to be supplemented with an assessment of proprioception during the application of vibration stimuli. Additionally, the impact of vibration parameters used in the exercises should be evaluated.

Keywords: balance, joint position sense, proprioception, whole body vibration

Procedia PDF Downloads 310
9338 Enhancement of Long Term Peak Demand Forecast in Peninsular Malaysia Using Hourly Load Profile

Authors: Nazaitul Idya Hamzah, Muhammad Syafiq Mazli, Maszatul Akmar Mustafa

Abstract:

The peak demand forecast is crucial for identifying the future generation plant-up needed in the long-term capacity planning analysis for Peninsular Malaysia, as well as for transmission and distribution network planning activities. Currently, the peak demand forecast (in megawatts) is derived from the generation forecast by using a load factor assumption. However, forecasts using this method have underperformed due to structural changes in the economy, emerging trends and weather uncertainty. The dynamic changes in these drivers will result in many possible outcomes of peak demand for Peninsular Malaysia. This paper looks into an independent model of peak demand forecasting. The model begins with the selection of driver variables to capture long-term growth. This selection and construction of variables, which include econometric, emerging-trend and energy variables, will have an impact on the peak forecast. The actual framework begins with the development of the system energy and load shape forecasts by using the system's hourly data. The shape forecast represents the system shape assuming all embedded technologies and use patterns continue into the future. This is necessary to identify movements in the peak hour or changes in the system load factor. The next step is developing the peak forecast, which involves an iterative process to explore model structures and variables. The final step is combining the system energy, shape, and peak forecasts into the hourly system forecast and then modifying it with forecast adjustments. Forecast adjustments include, among others, sales forecasts for electric vehicles, solar and other adjustments. The framework results in an hourly forecast that captures growth, peak usage and new technologies. The advantage of this approach compared to the current methodology is that the peaks capture new technology impacts that change the load shape.
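
A much-simplified, hedged sketch of the shape-plus-energy step is shown below: a normalized hourly load shape is scaled by a system energy forecast, adjusted, and the peak is read off the resulting hourly series. The file name, energy figure and zero adjustment are placeholders, not values from the paper.

```python
import pandas as pd

# Hourly system demand history (MW), indexed by timestamp; file name is hypothetical.
hourly = pd.read_csv("system_hourly_load.csv", index_col=0, parse_dates=True)["mw"]

# Normalized load shape: each hour's share of the year's energy.
shape = hourly / hourly.sum()

energy_forecast_gwh = 135_000          # assumed long-term system energy forecast
ev_solar_adjustment = hourly * 0.0     # placeholder for EV/solar forecast adjustments (MW)

# Hourly forecast = shape x forecast energy, plus adjustments that reshape the profile.
hourly_forecast = shape * energy_forecast_gwh * 1_000 + ev_solar_adjustment

peak_mw = hourly_forecast.max()
load_factor = hourly_forecast.mean() / peak_mw
print(f"Forecast peak: {peak_mw:,.0f} MW, load factor: {load_factor:.2f}")
```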

Keywords: hourly load profile, load forecasting, long term peak demand forecasting, peak demand

Procedia PDF Downloads 136
9337 Machine Learning Assisted Performance Optimization in Memory Tiering

Authors: Derssie Mebratu

Abstract:

As a large variety of microservices, web services, social graph applications, and media applications are continuously developed, it is vital to design and build a reliable, efficient, and fast memory tiering system. Despite limited design, implementation, and deployment in the last few years, several techniques have been developed to improve memory tiering systems in the cloud. Some of these techniques develop an optimal scanning frequency; improve the tracking of page movement; identify recently accessed pages; store pages across the tiers; and classify pages as hot, warm, or cold, so that hot pages are stored in the first tier, Dynamic Random Access Memory (DRAM), warm pages in the second tier, Compute Express Link (CXL) memory, and cold pages in the third tier, Non-Volatile Memory (NVM). In addition to current proposals and implementations, we develop a new technique based on a machine learning algorithm that improves throughput by 25% and latency by 95% compared to the baseline.
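
A hedged, toy sketch of the hot/warm/cold classification step is given below; the thresholds and the per-scan access count are assumptions, and a real system would track page accesses in the kernel rather than in Python, with the machine learning model tuning these cut-offs and the scanning frequency.

```python
from dataclasses import dataclass

@dataclass
class Page:
    page_id: int
    accesses_last_scan: int   # access count observed during the last scan interval

def classify(page: Page, hot_threshold: int = 64, warm_threshold: int = 8) -> str:
    """Map a page to a memory tier based on recent access frequency."""
    if page.accesses_last_scan >= hot_threshold:
        return "DRAM"   # tier 1: hot pages
    if page.accesses_last_scan >= warm_threshold:
        return "CXL"    # tier 2: warm pages
    return "NVM"        # tier 3: cold pages

pages = [Page(0, 120), Page(1, 15), Page(2, 1)]
print({p.page_id: classify(p) for p in pages})   # {0: 'DRAM', 1: 'CXL', 2: 'NVM'}
```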

Keywords: machine learning, bayesian optimization, memory tiering, CXL, DRAM

Procedia PDF Downloads 74
9336 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment

Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee

Abstract:

Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches. RNNs are known to be well suited to sequence modeling, whilst CNNs are suited to the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs, as stated above, to present a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representation of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short-Term Memory (Bi-LSTM) network to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used, in the same fashion as an attention mechanism, over the Bi-LSTM outputs to yield the final sentence representations for the classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
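
A hedged Keras sketch of the convolution-then-Bi-LSTM sentence encoder described above is given below; the vocabulary size, sequence length and layer sizes are assumptions, and the paper's phrase-level and sentence-level relation vectors are only approximated by simple difference and product features.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAX_LEN, EMB = 30_000, 40, 300        # assumed sizes, not from the paper

def sentence_encoder() -> tf.keras.Model:
    tokens = layers.Input(shape=(MAX_LEN,), dtype="int32")
    emb = layers.Embedding(VOCAB, EMB)(tokens)
    # Convolutional layer extracts higher-level phrase (n-gram) representations...
    phrases = layers.Conv1D(128, kernel_size=3, padding="same", activation="relu")(emb)
    # ...which are fed to a Bi-LSTM to obtain the final sentence representation.
    encoded = layers.Bidirectional(layers.LSTM(128))(phrases)
    return tf.keras.Model(tokens, encoded)

encoder = sentence_encoder()                  # shared between premise and hypothesis
premise = layers.Input(shape=(MAX_LEN,), dtype="int32")
hypothesis = layers.Input(shape=(MAX_LEN,), dtype="int32")
p_vec, h_vec = encoder(premise), encoder(hypothesis)

# Simple sentence-to-sentence relation features stand in for the paper's relation vectors.
relation = layers.concatenate([p_vec, h_vec,
                               layers.subtract([p_vec, h_vec]),
                               layers.multiply([p_vec, h_vec])])
hidden = layers.Dense(128, activation="relu")(relation)
logits = layers.Dense(3, activation="softmax")(hidden)    # entailment / neutral / contradiction
model = tf.keras.Model([premise, hypothesis], logits)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```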

Keywords: deep neural models, natural language inference, recognizing textual entailment (RTE), sentence-to-sentence relation

Procedia PDF Downloads 326