Search results for: time deadline
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18158

16928 Does Pakistan Stock Exchange Offer Diversification Benefits to Regional and International Investors: A Time-Frequency (Wavelets) Analysis

Authors: Syed Jawad Hussain Shahzad, Muhammad Zakaria, Mobeen Ur Rehman, Saniya Khaild

Abstract:

This study examines the co-movement between the Pakistani, Indian, S&P 500 and Nikkei 225 stock markets using weekly data from 1998 to 2013. The time-frequency relationship between the selected stock markets is examined using the continuous wavelet power spectrum, the cross-wavelet transform and cross (squared) wavelet coherency. The empirical evidence suggests strong dependence between the Pakistani and Indian stock markets. The co-movement of the Pakistani index with the U.S. and Japanese developed markets varies over time and frequency, with the long-run relationship dominant. The results of the cross-wavelet and wavelet coherence analyses indicate moderate covariance and correlation between the stock indexes, and the markets are in phase (i.e. cyclical in nature) over varying durations. The Pakistani stock market lagged the Indian stock market over the entire period at the 8~32 and 64~256 week scales. Similar findings are evident for the S&P 500 and Nikkei 225 indexes; however, the relationship occurs in the later part of the study period. All three wavelet indicators provide strong evidence of higher co-movement during the 2008-09 global financial crisis. The empirical analysis reveals strong evidence that portfolio diversification benefits vary across frequencies and time. This analysis is unique and has several practical implications for regional and international investors when assigning optimal weights to different assets in portfolio formation.
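
As a rough illustration of the cross-wavelet computation underlying such co-movement analysis, the sketch below uses PyWavelets (an assumed tool, not named by the authors) with a complex Morlet wavelet on synthetic weekly returns; the series and scale range are placeholders, not the study's data.

```python
import numpy as np
import pywt

def cross_wavelet_power(x, y, dt=1.0, scales=None, wavelet="cmor1.5-1.0"):
    """Continuous wavelet transforms of two return series and their
    cross-wavelet power |W_x * conj(W_y)|, used to gauge co-movement."""
    if scales is None:
        scales = np.arange(2, 257)          # roughly the 2-256 week band
    wx, _ = pywt.cwt(x, scales, wavelet, sampling_period=dt)
    wy, _ = pywt.cwt(y, scales, wavelet, sampling_period=dt)
    xwt = wx * np.conj(wy)                  # cross-wavelet transform
    return np.abs(xwt), np.angle(xwt)       # power and relative phase (lead/lag)

# Synthetic weekly returns standing in for the Pakistani and Indian indexes
rng = np.random.default_rng(0)
kse, bse = rng.standard_normal(800), rng.standard_normal(800)
power, phase = cross_wavelet_power(kse, bse, dt=1.0)
```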

Keywords: co-movement, Pakistan stock exchange, S&P 500, Nikkei 225, wavelet analysis

Procedia PDF Downloads 357
16927 General Network with Four Nodes and Four Activities with Triangular Fuzzy Number as Activity Times

Authors: Rashmi Tamhankar, Madhav Bapat

Abstract:

In many projects, human judgment must be used to determine the duration of activities, and these estimates may vary from person to person. Hence, there is vagueness about activity durations in network planning. Fuzzy sets can handle such vague or imprecise concepts and can be applied to such networks; the vague activity times can be represented by triangular fuzzy numbers. In this paper, a general network with fuzzy activity times is considered, conditions for the critical path are obtained, and the total float of each activity is computed. Several numerical examples are discussed.
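
A minimal sketch of a fuzzy forward pass for a four-node network with triangular fuzzy activity times; the activity values and the centroid-based ranking used for the fuzzy maximum are illustrative assumptions, not the paper's data.

```python
# Triangular fuzzy number as (a, m, b): lower, modal, upper values.
def tfn_add(p, q):
    return tuple(pi + qi for pi, qi in zip(p, q))

def centroid(p):                 # simple ranking index for comparing TFNs
    return sum(p) / 3.0

def tfn_max(p, q):               # fuzzy max by centroid ranking (one common convention)
    return p if centroid(p) >= centroid(q) else q

# Four-node network 1->2, 1->3, 2->4, 3->4 with fuzzy activity times (illustrative values)
times = {(1, 2): (2, 3, 4), (1, 3): (1, 2, 5), (2, 4): (3, 4, 6), (3, 4): (2, 3, 4)}

E = {1: (0, 0, 0)}               # fuzzy earliest event times (forward pass)
E[2] = tfn_add(E[1], times[(1, 2)])
E[3] = tfn_add(E[1], times[(1, 3)])
E[4] = tfn_max(tfn_add(E[2], times[(2, 4)]), tfn_add(E[3], times[(3, 4)]))
print("Fuzzy project duration:", E[4])
```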

Keywords: PERT, CPM, triangular fuzzy numbers, fuzzy activity times

Procedia PDF Downloads 473
16926 Scalable UI Test Automation for Large-scale Web Applications

Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani

Abstract:

This research concerns optimizing UI test automation for large-scale web applications. The test target is the HHAexchange homecare management web application, which seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionality. This study focuses on user interface automation testing for that web application. The quality assurance team must execute many manual user interface test cases during the development process to confirm there are no regression bugs. The team automated 346 test cases, and the UI automation test execution time exceeded 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, so the quality assurance automation team modernized the test automation framework to optimize execution time. The base of the web UI automation test environment is Selenium, and the test code is written in Python. Adopting a compiled language for test code leads to an inefficient workflow when introducing scalability into a traditional test automation environment, so a scripting language was adopted. The scalability implementation relies mainly on AWS serverless technology, specifically the Elastic Container Service. Scalability here means the ability to automatically provision machines for test automation and to increase or decrease the number of machines running the tests, so that test cases can run in parallel and execution time is dramatically decreased. Introducing scalable test automation is about more than reducing execution time: challenging bugs such as race conditions may also be detected, since test cases can be executed at the same time. If API and unit tests are implemented, test strategies can complement this scalability testing more efficiently; however, in web applications, API and unit testing cannot, as a practical matter, cover 100% of functional testing since they do not reach front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system and confirmed both the reduction of test case execution time and the detection of a challenging bug. The paper first describes the detailed architecture of the scalable test automation environment, then reports the actual reduction in execution time and an example of a challenging issue that was detected.
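
A minimal sketch of how such test fan-out could look, assuming boto3 and an ECS Fargate setup; the cluster, task definition, container and subnet names below are hypothetical placeholders, not values from the study.

```python
import boto3

ecs = boto3.client("ecs")

def launch_test_shards(test_batches, cluster="ui-test-cluster",
                       task_def="selenium-pytest:1", container="runner"):
    """Start one Fargate task per batch of UI test cases; each container
    receives its batch via an environment variable and runs in parallel."""
    for batch in test_batches:
        ecs.run_task(
            cluster=cluster,
            taskDefinition=task_def,
            launchType="FARGATE",
            count=1,
            networkConfiguration={"awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],      # placeholder subnet
                "assignPublicIp": "ENABLED"}},
            overrides={"containerOverrides": [{
                "name": container,
                "environment": [{"name": "TEST_BATCH", "value": " ".join(batch)}]}]},
        )

# e.g. split the 346 test cases into 20 shards and run them concurrently
```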

Keywords: AWS, elastic container service, scalability, serverless, UI automation test

Procedia PDF Downloads 106
16925 Cyclic Heating Effect on Hardness of Copper

Authors: Tahany W. Sadak

Abstract:

The presented work discusses research results concerning the effect of the heat treatment process. Thermal fatigue, i.e. repeated heating and cooling, affects the ductility or brittleness of the material. In this research, 70 copper specimens (1.5 mm thickness, 85 mm length, 32 mm width) are subjected to thermal fatigue under different conditions. The heating temperatures Th are 100, 300 and 500 °C, and the number of repeated cycles N ranges from 1 to 100. The heating time is th = 600 s and the cooling time is tc = 900 s. The results are evaluated, compared to each other, and compared to those of specimens not subjected to thermal fatigue.

Keywords: copper, thermal analysis, heat treatment, hardness, thermal fatigue

Procedia PDF Downloads 434
16924 Using Gaussian Process in Wind Power Forecasting

Authors: Hacene Benkhoula, Mohamed Badreddine Benabdella, Hamid Bouzeboudja, Abderrahmane Asraoui

Abstract:

Wind is a random variable that is difficult to master; for this reason, we developed mathematical and statistical methods to model and forecast wind power. Gaussian Processes (GP) are one of the most widely used families of stochastic processes for modeling dependent data observed over time, space, or time and space. A GP describes an underlying, unobserved process that is used to solve the problem. The purpose of this paper is to present how to forecast wind power using a GP. The Gaussian process forecasting method is presented. To validate the presented approach, a simulation in the MATLAB environment is given.
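
A minimal sketch of GP-based wind power forecasting, assuming scikit-learn in place of the authors' MATLAB implementation; the wind-speed/power pairs are illustrative only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy training set: wind speed (m/s) -> measured power (kW); values are illustrative
X = np.array([[3.0], [5.0], [7.0], [9.0], [11.0], [13.0]])
y = np.array([10.0, 80.0, 250.0, 520.0, 800.0, 950.0])

kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=25.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Forecast with a predictive mean and an uncertainty band
x_new = np.array([[8.0], [12.0]])
mean, std = gp.predict(x_new, return_std=True)
print(mean, std)
```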

Keywords: wind power, Gaussian process, modelling, forecasting

Procedia PDF Downloads 417
16923 Secure Automatic Key SMS Encryption Scheme Using Hybrid Cryptosystem: An Approach for One Time Password Security Enhancement

Authors: Pratama R. Yunia, Firmansyah, I., Ariani, Ulfa R. Maharani, Fikri M. Al

Abstract:

Nowadays, notwithstanding that the role of SMS as a means of communication has been largely replaced by online applications such as WhatsApp, Telegram, and others, the fact that SMS is still used for certain important communication needs is indisputable. Among these is sending one-time passwords (OTP) as an authentication medium for various online applications, ranging from chatting and shopping to online banking. However, using SMS does not in itself guarantee the security of transmitted messages. In fact, messages transmitted between base transceiver stations (BTS) are still in plaintext, making them extremely vulnerable to eavesdropping, especially when the message is confidential, as an OTP is. One solution to this problem is to use an SMS application that provides security services for each transmitted message. Responding to this problem, in this study an automatic-key SMS encryption scheme was designed as a means to secure SMS communication. The proposed scheme allows SMS sending that is automatically encrypted with constantly changing keys (automatic key update), automatic key exchange, and automatic key generation. In terms of the security method, the proposed scheme applies cryptographic techniques with a hybrid cryptosystem mechanism. To demonstrate the proposed scheme, a client-to-client SMS encryption application was developed on the Java platform with AES-256 as the encryption algorithm, RSA-768 as the public and private key generator, and SHA-256 as the message hashing function. The result of this study is a secure automatic-key SMS encryption scheme using a hybrid cryptosystem that can guarantee the security of every transmitted message and thus serve as a reliable solution for sending confidential messages through SMS, although it still has weaknesses in terms of processing time.
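
As an illustration of the hybrid-cryptosystem idea (a fresh symmetric session key per message, wrapped with the recipient's public key), here is a minimal Python sketch using the PyCryptodome library; the study itself used Java, and a 2048-bit RSA key is used here because common libraries reject the 768-bit size reported in the paper.

```python
from Crypto.PublicKey import RSA
from Crypto.Cipher import AES, PKCS1_OAEP
from Crypto.Random import get_random_bytes
from Crypto.Hash import SHA256

# Recipient key pair (2048 bits here; the paper reports RSA-768)
recipient_key = RSA.generate(2048)

def encrypt_sms(plaintext: bytes, public_key):
    session_key = get_random_bytes(32)               # fresh AES-256 key per message
    enc_key = PKCS1_OAEP.new(public_key).encrypt(session_key)
    cipher = AES.new(session_key, AES.MODE_EAX)
    ciphertext, tag = cipher.encrypt_and_digest(plaintext)
    digest = SHA256.new(plaintext).digest()          # integrity check of the message
    return enc_key, cipher.nonce, ciphertext, tag, digest

def decrypt_sms(enc_key, nonce, ciphertext, tag, private_key):
    session_key = PKCS1_OAEP.new(private_key).decrypt(enc_key)
    cipher = AES.new(session_key, AES.MODE_EAX, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag)

otp = b"Your OTP is 428913"
enc_key, nonce, ct, tag, digest = encrypt_sms(otp, recipient_key.publickey())
assert decrypt_sms(enc_key, nonce, ct, tag, recipient_key) == otp
```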

Keywords: encryption scheme, hybrid cryptosystem, one time password, SMS security

Procedia PDF Downloads 128
16922 A Comparative Assessment of Membrane Bioscrubber and Classical Bioscrubber for Biogas Purification

Authors: Ebrahim Tilahun, Erkan Sahinkaya, Bariş Calli̇

Abstract:

Raw biogas is a valuable renewable energy source; however, its impurities usually need to be removed. The presence of hydrogen sulfide (H2S) in biogas has detrimental corrosion effects on cogeneration units, so removal of H2S can significantly improve biogas quality. In this work, a conventional bioscrubber (CBS) and a dense membrane bioscrubber (DMBS) were comparatively evaluated in terms of H2S removal efficiency (RE), CH4 enrichment and alkaline consumption at gas residence times ranging from 5 to 20 min. Both bioscrubbers were fed with a synthetic biogas containing H2S (1%), CO2 (39%) and CH4 (60%). The results show that a high RE (98%) was obtained in the DMBS when the gas residence time was 20 min, whereas a slightly lower CO2 RE was observed. In the CBS system, the outlet H2S concentration was always lower than 250 ppmv and the H2S RE remained higher than 98% regardless of the gas residence time, although high alkaline consumption and frequent absorbent replacement limited its cost-effectiveness. The results also indicate that in the DMBS, when the gas residence time increased to 20 min, the CH4 content in the treated biogas was enriched up to 80%. In contrast, when operating the CBS unit, the CH4 content of the raw biogas (60%) decreased roughly threefold. The lower CH4 content in the CBS was probably caused by extreme dilution of the biogas with air (N2 and O2). According to the results obtained here, the DMBS system is a robust and effective biotechnology in comparison with the CBS. Hence, the DMBS has better potential for real-scale applications.

Keywords: biogas, bioscrubber, desulfurization, PDMS membrane

Procedia PDF Downloads 226
16921 Radio Frequency Identification (RFID) Cost-Effective, Location-Based System for Managing Construction Materials

Authors: Mourad Bakouka, Abdelaziz Rabehi

Abstract:

Companies need logistics and transportation arrangements that can adapt to the changing nature of construction sites, so that they can react quickly when needed. A study was conducted to develop a way to locate and track materials on construction sites. The system relies on an integration of RFID and GPS. The study also reports how the platform has been used in construction. Many advantages were found, including reductions in both time and costs as well as improved management of material orders. For example, the time needed to start up a project was shortened from two weeks to three days with just a single digital order. Widespread adoption of the technology is still limited, largely due to a lack of awareness and difficulty in connecting to it. However, as more companies embrace it in construction, the technology is expected to become ubiquitous. The developed platform provides contractors and construction managers with real-time information about the status of materials and work, allowing them to better manage the workflow in a project. The study sheds new light on this subject and raises awareness of the use of smart tools in constructing buildings.

Keywords: materials management, internet of things (IoT), radio frequency identification (RFID), construction site, supply chain management

Procedia PDF Downloads 81
16920 Design and Evaluation of a Fully-Automated Fluidized Bed Dryer for Complete Drying of Paddy

Authors: R. J. Pontawe, R. C. Martinez, N. T. Asuncion, R. V. Villacorte

Abstract:

Drying of high-moisture paddy remains a major problem in the Philippines, especially during inclement weather conditions. To alleviate the problem, mechanical dryers such as flat-bed and recirculating batch-type dryers are used. However, drying to 14% (wet basis) final moisture content takes a long time, 10-12 hours, and is tedious, which is not ideal for handling high-moisture paddy. A fully automated pilot-scale fluidized bed drying system with a capacity of 500 kilograms per hour was evaluated using high-moisture paddy. The developed fluidized bed dryer was evaluated using four drying temperatures and two variations in fluidization time at constant airflow, static pressure and tempering period. Complete drying of paddy with ≥28% (w.b.) initial MC was attained after two passes of fluidized-bed drying at 2 minutes of exposure to a 70 °C drying temperature and 4.9 m/s superficial air velocity, followed by a 60 min ambient air tempering period (30 min without ventilation and 30 min with air ventilation), for a total drying time of 2.07 h. Around 82% of the normal mechanical drying time was saved at the 70 °C drying temperature. The drying cost was calculated to be P0.63 per kilogram of wet paddy. Specific heat energy consumption was only 2.84 MJ/kg of water removed. The head rice yield recovery of the dried paddy passed the Philippine Agricultural Engineering Standards. Sensory evaluation showed that the color and taste of the samples dried in the fluidized bed dryer were comparable to air-dried paddy. The optimum drying parameters for the fluidized bed dryer are a 70 °C drying temperature, 2 min fluidization time, 4.9 m/s superficial air velocity, 10.16 cm grain depth and a 60 min ambient air tempering period.

Keywords: drying, fluidized bed dryer, head rice yield, paddy

Procedia PDF Downloads 325
16919 Ultrasound Assisted Cooling Crystallization of Lactose Monohydrate

Authors: Sanjaykumar R. Patel, Parth R. Kayastha

Abstract:

α-lactose monohydrate is widely used in the pharmaceutical industries as an inactive substance that acts as a vehicle or medium for a drug or other active substance. It is a byproduct of the dairy industry, and the recovery of lactose from whey not only improves the economics of whey utilization but also reduces pollution, as lactose recovery can reduce the BOD of whey by more than 80%. In the present study, the process parameter levels were initial lactose concentration (30-50% w/w), sonication amplitude (20-40%), sonication time (2-6 hours), and crystallization temperature (10-20 °C) for the recovery of lactose by ultrasound-assisted cooling crystallization. In comparison with cooling crystallization, the use of ultrasound enhanced the lactose recovery by 39.17% (w/w). The parameters were optimized for lactose recovery using the Taguchi method. The optimum conditions found were initial lactose concentration at level 3 (50% w/w), sonication amplitude at level 2 (40%), sonication time at level 3 (6 hours), and crystallization temperature at level 1 (10 °C). The maximum recovery was found to be 85.85% at the optimum conditions. Sonication time and initial lactose concentration were found to be significant parameters for lactose recovery.
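
To illustrate how the Taguchi analysis ranks factor levels, the sketch below computes larger-the-better signal-to-noise ratios for hypothetical recovery values over an L9-style layout; the run results are placeholders, not the study's measurements.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi signal-to-noise ratio when a higher response (recovery %) is better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical L9-style runs: (concentration level, amplitude level, time level,
# temperature level) -> recovery %; the numbers below are illustrative only
runs = {
    (1, 1, 1, 1): 52.0, (1, 2, 2, 2): 61.0, (1, 3, 3, 3): 70.0,
    (2, 1, 2, 3): 66.0, (2, 2, 3, 1): 74.0, (2, 3, 1, 2): 63.0,
    (3, 1, 3, 2): 79.0, (3, 2, 1, 3): 68.0, (3, 3, 2, 1): 72.0,
}

# Pooled S/N ratio of factor 1 (initial lactose concentration) at each level
for level in (1, 2, 3):
    ys = [y for k, y in runs.items() if k[0] == level]
    print(f"concentration level {level}: S/N = {sn_larger_is_better(ys):.2f} dB")
```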

Keywords: crystallization, lactose, Taguchi method, ultrasound

Procedia PDF Downloads 212
16918 A Quinary Coding and Matrix Structure Based Channel Hopping Algorithm for Blind Rendezvous in Cognitive Radio Networks

Authors: Qinglin Liu, Zhiyong Lin, Zongheng Wei, Jianfeng Wen, Congming Yi, Hai Liu

Abstract:

The multi-channel blind rendezvous problem in distributed cognitive radio networks (DCRNs) refers to how users in the network can hop to the same channel at the same time slot without any prior knowledge (i.e., each user is unaware of other users' information). The channel hopping (CH) technique is a typical solution to this blind rendezvous problem. In this paper, we propose a quinary coding and matrix structure-based CH algorithm called QCMS-CH. The QCMS-CH algorithm can guarantee the rendezvous of users using only one cognitive radio in the scenario of asynchronous clocks (i.e., arbitrary time drift between the users), heterogeneous channels (i.e., the available channel sets of users are distinct), and symmetric roles (i.e., all users play the same role). The QCMS-CH algorithm first represents a randomly selected channel (denoted by R) as a fixed-length quaternary number. Then it encodes the quaternary number into a quinary bootstrapping sequence according to a carefully designed quaternary-quinary coding table with the prefix "R00". Finally, it builds a CH matrix column by column according to the bootstrapping sequence and six different types of elaborately generated subsequences. The user can access the CH matrix row by row and accordingly perform its channel hopping to attempt rendezvous with other users. We prove the correctness of QCMS-CH and derive an upper bound on its Maximum Time-to-Rendezvous (MTTR). Simulation results show that the QCMS-CH algorithm outperforms the state-of-the-art in terms of the MTTR and the Expected Time-to-Rendezvous (ETTR).

Keywords: channel hopping, blind rendezvous, cognitive radio networks, quaternary-quinary coding

Procedia PDF Downloads 91
16917 Crater Pattern on the Moon and Origin of the Moon

Authors: Xuguang Leng

Abstract:

The crater pattern on the Moon indicates that the Moon was captured by Earth relatively recently, which disproves the theory that the Moon was born as a satellite of the Earth. The Moon has been tidally locked since it became the satellite of the Earth. The Moon's near side is shielded by Earth from asteroid/comet collisions, with the center of the near side most protected. Yet the crater pattern on the Moon is fairly random, with no distinguishable empty spot or strip and no distinguishable difference between the near side and the far side. Had the Moon been born as Earth's satellite, there would be a clear crater-free spot on the near side, or a strip should the tidal lock shift over time, and far more craters on the far side. The absence of even a vague crater-free spot on the near side of the Moon indicates that the capture was a more recent event. Given Earth's much larger mass and size compared to the Moon, Earth should have collided with asteroids and comets at a much higher frequency, resulting in significant mass gain over its lifespan. Earth's larger mass and magnetic field are better at retaining water and gas against the stripping effect of the solar wind, thus accelerating the mass gain. A dwarf-planet Moon can be pulled closer and closer to the Earth over time as Earth's gravity grows stronger, eventually being captured as a satellite. Given enough time, it is possible that Earth's mass would become large enough to cause the Moon to collide with Earth.

Keywords: moon, origin, crater, pattern

Procedia PDF Downloads 97
16916 Ultrasound Markers in Evaluation of Hernias

Authors: Aniruddha Kulkarni

Abstract:

Imaging modalities are required in very few cases of external hernias, since on most occasions clinical examination is sufficient. Ultrasound helps in chronic abdominal or groin pain, equivocal clinical findings and complicated hernias. Ultrasound is useful in assessing the cause of raised intra-abdominal pressure. In certain cases it can comment on the etiology, complications and chronicity of the lesion. Screening of the rest of the abdominal organs is another important advantage of this real-time modality. Its cost-effectiveness and lack of radiation allow the modality to be used repeatedly in indicated cases, and sonography is also better accepted by patients because it is cost-effective. Advanced tissue harmonic equipment and increasing expertise are making it popular. Ultrasound can define the surgical anatomy, rent size, contents, and etiological/recurrence factors in great detail and with authority; hence, accidental findings in a planned surgical procedure can be easily avoided. Clinical dynamic Valsalva and reducibility tests can be better documented by a real-time ultrasound study. In cases of recurrence, sonography helps in assessing the hernia in greater detail as a dynamic real-time investigation. Ultrasound signs in cases of internal hernias are well comparable with CT findings.

Keywords: laparoscopic repair, hernia, CT findings, chronic pain

Procedia PDF Downloads 497
16915 Turning Parameters Affect Time up and Go Test Performance in Pre-Frail Community-Dwelling Elderly

Authors: Kuei-Yu Chien, Hsiu-Yu Chiu, Chia-Nan Chen, Shu-Chen Chen

Abstract:

Background: Frailty is associated with decreased physical performance that affects mobility in the elderly. The Timed Up and Go test (TUG) is a common method to evaluate mobility in the community. The purpose of this study was to compare the parameters in different stages of the TUG and physical performance between pre-frail elderly (PFE) and non-frail elderly (NFE). We also investigated the relationship between TUG parameters and physical performance. Methods: Ninety-two community-dwelling older adults participated in this study. Based on the Canadian Study of Health and Aging Clinical Frailty Scale, 22 older adults were classified as PFE (71.77 ± 6.05 yrs.) and 70 were classified as NFE (71.2 ± 5.02 yrs.). We assessed body composition and physical performance, including balance, muscular strength/endurance, mobility, cardiorespiratory endurance, and flexibility. Results: The PFE took significantly longer than the NFE in the TUG test (p = .004). The PFE had lower turning average angular velocity (p = .017), turning peak angular velocity (p = .041) and turning-stand-to-sit peak angular velocity (p = .037) than the NFE. The turning-related parameters were related to the open-eye stand on the right foot, the 30-second chair stand test, the back scratch test, and the 2-minute step test. Conclusions: Turning average angular velocity, turning peak angular velocity and turning-stand-to-sit peak angular velocity mainly affected TUG performance. We suggest that static/dynamic balance, agility, flexibility, and lower-limb muscle strengthening exercises are important for the PFE.

Keywords: mobility, agility, active ageing, functional fitness

Procedia PDF Downloads 186
16914 A Constitutive Model for Time-Dependent Behavior of Clay

Authors: T. N. Mac, B. Shahbodaghkhan, N. Khalili

Abstract:

A new elastic-viscoplastic (EVP) constitutive model is proposed for the analysis of the time-dependent behavior of clay. The proposed model is based on bounding surface plasticity and the viscoplastic consistency framework to establish a continuous transition from plasticity to rate-dependent viscoplasticity. Unlike overstress-based models, this model satisfies the consistency condition in formulating the constitutive equation of the EVP model. The procedure for deriving the constitutive relationship is also presented. Simulation results and comparisons with experimental data are then presented to demonstrate the performance of the model.

Keywords: bounding surface, consistency theory, constitutive model, viscosity

Procedia PDF Downloads 492
16913 Effectiveness of ATMS (Advanced Transport Management Systems) in Asuncion, Paraguay

Authors: Sung Ho Oh

Abstract:

Advanced traffic lights, a traffic information collection and provision system, CCTVs for traffic control, and a traffic information center were installed in Asuncion, the capital of Paraguay. A pre-post comparison of the installation revealed significant changes. Even though traffic volumes increased, travel speeds were higher, so travel time from origin to destination decreased. The estimated savings in travel time, fuel cost, and environmental cost are about 47 million US dollars per year. Satisfaction survey results for the installation are presented together with a statistical significance analysis.

Keywords: advanced transport management systems, effectiveness, Paraguay, traffic lights

Procedia PDF Downloads 352
16912 Undernutrition Among Children Below Five Years of Age in Uganda: A Deep Dive into Space and Time

Authors: Vallence Ngabo Maniragaba

Abstract:

This study aimed at examining the variations of undernutrition among children below five years of age in Uganda. The approach of spatial and spatiotemporal analysis helped in identifying cluster patterns, hot spots and emerging hot spots. Data from the six Uganda Demographic and Health Surveys spanning 1990 to 2016 were used, with undernutrition among children under five years of age as the main outcome variable. All data relevant to this study were retrieved from the survey datasets and combined with the 214 shape files for the districts of Uganda to enable spatial and spatiotemporal analysis. Spatial maps of the prevalence of undernutrition, both in space and time, were generated using ArcGIS Pro version 2.8. Moran's I, an index of spatial autocorrelation, was used to rule out spatial randomness and identify spatially clustered patterns of hot or cold spot areas. Furthermore, space-time cubes were generated to establish the trend in undernutrition as well as to mirror its variations over time and across Uganda. Moreover, emerging hot spot analysis was done to help identify the patterns of undernutrition over time. The results indicate a heterogeneous distribution of undernutrition across Uganda, and the same variations were also evident over time. Moran's I confirmed spatially clustered patterns as opposed to random distributions of undernutrition prevalence. Four hot spot areas, namely the Karamoja, Sebei, West Nile and Toro regions, were significantly evident; most of the central parts of Uganda were identified as cold spot clusters, while most of Western Uganda and the Acholi and Lango regions had no statistically significant spatial patterns by the year 2016. The spatiotemporal analysis identified the Karamoja and Sebei regions as clusters of persistent, consecutive and intensifying hot spots; the West Nile region was identified as a sporadic hot spot area, while the Toro region showed both sporadic and emerging hot spots. In conclusion, undernutrition is a silent pandemic that needs to be handled with both hands. At 31.2 percent, the prevalence is still very high. The distribution across the country is nonuniform, with areas such as the Karamoja, West Nile, Sebei and Toro regions being epicenters of undernutrition in Uganda. Over time, the same areas have exhibited high undernutrition prevalence. Policymakers, as well as implementers, should bear in mind the spatial variations across the country and prioritize hot spot areas in order to deliver efficient, timely and region-specific interventions.
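
A minimal sketch of the global Moran's I statistic used to test for spatial clustering, written directly in NumPy; the four-district prevalence values and contiguity matrix are illustrative, not the survey data.

```python
import numpy as np

def morans_i(y, w):
    """Global Moran's I for district-level prevalence y and a spatial
    weights matrix w (n x n, zero diagonal)."""
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)
    n = y.size
    z = y - y.mean()
    s0 = w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# Tiny illustrative example: 4 districts with rook-style contiguity
prevalence = np.array([0.45, 0.41, 0.18, 0.15])        # hypothetical values
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print("Moran's I =", round(morans_i(prevalence, W), 3))  # positive => clustering
```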

Keywords: undernutrition, spatial autocorrelation, hotspots analysis, geographically weighted regressions, emerging hotspots analysis, under-fives, Uganda

Procedia PDF Downloads 86
16911 A Dynamical Approach for Relating Energy Consumption to Hybrid Inventory Level in the Supply Chain

Authors: Benga Ebouele, Thomas Tengen

Abstract:

Due to long lead times, work-in-process (WIP) inventory can build up within the supply chain of most manufacturing systems. This implies that there are fewer finished goods on hand and more in process, because the work remains in the factory too long and cannot be sold to customers. The supply chain of such a manufacturing system is then considered inefficient, as it takes too much time to produce the finished goods. The time consumed in each operation of the supply chain has an associated energy cost. Such phenomena can be harmful for a hybrid inventory system, because considerable space may be needed to store these semi-finished goods, and the final energy cost of producing, holding and delivering the goods to customers is uncertain. A principle that reduces energy waste within the supply chain of manufacturing firms should therefore be available to all inventory managers in pursuit of profitability. Decision making by inventory managers in this situation is a modeling process, whereby a dynamical approach is used to depict, examine, specify and even operationalize the relationship between energy consumption and hybrid inventory level. The relationship between energy consumption and inventory level is established, which indicates a poor level of control and hence a potential for energy savings.

Keywords: dynamic modelling, energy used, hybrid inventory, supply chain

Procedia PDF Downloads 268
16910 Optimal Allocation of Multiple Emergency Resources for a Single Potential Accident Node: A Mixed Integer Linear Program

Authors: Yongjian Du, Jinhua Sun, Kim M. Liew, Huahua Xiao

Abstract:

Optimal allocation of emergency resources before a disaster is of great importance for emergency response. In practice, pre-protection of a single critical node where accidents may occur is common. In this study, a model is developed to determine location and inventory decisions for multiple emergency resources among a set of candidate stations so as to minimize the total cost, subject to budgetary and capacity constraints. The total cost includes the economic accident loss, which follows a probability distribution over time, and the warehousing cost of resources, which increases over time. A ratio is defined to measure the degree to which a storage station serves only the target node; it becomes larger as the distance between them decreases. To keep the program linear, it is assumed that the travel time of emergency resources to the accident scene has a linear relationship with the economic accident loss. A computational experiment is conducted to illustrate how the proposed model works, and the results indicate its effectiveness and practicability.
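
A toy version of such a pre-allocation model, assuming the PuLP library; the station data, the linear loss-per-distance coefficient, and the budget are illustrative placeholders rather than the paper's formulation.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, LpInteger

# Candidate stations with distance (km) to the single potential accident node,
# per-unit warehousing cost, capacity, and fixed opening cost (illustrative data)
stations = ["S1", "S2", "S3"]
dist   = {"S1": 2.0, "S2": 5.0, "S3": 9.0}
hold   = {"S1": 4.0, "S2": 3.0, "S3": 2.0}
cap    = {"S1": 20,  "S2": 30,  "S3": 40}
open_c = {"S1": 50,  "S2": 40,  "S3": 30}
demand, budget, loss_per_km_unit = 40, 150, 0.8   # assumed linear time/loss relation

m = LpProblem("emergency_preallocation", LpMinimize)
y = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in stations}
x = {s: LpVariable(f"units_{s}", lowBound=0, cat=LpInteger) for s in stations}

# Objective: warehousing cost plus distance-weighted expected accident loss
m += lpSum(hold[s] * x[s] + loss_per_km_unit * dist[s] * x[s] for s in stations)
m += lpSum(x[s] for s in stations) >= demand                   # cover the node
for s in stations:
    m += x[s] <= cap[s] * y[s]                                 # capacity only if opened
m += lpSum(open_c[s] * y[s] for s in stations) <= budget       # budget constraint
m.solve()
print({s: int(x[s].value()) for s in stations})
```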

Keywords: emergency response, integer linear program, multiple emergency resources, pre-allocation decisions, single potential accident node

Procedia PDF Downloads 153
16909 One-Step Time Series Predictions with Recurrent Neural Networks

Authors: Vaidehi Iyer, Konstantin Borozdin

Abstract:

Time series prediction problems have many important practical applications but are notoriously difficult for statistical modeling. Recently, machine learning methods have attracted significant interest as practical tools applied to a variety of problems, even though developments in this field tend to be semi-empirical. This paper explores the application of Long Short-Term Memory based Recurrent Neural Networks to the one-step prediction of time series for both trend and stochastic components. Two types of data are analyzed: daily stock prices, which are often considered a typical example of a random walk, and weather patterns dominated by seasonal variations. Results from both analyses are compared, and a reinforcement learning framework is used to select the more efficient of Recurrent Neural Networks and more traditional autoregression methods. It is shown that both methods are able to follow long-term trends and seasonal variations closely but have difficulties reproducing day-to-day variability. Future research directions and potential real-world applications are briefly discussed.
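
A minimal sketch of one-step-ahead LSTM prediction, assuming TensorFlow/Keras; the sliding-window length and the synthetic series below stand in for the stock-price and weather data.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def make_windows(series, lookback=20):
    """Turn a 1-D series into (samples, lookback, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)

series = np.sin(np.linspace(0, 60, 1200)) + 0.1 * np.random.randn(1200)  # stand-in data
X, y = make_windows(series)

model = Sequential([LSTM(32, input_shape=(X.shape[1], 1)), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

next_value = model.predict(X[-1:])        # one-step-ahead forecast
```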

Keywords: long short term memory, prediction methods, recurrent neural networks, reinforcement learning

Procedia PDF Downloads 229
16908 Earthquake Forecasting Procedure Due to Diurnal Stress Transfer by the Core to the Crust

Authors: Hassan Gholibeigian, Kazem Gholibeigian

Abstract:

In this paper, our goal is to determine loading versus time in the crust. To this end, we present a computational procedure that produces a cumulative strain energy time profile which can be used to predict the approximate location and time of the next major earthquake (M > 4.5) along a specific fault, and which we believe is more accurate than many of the methods presently in use. In the coming pages, after a short review of the research presently going on in the area of earthquake analysis and prediction, earthquake mechanisms in both the jerk and sequence-earthquake directions are discussed; then our computational procedure is presented using the differential equations of equilibrium that govern the nonlinear dynamic response of a system of finite elements, modified with an extra term to account for the jerk produced during the quake. We then employ the von Mises model for the stress-strain relationship in our calculations, modified with an extra term to account for thermal effects. For the calculation of the strain energy, the Pulsating Mantle Hypothesis (PMH) is used. This hypothesis, in brief, states that the mantle is under diurnal cyclic pulsating loads due to the unbalanced gravitational attraction of the sun and the moon. The Denali fault is briefly discussed as a case study. The cumulative strain energy is then graphically represented versus time. Finally, based on some hypothetical earthquake data, the results are verified.

Keywords: pulsating mantle hypothesis, inner core’s dislocation, outer core’s bulge, constitutive model, transient hydro-magneto-thermo-mechanical load, diurnal stress, jerk, fault behaviour

Procedia PDF Downloads 276
16907 The Effect of Elapsed Time on the Cardiac Troponin-T Degradation and Its Utility as a Time Since Death Marker in Cases of Death Due to Burn

Authors: Sachil Kumar, Anoop K.Verma, Uma Shankar Singh

Abstract:

It is extremely important to study the postmortem interval in different causes of death, since it often greatly assists in forming an opinion on the exact cause of death following such an incident. With diligent knowledge of the interval, an expert can state that the cause of death is not feigned; hence there is a great need, when evaluating such a death, to have examined the crime scene before performing the autopsy on the body. The approach described here is based on analyzing the degradation or proteolysis of a cardiac protein in cases of death due to burns as a marker of time since death. Cardiac tissue samples were collected from six (n = 6) medico-legal autopsies at the Department of Forensic Medicine and Toxicology, King George's Medical University, Lucknow, India, after informed consent from the relatives, and postmortem degradation was studied by incubating the cardiac tissue at room temperature (20 ± 2 °C) for different time periods (~7.30, 18.20, 30.30, 41.20, 41.40, 54.30, 65.20, and 88.40 hours). The cases included were burn victims without any prior history of disease who died in the hospital and whose exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE) and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using a Gel Doc system. As time postmortem progresses, the intact cTnT band degrades to fragments that are easily detected by the monoclonal antibodies. A decreasing trend in the level of cTnT (% intact) was found as the postmortem hours increased. A significant difference was observed between <15 h and other postmortem hours (p < 0.01). A significant difference in cTnT level (% intact) was also observed between 16-25 h and both 56-65 h and >75 h (p < 0.01). Western blot data clearly showed the intact protein at 42 kDa, three major fragments (28 kDa, 30 kDa, 10 kDa), three additional minor fragments (12 kDa, 14 kDa, and 15 kDa) and the formation of low-molecular-weight fragments. Overall, both the PMI and the burned state of the cardiac tissue had a statistically significant effect; the greatest amount of protein breakdown was observed within the first 41.40 hours, after which the intact protein slowly disappears. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, then the percent intact cTnT shows a pseudo-first-order relationship when plotted against time postmortem. A strong, significant positive correlation was found between cTnT and postmortem hours (r = 0.87, p = 0.0001). The regression analysis showed that a good proportion of the variability was explained (R² = 0.768). The postmortem Troponin-T fragmentation observed in this study reveals a sequential, time-dependent process with the potential for use as a predictor of PMI in cases of burning.
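
To illustrate how a pseudo-first-order decay could be fitted and then inverted to estimate PMI, the sketch below uses SciPy's curve_fit; only the sampling times come from the study design, while the percent-intact values are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_first_order(t, k):
    """Percent intact cTnT assuming first-order decay from 100% at death."""
    return 100.0 * np.exp(-k * t)

# Postmortem intervals (h) from the study design; the % intact values below are
# illustrative placeholders, not the measured densitometry data
t_pm = np.array([7.3, 18.2, 30.3, 41.2, 41.4, 54.3, 65.2, 88.4])
intact = np.array([92.0, 80.0, 66.0, 55.0, 54.0, 43.0, 34.0, 20.0])

(k,), _ = curve_fit(pseudo_first_order, t_pm, intact, p0=[0.01])

# Inverting the fitted model gives an estimated PMI from a new measurement
def estimate_pmi(percent_intact):
    return -np.log(percent_intact / 100.0) / k

print(f"k = {k:.4f} per hour, PMI at 50% intact ~ {estimate_pmi(50):.1f} h")
```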

Keywords: burn, degradation, postmortem interval, troponin-T

Procedia PDF Downloads 449
16906 Structural Health Monitoring and Damage Structural Identification Using Dynamic Response

Authors: Reza Behboodian

Abstract:

Monitoring structural health and diagnosing damage at an early stage has always been a topic of concern. Nowadays, research on structural damage detection methods based on vibration analysis is very extensive. These methods can be used for permanent and timely inspection of structures and can prevent further damage. Non-destructive methods are low-cost and economical methods for determining damage in structures. In this research, a non-destructive method for detecting and identifying the failure location in structures based on dynamic responses resulting from time-history analysis is proposed. When a structure is damaged, its stiffness is reduced and, under the applied loads, the displacements in different parts of the structure increase. In the proposed method, the damage position is determined based on the difference in strain energy of each member between the damaged structure and the healthy structure at any time instant. Defective members of the structure are indicated by the amount of strain energy relative to the healthy state. The results indicate the good accuracy and performance of the proposed method in identifying failure in structures.
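
A minimal sketch of the member-wise strain-energy comparison, assuming element stiffness matrices and nodal displacements from a time-history analysis are available; the two-member spring example is illustrative only.

```python
import numpy as np

def member_strain_energy(k_e, d_e):
    """Elastic strain energy of one member: U = 0.5 * d^T K d."""
    d_e = np.asarray(d_e, dtype=float)
    return 0.5 * d_e @ np.asarray(k_e, dtype=float) @ d_e

def damage_indices(K_members, d_healthy, d_damaged):
    """Relative strain-energy change per member at one time step of the
    time-history response; the largest values flag likely damaged members."""
    idx = []
    for k_e, dh, dd in zip(K_members, d_healthy, d_damaged):
        u_h = member_strain_energy(k_e, dh)
        u_d = member_strain_energy(k_e, dd)
        idx.append((u_d - u_h) / u_h if u_h > 0 else 0.0)
    return np.array(idx)

# Two-member toy example (axial springs); displacements from one time-history step
K_members = [np.array([[2.0, -2.0], [-2.0, 2.0]]), np.array([[1.0, -1.0], [-1.0, 1.0]])]
d_healthy = [np.array([0.0, 0.010]), np.array([0.010, 0.018])]
d_damaged = [np.array([0.0, 0.011]), np.array([0.011, 0.030])]   # member 2 softened
print(damage_indices(K_members, d_healthy, d_damaged))
```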

Keywords: failure, time history analysis, dynamic response, strain energy

Procedia PDF Downloads 133
16905 Influence of the Compression Force and Powder Particle Size on Some Physical Properties of Date (Phoenix dactylifera) Tablets

Authors: Djemaa Megdoud, Messaoud Boudaa, Fatima Ouamrane, Salem Benamara

Abstract:

In recent years, the compression of date (Phoenix dactylifera L.) fruit powders (DP) to obtain date tablets (DT) has been suggested as a promising form of valorization of non-commercial but valuable date fruit (DF) varieties. To further improve and characterize DT, the present study investigates the influence of the DP particle size and compression force on some physical properties of DT. The results show that, independently of particle size, the hardness (y) of the tablets increases with compression force (x) following a logarithmic law, y = a ln(bx), where a and b are model constants. Further, a two-level full factorial design (FFD) applied to the erosion percentage reveals that the effects of time and particle size are equal in absolute value and exceed the effect of compression. Regarding the disintegration time, results obtained by means of an FFD show that the effect of compression force is more than four times that of the DP particle size. Finally, the color parameters of DT in the CIELab system, measured immediately after production, are differently influenced by the particle size of the initial powder.
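
A short sketch of fitting the reported logarithmic hardness law y = a ln(bx) with SciPy; the force and hardness values are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def hardness_model(x, a, b):
    """Hardness as a function of compression force: y = a * ln(b * x)."""
    return a * np.log(b * x)

# Illustrative compression forces (kN) and tablet hardness values (N)
force = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
hardness = np.array([38.0, 55.0, 64.0, 71.0, 76.0])

(a, b), _ = curve_fit(hardness_model, force, hardness, p0=[20.0, 0.5])
print(f"a = {a:.2f}, b = {b:.3f}")
```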

Keywords: powder, tablets, date (Phoenix dactylifera L.), hardness, erosion, disintegration time, color

Procedia PDF Downloads 430
16904 The Lean Manufacturing Practices in an Automotive Company Using Value Stream Mapping Technique

Authors: Seher Arslankaya, Merve Si̇mge Usuk

Abstract:

Lean manufacturing, which is based on the Toyota Production System, focuses on increasing performance in various fields by eliminating waste. Through waste elimination, the lead time is reduced significantly, and lean manufacturing provides companies with an important advantage under today's competitive conditions. The starting point of lean thinking is value: a specific product with specific properties for which the customer is ready to pay and which satisfies the customer's needs within a specific time frame and at a specific price. In this view, the final customer determines the value, but the manufacturer creates the value of the product. The value stream is the whole set of activities required for each product, and these activities may or may not be essential for creating value. Through value stream mapping, all employees can see the sources of waste and develop future states to eliminate it. This study focused on eliminating waste that created cost but did not create any value. The study was carried out at the Assembly/Logistics Department of Toyota Motor Manufacturing Turkey, an automotive plant with a high product mix and variable demand. As a result of the value stream analysis, improvements were planned for the future state, and the process was improved by applying these suggestions.

Keywords: lead time, lean manufacturing, performance improvement, value stream mapping

Procedia PDF Downloads 311
16903 A Real Time Set Up for Retrieval of Emotional States from Human Neural Responses

Authors: Rashima Mahajan, Dipali Bansal, Shweta Singh

Abstract:

Real-time non-invasive Brain Computer Interfaces have a significant progressive role in restoring or maintaining a quality of life for medically challenged people. This manuscript provides a comprehensive review of emerging research in the field of cognitive/affective computing in the context of human neural responses. The perspectives of different emotion assessment modalities, such as facial expressions, speech, text, gestures, and human physiological responses, are also discussed. Focus is placed on the ability of EEG (electroencephalogram) signals to portray thoughts, feelings, and unspoken words. An automated workflow-based protocol is proposed for designing an EEG-based real-time Brain Computer Interface system for the analysis and classification of human emotions elicited by external audio/visual stimuli. The front-end hardware includes a cost-effective and portable Emotiv EEG neuroheadset unit, a personal computer and a set of external stimulators. Primary signal analysis and processing of the real-time acquired EEG shall be performed using the MATLAB-based advanced brain mapping toolboxes EEGLab/BCILab. This shall be followed by the development of a self-defined MATLAB-based algorithm to capture and characterize temporal and spectral variations in EEG under emotional stimulation. The extracted hybrid feature set shall be used to classify emotional states using artificial intelligence tools such as artificial neural networks. The final system would result in an inexpensive, portable, and more intuitive real-time Brain Computer Interface to control prosthetic devices by translating different brain states into operative control signals.
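
As a rough sketch of the spectral-feature-plus-neural-network pipeline described above, the code below extracts band powers with SciPy and classifies them with a small scikit-learn network; the sampling rate, channel count and random data are assumptions standing in for Emotiv recordings, and the study itself targets a MATLAB/EEGLab workflow.

```python
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

FS = 128  # assumed headset sampling rate in Hz

def band_powers(epoch, fs=FS):
    """Spectral features per channel: mean power in theta, alpha and beta bands."""
    f, pxx = welch(epoch, fs=fs, nperseg=fs, axis=-1)
    bands = [(4, 8), (8, 13), (13, 30)]
    return np.hstack([pxx[:, (f >= lo) & (f < hi)].mean(axis=1) for lo, hi in bands])

# Fake training set: 100 epochs x 14 channels x 2 s of EEG, with two emotion labels
rng = np.random.default_rng(1)
epochs = rng.standard_normal((100, 14, 2 * FS))
labels = rng.integers(0, 2, size=100)

X = np.array([band_powers(e) for e in epochs])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```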

Keywords: brain computer interface, electroencephalogram, EEGLab, BCILab, emotive, emotions, interval features, spectral features, artificial neural network, control applications

Procedia PDF Downloads 317
16902 Real-Time Kinetic Analysis of Labor-Intensive Repetitive Tasks Using Depth-Sensing Camera

Authors: Sudip Subedi, Nipesh Pradhananga

Abstract:

Musculoskeletal disorders (MSDs) are common in construction workers. MSDs include lower back injuries, knee injuries, spinal injuries, and joint injuries, among others. Since most construction tasks are still manual, construction workers often need to perform repetitive, labor-intensive tasks, and they need to stay in the same or an awkward posture for an extended time while doing so. This induces significant stress on the joints and spine, increasing the risk of developing MSDs. Manual monitoring of such tasks is virtually impossible with the handful of safety managers on a construction site. This paper proposes a methodology for performing kinetic analysis of working postures during such tasks in real time. The skeletons of different workers will be tracked using a depth-sensing camera while they perform the task, to create training data for identifying the best posture. For this, the kinetic analysis will be performed using a human musculoskeletal model in an open-source software system (OpenSim) to visualize the stress induced at essential joints. The "safe posture" inducing the lowest stress on essential joints will be computed for the different actions involved in the task. The identified "safe posture" will serve as a basis for real-time monitoring and identification of awkward and unsafe postural behaviors of construction workers. In addition, a temporal simulation will be carried out to find the long-term effect of repetitive exposure to the observed postures. This will help to create awareness in workers about potential future health hazards and encourage them to work safely. Furthermore, the collected individual data can then be used to provide need-based, personalized training to construction workers.

Keywords: construction workers’ safety, depth sensing camera, human body kinetics, musculoskeletal disorders, real time monitoring, repetitive labor-intensive tasks

Procedia PDF Downloads 130
16901 Improving Fused Deposition Modeling Efficiency: A Parameter Optimization Approach

Authors: Wadea Ameen

Abstract:

Rapid prototyping (RP) technology, such as fused deposition modeling (FDM), is gaining popularity because it can produce functioning components with intricate geometric patterns in a reasonable amount of time. A multitude of process variables influences the quality of manufactured parts. In this study, four important process parameters are considered: layer thickness, model interior fill style, support fill style and orientation. Their influence on three responses, namely build time, model material, and support material, is studied. Experiments are conducted based on a factorial design, and the results are presented.
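
A small sketch of enumerating full factorial runs for the four parameters; the specific levels listed are illustrative assumptions, not the study's settings.

```python
from itertools import product

# Factor levels (illustrative; interior/support style names are placeholders)
layer_thickness = [0.18, 0.25, 0.33]          # mm
interior_fill   = ["solid", "sparse"]
support_fill    = ["basic", "smart"]
orientation     = [0, 45, 90]                 # degrees

runs = list(product(layer_thickness, interior_fill, support_fill, orientation))
print(len(runs), "factorial runs")            # 3 x 2 x 2 x 3 = 36 builds
for thickness, interior, support, angle in runs[:3]:
    print(thickness, interior, support, angle)
```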

Keywords: fused deposition modeling, factorial design, optimization, 3D printing

Procedia PDF Downloads 21
16900 Manufacturing Anomaly Detection Using a Combination of Gated Recurrent Unit Network and Random Forest Algorithm

Authors: Atinkut Atinafu Yilma, Eyob Messele Sefene

Abstract:

Anomaly detection is one of the essential mechanisms to control and reduce production loss, especially in today's smart manufacturing. Quick anomaly detection helps reduce the cost of production by minimizing the possibility of producing defective products. However, developing an anomaly detection model that can rapidly detect a production change is challenging. This paper proposes a Gated Recurrent Unit (GRU) network combined with a Random Forest (RF) to detect anomalies in the production process quickly and in real time. The GRU is used as a feature detector, and the RF as a classifier operating on the features produced by the GRU. The model was tested against benchmark methods using various synthetic and real-world datasets. The results show that the proposed GRU-RF outperforms the benchmark methods, with the shortest time taken to detect anomalies in the production process. Based on the findings of the study, the proposed model can eliminate or reduce unnecessary production costs and bring a competitive advantage to manufacturing industries.
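
A minimal sketch of the GRU-as-feature-detector plus Random Forest idea, assuming TensorFlow/Keras and scikit-learn; the window size, network size and synthetic data are placeholders, not the study's configuration.

```python
import numpy as np
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import GRU, Dense
from sklearn.ensemble import RandomForestClassifier

T, F = 50, 4                                    # window length and sensor channels
X = np.random.randn(600, T, F)                  # stand-in multivariate time-series windows
y = (np.abs(X).mean(axis=(1, 2)) > 0.82).astype(int)  # synthetic anomaly labels

# GRU trained as a feature detector (here via an auxiliary classification head)
gru = Sequential([GRU(16, input_shape=(T, F)), Dense(1, activation="sigmoid")])
gru.compile(optimizer="adam", loss="binary_crossentropy")
gru.fit(X, y, epochs=3, batch_size=32, verbose=0)

# Use the GRU hidden representation as input features for the Random Forest
encoder = Model(inputs=gru.inputs, outputs=gru.layers[0].output)
features = encoder.predict(X, verbose=0)
rf = RandomForestClassifier(n_estimators=200).fit(features, y)
print("RF accuracy on training windows:", rf.score(features, y))
```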

Keywords: anomaly detection, multivariate time series data, smart manufacturing, gated recurrent unit network, random forest

Procedia PDF Downloads 118
16899 The Optical OFDM Equalization Based on the Fractional Fourier Transform

Authors: A. Cherifi, B. S. Bouazza, A. O. Dahman, B. Yagoubi

Abstract:

Transmission over optical channels introduces inter-symbol interference (ISI) as well as inter-channel (or inter-carrier) interference (ICI). To decrease the effects of ICI, this paper proposes an equalizer for the optical OFDM system based on the fractional Fourier transform (FrFT). In this FrFT-OFDM system, the traditional Fourier transform is replaced by the fractional Fourier transform to modulate and demodulate the data symbols. The proposed equalizer samples the received signal at different time instants within each symbol period. Theoretical analysis and numerical simulation are discussed.
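
Since the abstract does not specify the fractional-transform kernel, the sketch below shows a conventional OFDM modulate/equalize/demodulate chain in NumPy and marks where the IFFT/FFT pair would be replaced by the fractional Fourier transform of order a in the proposed system; the channel and noise values are illustrative.

```python
import numpy as np

N, CP = 64, 16                                   # subcarriers and cyclic prefix length
rng = np.random.default_rng(2)
symbols = (2 * rng.integers(0, 2, N) - 1) + 1j * (2 * rng.integers(0, 2, N) - 1)  # QPSK

# OFDM modulation: in the FrFT-OFDM system the IFFT/FFT pair below would be
# replaced by inverse/forward fractional Fourier transforms of order a
tx = np.fft.ifft(symbols)
tx_cp = np.concatenate([tx[-CP:], tx])           # add cyclic prefix

h = np.array([0.9, 0.3 + 0.2j, 0.1])             # short multipath channel (illustrative)
rx = np.convolve(tx_cp, h)[:N + CP]
rx += 0.01 * (rng.standard_normal(N + CP) + 1j * rng.standard_normal(N + CP))

# Demodulation and one-tap per-subcarrier equalization
rx_no_cp = rx[CP:CP + N]
Y = np.fft.fft(rx_no_cp)
H = np.fft.fft(h, N)
equalized = Y / H
print("symbol error count:", np.sum(np.sign(equalized.real) != np.sign(symbols.real)))
```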

Keywords: OFDM, fractional Fourier transform, internet and information technology

Procedia PDF Downloads 406