Search results for: exponentially weighted moving average (EWMA)
5659 Analysis of Real Time Seismic Signal Dataset Using Machine Learning
Authors: Sujata Kulkarni, Udhav Bhosle, Vijaykumar T.
Abstract:
Because seismic and non-seismic signals closely resemble each other, it is difficult to detect earthquakes using conventional methods. In order to distinguish between seismic and non-seismic events based on their amplitude, our study processes data coming from seismic sensors. The authors suggest a robust noise suppression technique that makes use of a bandpass filter, an IIR Wiener filter, recursive short-term average/long-term average (STA/LTA), and Carl STA/LTA for event identification. The trigger ratio used in the proposed study to differentiate between seismic and non-seismic activity is determined. The proposed work focuses on extracting significant features for machine learning-based seismic event detection. This serves as motivation for compiling a dataset of all features for the identification and forecasting of seismic signals. We place a focus on feature vector dimension reduction techniques due to the temporal complexity. The proposed notable features were experimentally tested using a machine learning model, and the results on unseen data are optimal. Finally, a presentation using a hybrid dataset (captured by different sensors) demonstrates how this model may also be employed in a real-time setting while lowering false alarm rates. The planned study is based on the examination of seismic signals obtained from both individual sensors and sensor networks (SN). A wideband seismic signal from BSVK and CUKG station sensors, respectively located near Basavakalyan, Karnataka, and the Central University of Karnataka, makes up the experimental dataset.
Keywords: Carl STA/LTA, feature extraction, real time, dataset, machine learning, seismic detection
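As an aside on the recursive STA/LTA trigger the abstract mentions: it can be sketched as the ratio of two exponentially weighted moving averages of signal energy. The window lengths and the synthetic quiet-plus-burst signal below are illustrative assumptions, not parameters from the study.

```python
def recursive_sta_lta(samples, sta_window, lta_window):
    """Ratio of a fast (STA) to a slow (LTA) exponentially weighted
    moving average of instantaneous signal energy; the ratio spikes
    when an event arrives."""
    a_sta = 1.0 / sta_window     # fast smoothing factor
    a_lta = 1.0 / lta_window     # slow smoothing factor
    sta = lta = 1e-12            # tiny seed avoids division by zero
    ratios = []
    for x in samples:
        e = x * x                # instantaneous energy
        sta = a_sta * e + (1.0 - a_sta) * sta
        lta = a_lta * e + (1.0 - a_lta) * lta
        ratios.append(sta / lta)
    return ratios

# Quiet background followed by a burst: the trigger ratio rises sharply.
signal = [0.01] * 200 + [1.0] * 20
ratio = recursive_sta_lta(signal, sta_window=5, lta_window=100)
```

A detector built on this would declare an event when the ratio crosses a chosen trigger threshold and de-trigger when it falls back below a lower one.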
Procedia PDF Downloads 127
5658 Treatment of Poultry Slaughterhouse Wastewater by Mesophilic Static Granular Bed Reactor (SGBR) Coupled with UF Membrane
Authors: Moses Basitere, Marshal Sherene Sheldon, Seteno Karabo Obed Ntwampe, Debbie Dejager
Abstract:
In South Africa, poultry slaughterhouses consume the largest amount of freshwater and discharge high-strength wastewater, which can be treated successfully at low cost using anaerobic digesters. In this study, the performance of a bench-scale mesophilic Static Granular Bed Reactor (SGBR) containing fully anaerobic granules, coupled with an ultrafiltration (UF) membrane as post-treatment for poultry slaughterhouse wastewater, was investigated. The poultry slaughterhouse wastewater was characterized by a chemical oxygen demand (COD) range between 2000 and 6000 mg/l, an average biological oxygen demand (BOD) of 2375 mg/l, and average fats, oil, and grease (FOG) of 554 mg/l. A continuous SGBR anaerobic reactor was operated for 6 weeks at different hydraulic retention times (HRT) and organic loading rates. The results showed an average COD removal greater than 90% for both the SGBR anaerobic digester and the ultrafiltration membrane. The total suspended solids and FOG removal was greater than 95%. The SGBR reactor coupled with the UF membrane showed great potential to treat poultry slaughterhouse wastewater.
Keywords: chemical oxygen demand, poultry slaughterhouse wastewater, static granular bed reactor, ultrafiltration, wastewater
Procedia PDF Downloads 387
5657 Piezoelectric Micro-generator Characterization for Energy Harvesting Application
Authors: José E. Q. Souza, Marcio Fontana, Antonio C. C. Lima
Abstract:
This paper presents the analysis and characterization of a piezoelectric micro-generator for energy harvesting applications. A low-cost experimental prototype was designed to operate as a piezoelectric micro-generator in the laboratory. An input acceleration of 9.8 m/s² using a sine signal (peak-to-peak voltage: 1 V, offset voltage: 0 V) at frequencies ranging from 10 Hz to 160 Hz generated a maximum average power of 432.4 μW (linear mass position = 25 mm) and an average power of 543.3 μW (angular mass position = 35°). These promising results show that the prototype can be considered for low-consumption load applications as an energy harvesting micro-generator.
Keywords: piezoelectric, micro-generator, energy harvesting, cantilever beam
Procedia PDF Downloads 467
5656 Digital Twin Strategies and Technologies for Modern Supply Chains
Authors: Mayank Sharma, Anubhaw Kumar, Siddharth Desai, Ankit Tomar
Abstract:
With the advent of cost-effective hardware and communication technologies, the scope for digitalising operations within a supply chain has increased tremendously. This has provided the opportunity to create digital twins of entire supply chains through the use of Internet-of-Things (IoT) and communication technologies. Adverse events like the COVID-19 pandemic and unpredictable geopolitical situations have further underlined the importance of digitalization and remote operability of day-to-day operations at critical nodes. Globalisation, rising consumerism, and e-commerce have exponentially increased the complexity of existing supply chains. We discuss here a scalable, future-ready, and inclusive framework for creating digital twins, developed along with industry leaders from Cisco, Bosch, Accenture, Intel, Deloitte, and IBM. We propose field-tested key technologies and frameworks required for creating digital twins. We also present case studies of real-life stable deployments done by us in the supply chains of a few marquee industry leaders.
Keywords: internet-of-things, digital twins, smart factory, industry 4.0, smart manufacturing
Procedia PDF Downloads 96
5655 Human Health Risks Assessment of Particulate Air Pollution in Romania
Authors: Katalin Bodor, Zsolt Bodor, Robert Szep
Abstract:
Particulate matter (PM) smaller than 2.5 μm is less studied due to the limited availability of PM₂.₅ measurements, and little information is available on the health effects attributable to PM₁₀ in Central-Eastern Europe. The objective of the current study was to assess the human health risk and characterize the spatial and temporal variation of PM₂.₅ and PM₁₀ in eight Romanian regions between 2009 and 2018. The PM concentrations showed high variability over time and across regions. The highest concentration was detected in the Bucharest region in the winter period, and the lowest was detected in the West region. The relative risk caused by PM₁₀ for all-cause mortality varied between 1.017 (Bucharest) and 1.025 (West), with an average of 1.020. The results demonstrate a positive relative risk of cardiopulmonary and lung cancer disease due to exposure to PM₂.₅, with national averages of 1.26 (±0.023) and 1.42 (±0.037), respectively.
Keywords: PM₂.₅, PM₁₀, relative risk, health effect
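Relative risks of this kind are commonly derived from a log-linear concentration-response function. A minimal sketch follows; the coefficient `beta` and the concentration values are hypothetical placeholders, not values from this study.

```python
import math

def relative_risk(conc, baseline_conc, beta):
    """Log-linear concentration-response: RR = exp(beta * (C - C0)),
    clamped to RR = 1 when the concentration is at or below baseline."""
    return math.exp(beta * max(conc - baseline_conc, 0.0))

# Hypothetical example: 50 ug/m3 observed vs. a 10 ug/m3 baseline,
# with an assumed log-RR coefficient of 0.0005 per ug/m3.
rr = relative_risk(50.0, 10.0, 0.0005)
```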
Procedia PDF Downloads 162
5654 Different Approaches to Teaching a Database Course to Undergraduate and Graduate Students
Authors: Samah Senbel
Abstract:
Database design is a fundamental part of the Computer Science and Information Technology curricula in any school, as well as in the study of management, business administration, and data analytics. In this study, we compare the performance of two groups of students studying the same database design and implementation course at Sacred Heart University in the fall of 2018. Both courses used the same textbook and were taught by the same professor, one for seven graduate students and one for 26 undergraduate students (juniors). The undergraduate students were aged around 20 years old with little work experience, while the graduate students averaged 35 years old and all were employed in computer-related or management-related jobs. The textbook used was 'Database Systems, Design, Implementation, and Management' by Coronel and Morris, and the course was designed to follow the textbook at roughly a chapter per week. The first 6 weeks covered the design aspect of a database, followed by a paper exam. The next 6 weeks covered the implementation aspect of the database using SQL, followed by a lab exam. Since the undergraduate students are on a 16-week semester, we spend the last three weeks of the course covering NoSQL. This part of the course was not included in this study. After the course was over, we analyzed the results of the two groups of students. An interesting discrepancy was observed: in the database design part of the course, the average grade of the graduate students was 92%, while that of the undergraduate students was 77% for the same exam. In the implementation part of the course, we observe the opposite: the average grade of the graduate students was 65%, while that of the undergraduate students was 73%. The overall grades were quite similar: the graduate average was 78% and that of the undergraduates was 75%. Based on these results, we concluded that having both classes follow the same time schedule was not beneficial, and an adjustment is needed.
The graduates could spend less time on design, and the undergraduates would benefit from more design time. In the fall of 2019, 30 students registered for the undergraduate course and 15 students registered for the graduate course. To test our conclusion, the undergraduates spent about 67% of the time (eight classes) on the design part of the course and 33% (four classes) on the implementation part, using the exact same exams as the previous year. This resulted in an improvement in their average grade on the design part from 77% to 83%, and in their implementation average grade from 73% to 79%. In conclusion, we recommend using two separate schedules for teaching the database design course. For undergraduate students, it is important to spend more time on the design part rather than the implementation part of the course. For the older graduate students, we recommend spending more time on the implementation part, as that seems to be the part they struggle with, even though they have a higher understanding of the design component of databases.
Keywords: computer science education, database design, graduate and undergraduate students, pedagogy
Procedia PDF Downloads 123
5653 Modeling Average Paths Traveled by Ferry Vessels Using AIS Data
Authors: Devin Simmons
Abstract:
At the USDOT’s Bureau of Transportation Statistics, a biannual census of ferry operators in the U.S. is conducted, with results such as route mileage used to determine federal funding levels for operators. AIS data allows for the possibility of using GIS software and geographical methods to confirm operator-reported mileage for individual ferry routes. As part of the USDOT’s work on the ferry census, an algorithm was developed that uses AIS data for ferry vessels in conjunction with known ferry terminal locations to model the average route travelled for use as both a cartographic product and confirmation of operator-reported mileage. AIS data from each vessel is first analyzed to determine individual journeys based on the vessel’s velocity, and changes in velocity over time. These trips are then converted to geographic linestring objects. Using the terminal locations, the algorithm then determines whether the trip represented a known ferry route. Given a large enough dataset, routes will be represented by multiple trip linestrings, which are then filtered by DBSCAN spatial clustering to remove outliers. Finally, these remaining trips are ready to be averaged into one route. The algorithm interpolates the point on each trip linestring that represents the start point. From these start points, a centroid is calculated, and the first point of the average route is determined. Each trip is interpolated again to find the point that represents one percent of the journey’s completion, and the centroid of those points is used as the next point in the average route, and so on until 100 points have been calculated. Routes created using this algorithm have shown demonstrable improvement over previous methods, which included the implementation of a LOESS model. Additionally, the algorithm greatly reduces the amount of manual digitizing needed to visualize ferry activity.
Keywords: ferry vessels, transportation, modeling, AIS data
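The averaging step described above, interpolating each trip at the same fraction of its length and taking the centroid of those points, can be sketched in plain Python. The study itself works with GIS linestrings; the coordinates below are toy values.

```python
import math

def interpolate_along(line, frac):
    """Point at fraction `frac` (0..1) of the line's cumulative length."""
    seglens = [math.dist(line[i], line[i + 1]) for i in range(len(line) - 1)]
    target = frac * sum(seglens)
    run = 0.0
    for (p, q), seg in zip(zip(line, line[1:]), seglens):
        if seg > 0 and run + seg >= target:
            t = (target - run) / seg   # linear interpolation within segment
            return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
        run += seg
    return line[-1]

def average_route(trips, n_points=100):
    """Centroid of same-fraction points across all trip linestrings."""
    route = []
    for k in range(n_points + 1):
        f = k / n_points
        pts = [interpolate_along(trip, f) for trip in trips]
        route.append((sum(p[0] for p in pts) / len(pts),
                      sum(p[1] for p in pts) / len(pts)))
    return route

# Two parallel recorded trips average to the line halfway between them.
trips = [[(0.0, 0.0), (10.0, 0.0)], [(0.0, 2.0), (10.0, 2.0)]]
route = average_route(trips, n_points=10)
```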
Procedia PDF Downloads 178
5652 Batteryless DCM Boost Converter for Kinetic Energy Harvesting Applications
Authors: Andrés Gomez-Casseres, Rubén Contreras
Abstract:
In this paper, a bidirectional boost converter operated in Discontinuous Conduction Mode (DCM) is presented as a suitable power conditioning circuit for tuning kinetic energy harvesters without the need for a battery. A nonlinear control scheme, composed of two linear controllers, is used to control the average value of the input current, enabling the synthesis of complex loads. The converter, along with the control system, is validated through SPICE simulations using the LTspice tool. The converter model and the controller transfer functions are derived. From the simulation results, it was found that the input current distortion increases with the introduced phase shift and that such distortion is almost entirely present at the zero-crossing point of the input voltage.
Keywords: average current control, boost converter, electrical tuning, energy harvesting
Procedia PDF Downloads 763
5651 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria
Authors: Isaac Kayode Ogunlade
Abstract:
Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ is designed using a PIC18F4550 microcontroller, communicating with a Personal Computer (PC) through USB (Universal Serial Bus). The research deployed initial knowledge of data acquisition and embedded systems to develop a weather data acquisition device using an LM35 sensor to measure weather parameters, and used artificial intelligence (an Artificial Neural Network, ANN) and a statistical approach (Autoregressive Integrated Moving Average, ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) for performance evaluation. Both devices (standard and designed) were subjected to 180 days under the same atmospheric conditions for data mining (temperature, relative humidity, and pressure). The acquired data were trained in the MATLAB R2012b environment using ANN and ARIMA to predict precipitation (rainfall). Root Mean Square Error (RMSE), Mean Absolute Error (MAE), the coefficient of determination (R²), and Mean Percentage Error (MPE) were deployed as standard evaluation metrics to assess the performance of the models in predicting precipitation. The results from the working of the developed device show that the device has an efficiency of 96% and is also compatible with personal computers and laptops. The simulation results for the acquired data show that the ANN model's precipitation (rainfall) prediction for two months (May and June 2017) revealed a disparity error of 1.59%, while ARIMA's was 2.63%. The device will be useful in research, practical laboratories, and industrial environments.
Keywords: data acquisition system, device design, weather, precipitation prediction, (FUTA) standard device
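The evaluation metrics named above are standard. A minimal pure-Python sketch (with made-up observed/predicted values, not the study's data) shows how RMSE, MAE, and MPE are computed.

```python
import math

def evaluation_metrics(observed, predicted):
    """RMSE, MAE, and mean percentage error of predictions vs. observations."""
    n = len(observed)
    errors = [o - p for o, p in zip(observed, predicted)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    mpe = 100.0 * sum(e / o for e, o in zip(errors, observed)) / n
    return rmse, mae, mpe

# Toy rainfall values (mm): observed vs. model output.
rmse, mae, mpe = evaluation_metrics([2.0, 4.0], [1.0, 5.0])
```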
Procedia PDF Downloads 93
5650 A Performance Analysis of Different Scheduling Schemes in WiMAX
Authors: A. Youseef
Abstract:
One of the main aims of IEEE 802.16 (WiMAX) is to provide high-speed wireless access with wide-range coverage. The base station (BS) and the subscriber station (SS) are the main parts of WiMAX. WiMAX uses either Point-to-Multipoint (PMP) or mesh topologies. In the PMP mode, the SSs connect to the BS to gain access to the network. However, in the mesh mode, the SSs connect to each other to gain access to the BS. The main components of QoS management in the 802.16 standard are admission control, buffer management, and packet scheduling. Several efficient packet scheduling schemes have been proposed in the literature. Therefore, we use QualNet 5.0.2 to study the performance of different scheduling schemes, such as WFQ, SCFQ, RR, and SP, as the number of SSs increases. We find that when the number of SSs increases, the average jitter and average end-to-end delay increase and the throughput is reduced.
Keywords: WiMAX, scheduling scheme, QoS, QualNet
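Of the schemes compared, round-robin (RR) is the simplest to illustrate: each subscriber station's queue is served one packet per cycle. This is a generic sketch of the idea, not QualNet's implementation, and the queue contents are placeholders.

```python
from collections import deque

def round_robin(queues, slots):
    """Serve at most one packet per SS queue per cycle until the
    transmission slots run out or all queues are empty."""
    qs = [deque(q) for q in queues]
    served = []
    while slots > 0 and any(qs):
        for q in qs:
            if slots == 0:
                break
            if q:
                served.append(q.popleft())
                slots -= 1
    return served

# Three subscriber stations share five transmission slots fairly.
order = round_robin([["a1", "a2"], ["b1"], ["c1", "c2"]], slots=5)
```

Weighted fair queuing (WFQ) and its self-clocked variant (SCFQ) refine this by serving queues in proportion to assigned weights rather than strictly one packet each.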
Procedia PDF Downloads 456
5649 Macroeconomic Effects and Dynamics of Natural Disaster Damages: Evidence from SETX on the Resiliency Hypothesis
Authors: Agim Kukelii, Gevorg Sargsyan
Abstract:
This study, focusing on the base regional area (county level), estimates the effect of natural disaster damages on aggregate personal income, aggregate wages, wages per worker, aggregate employment, and aggregate income transfers. The study further estimates the dynamics of personal income, employment, and wages under natural disaster shocks. Southeast Texas, located at the center of the Gulf Coast, is hit by meteorologically and hydrologically caused natural disasters yearly. On average, there are more than four natural disasters per year that cause estimated damage averaging 2.2% of real personal income. The study uses the panel data method to estimate the average effect of natural disasters on the area’s economy (personal income, wages, employment, and income transfers). It also uses a Panel Vector Autoregressive (PVAR) model to study the dynamics of macroeconomic variables under natural disaster shocks. The study finds that the average effect of natural disasters is positive for personal income and income transfers and negative for wages and employment. The PVAR and impulse response function estimates reveal that natural disaster shocks cause a decrease in personal income, employment, and wages. However, the economy’s variables bounce back after three years. The novelty of this study rests on several aspects. First, this is the first study to investigate the effects of natural disasters on macroeconomic variables at a regional level. Second, the study uses direct measures of natural disaster damages. Third, the study estimates that the time the local economy takes to absorb natural disaster damage shocks is three years. This is a relatively good reaction by the local economy, therefore adding to the “resiliency” hypothesis. The study has several implications for policymakers, businesses, and households.
First, this study serves to increase the awareness of local stakeholders that natural disaster damages worsen macroeconomic variables, such as personal income, employment, and wages, beyond the immediate damages to residential and commercial properties, physical infrastructure, and the discomfort of daily lives. Second, the study estimates that these effects linger in the economy for three years on average, which policymakers would need to factor in as the time the area needs to remain in focus.
Keywords: natural disaster damages, macroeconomic effects, PVAR, panel data
Procedia PDF Downloads 89
5648 Study of Climate Change Process on Hyrcanian Forests Using Dendroclimatology Indicators (Case Study of Guilan Province)
Authors: Farzad Shirzad, Bohlol Alijani, Mehry Akbary, Mohammad Saligheh
Abstract:
Climate change and global warming are very important issues today. The process of climate change, especially changes in temperature and precipitation, is the most important issue in the environmental sciences. Climate change means a change in the long-run averages. Iran is located in arid and semi-arid regions due to its proximity to the equator and its location in the subtropical high-pressure zone. In this respect, the Hyrcanian forest is a green necklace between the Caspian Sea and the south of the Alborz mountain range. In the forty-third session of UNESCO, it was registered as the second natural heritage site of Iran. Beech is one of the most important tree species and the most industrial species of the Hyrcanian forests. This research used dendroclimatology: tree-ring widths, climatic data (temperature and precipitation) from the Shanderman meteorological station located in the study area, the non-parametric Mann-Kendall statistical method to investigate the trend of climate change over a 202-year time series of growth rings, and the Pearson statistical method to correlate the ring-width growth of beech trees with climatic variables in the region. The results obtained from the time series of beech growth rings showed that the changes in ring widths had a downward, negative trend, significant at the 5% level, and that climate change has occurred. The average minimum, mean, and maximum temperatures and evaporation in the growing season had an increasing trend, and annual precipitation had a decreasing trend.
Fitting Pearson correlations of growth-ring diameter against climate showed that the correlation with the average temperatures of July, August, and September was negative, while the correlation with the average maximum temperature in February was positive and significant at the 95% level; the correlation with June precipitation was also positive and significant at the 95% level.
Keywords: climate change, dendroclimatology, Hyrcanian forest, beech
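The non-parametric Mann-Kendall test used in the study can be sketched in a few lines. This version omits the tie correction, and the 202-point ring-width series is replaced by a toy declining series.

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test: S statistic and normal-approximation
    Z score (no tie correction). |Z| > 1.96 is significant at the 5% level."""
    n = len(series)
    # S counts concordant minus discordant pairs over all i < j.
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A steadily declining series yields a significant negative trend,
# analogous to the downward ring-width trend reported above.
s_stat, z_score = mann_kendall(list(range(20, 0, -1)))
```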
Procedia PDF Downloads 104
5647 Evaluation of Aquifer Protective Capacity and Soil Corrosivity Using Geoelectrical Method
Authors: M. T. Tsepav, Y. Adamu, M. A. Umar
Abstract:
A geoelectric survey was carried out in parts of Angwan Gwari, an outskirt of Lapai Local Government Area of Niger State, which belongs to the Nigerian Basement Complex, with the aim of evaluating the soil corrosivity, aquifer transmissivity, and protective capacity of the area, from which aquifer characterisation was made. The G41 Resistivity Meter was employed to obtain fifteen Schlumberger Vertical Electrical Sounding (VES) data points along profiles in a square grid network. The data were processed using the Interpex 1-D sounding inversion software, which gives vertical electrical sounding curves with a layered model comprising apparent resistivities, overburden thicknesses, and depths. This information was used to evaluate the longitudinal conductance and transmissivity of the layers. The results show generally low resistivities across the survey area and an average longitudinal conductance varying from 0.0237 Siemens in VES 6 to 0.1261 Siemens in VES 15, with almost the entire area giving values less than 1.0 Siemens. The average transmissivity values range from 96.45 Ω·m² in VES 4 to 299070 Ω·m² in VES 1. All but VES 4 and VES 14 had an average overburden value greater than 400 Ω·m². These results suggest that the aquifers are highly permeable to fluid movement, leading to the possibility of enhanced migration and circulation of contaminants in the groundwater system, and that the area is generally corrosive.
Keywords: geoelectric survey, corrosivity, protective capacity, transmissivity
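The layer quantities evaluated here are the classical Dar Zarrouk parameters of a layered earth model. A minimal sketch with hypothetical layer values (not the VES results above) shows the arithmetic.

```python
def dar_zarrouk(layers):
    """Longitudinal conductance S = sum(h_i / rho_i) in Siemens and
    transverse resistance T = sum(h_i * rho_i) in ohm.m^2 for a stack
    of (thickness_m, resistivity_ohm_m) layers."""
    s = sum(h / rho for h, rho in layers)
    t = sum(h * rho for h, rho in layers)
    return s, t

# Hypothetical two-layer overburden: 10 m at 100 ohm.m over 5 m at 50 ohm.m.
S, T = dar_zarrouk([(10.0, 100.0), (5.0, 50.0)])
```

Published protective-capacity ratings commonly class S below about 0.1 Siemens as poor and values around 1.0 Siemens and above as good, which is why the survey's sub-1.0 values point to weak aquifer protection.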
Procedia PDF Downloads 339
5646 Flexural Response of Glass Fiber Reinforced Polymer Sandwich Panels with 3D Woven Honeycomb Core
Authors: Elif Kalkanli, Constantinos Soutis
Abstract:
The use of textile preforms in advanced fields, including aerospace, automotive, and marine, has grown exponentially in recent years. These preforms offer excellent advantages: they are lightweight and low-cost and suited to creating different fiber architectures with different materials, while improving mechanical properties in certain aspects. In this study, a novel honeycomb core is developed by a 3D weaving process. The assembly of the layers is achieved through an innovative weaving design. Polyester yarn is selected for the 3D woven honeycomb core (3DWHC). The core is used to manufacture a sandwich panel with 2x2 twill glass fiber composite face sheets. These 3DWHC sandwich panels will be tested in three-point bending. The in-plane and out-of-plane (through-the-thickness) mechanical response of the core will be examined as a function of cell size, in addition to the flexural response of the sandwich panel. The failure mechanisms of the core and the sandwich skins will be reported, in addition to flexural strength and stiffness. Possible engineering applications will be identified.
Keywords: 3D woven, assembly, failure modes, honeycomb sandwich panel
Procedia PDF Downloads 206
5645 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing
Authors: Tolulope Aremu
Abstract:
Key process steps in producing liquid detergent products, such as formulation, mixing, filling, and packaging, can introduce defects that compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are of prime importance for maintaining high standards and reducing waste and costs. Usually, defect detection is performed by human inspection or rule-based systems, which are time-consuming, inconsistent, and error-prone. The present study overcomes these limitations by optimizing defect characterization in the liquid detergent manufacturing process using machine learning algorithms. Performance testing of various machine learning models was carried out: Support Vector Machines (SVM), Decision Trees, Random Forests, and Convolutional Neural Networks (CNN) were applied to the detection and classification of defects such as wrong viscosity, color deviations, improper bottle filling, and packaging anomalies. These algorithms benefited significantly from a variety of optimization techniques, including hyperparameter tuning and ensemble learning, which greatly improved detection accuracy while minimizing false positives. Our study is equipped with a rich dataset of defect types and production parameters consisting of more than 100,000 samples, and further includes information from real-time sensor data, imaging technologies, and historic production records. The results show that optimized machine learning models significantly improve defect detection compared to traditional methods. For instance, the CNNs, fine-tuned with real-time imaging data, achieved 98% and 96% accuracy in packaging anomaly detection and bottle-filling inconsistency detection, respectively, with a reduction in false positives of about 30%. The optimized SVM model for detecting formulation defects gave 94% accuracy in viscosity and color variation detection.
These performance metrics correspond to a large leap in defect detection accuracy compared to the roughly 80% level achieved so far by rule-based systems. Moreover, model optimization can hasten defect characterization, bringing detection time below 15 seconds, from an average of 3 minutes with manual inspections, through real-time data processing. The reduction in detection time is combined with a 25% reduction in production downtime due to proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time machine-learning-driven monitoring drives predictive maintenance and corrective measures for a 20% improvement in overall production efficiency. The optimization of machine learning algorithms for defect characterization therefore gives liquid detergent companies scalability, efficiency, and improved operational performance with higher levels of product quality. In general, this method could be applied across the fast-moving consumer goods industry, leading to improved quality control processes.
Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods
Procedia PDF Downloads 21
5644 Analysis of Aerodynamic Forces Acting on a Train Passing Through a Tornado
Authors: Masahiro Suzuki, Nobuyuki Okura
Abstract:
The crosswind effect on ground transportation has been extensively investigated for decades. The effect of tornadoes, however, has hardly been studied, in spite of the fact that even heavy ground vehicles, namely trains, have been overturned by tornadoes, with casualties, in the past. Therefore, the aerodynamic effects of a tornado on a train were studied by several approaches in this study. First, an experimental facility was developed to clarify the aerodynamic forces acting on a vehicle running through a tornado. Our experimental set-up consists of two apparatus: a tornado simulator and a moving model rig. PIV measurements showed that the tornado simulator can generate a swirling-flow field similar to those of natural tornadoes. The flow field has a maximum tangential velocity of 7.4 m/s and a vortex core radius of 96 mm. The moving model rig makes a 1/40-scale model train of a single-car/three-car unit run through the swirling flow at a maximum speed of 4.3 m/s. The model car has 72 pressure ports on its surface to estimate the aerodynamic forces. The experimental results show that the aerodynamic forces vary in magnitude and direction depending on the location of the vehicle in the flow field. Second, the aerodynamic forces on the train were estimated using the Rankine vortex model, a simple tornado model widely used in the field of civil engineering. The estimated aerodynamic forces on the middle car were in fairly good agreement with the experimental results. The effects of the vortex core radius and the path of the train on the aerodynamic forces were investigated using the Rankine vortex model. The results show that the side and lift forces increase as the vortex core radius increases, while the yawing moment is maximum when the core radius is 0.3875 times the car length. Third, a computational simulation was conducted to clarify the flow field around the train.
The simulated results qualitatively agreed with the experimental ones.
Keywords: aerodynamic force, experimental method, tornado, train
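The Rankine vortex used for the force estimates has a simple closed form: solid-body rotation inside the core and free-vortex (1/r) decay outside. A sketch using the simulator's measured values from the abstract (7.4 m/s maximum tangential velocity, 96 mm core radius):

```python
def rankine_tangential_velocity(r, v_max, r_core):
    """Tangential wind speed of a Rankine vortex at radius r:
    linear rise to v_max at the core radius, then 1/r decay outside."""
    if r <= r_core:
        return v_max * r / r_core
    return v_max * r_core / r

V_MAX = 7.4     # m/s, measured maximum tangential velocity
R_CORE = 0.096  # m (96 mm vortex core radius)

# The velocity peaks exactly at the core radius and is symmetric in
# the sense that half and double the core radius give the same speed.
v_half = rankine_tangential_velocity(R_CORE / 2, V_MAX, R_CORE)
v_peak = rankine_tangential_velocity(R_CORE, V_MAX, R_CORE)
v_twice = rankine_tangential_velocity(2 * R_CORE, V_MAX, R_CORE)
```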
Procedia PDF Downloads 237
5643 Usage of Channel Coding Techniques for Peak-to-Average Power Ratio Reduction in Visible Light Communications Systems
Authors: P. L. D. N. M. de Silva, S. G. Edirisinghe, R. Weerasuriya
Abstract:
A high peak-to-average power ratio (PAPR) is a concern for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems. Discrete Fourier Transform-spread (DFT-s) OFDM is an alternative single-carrier modulation scheme that addresses this concern. Employing channel coding techniques is another mechanism to reduce the PAPR. Previous research has studied the impact of these techniques separately. However, to the best of the authors' knowledge, no study has so far identified the improvement that can be harnessed by hybridizing these two techniques in VLC systems. Therefore, this is a novel study area under this research. In addition, channel coding techniques such as Polar codes and Turbo codes have been tested in the VLC domain, but other efficient techniques such as Hamming coding and convolutional coding have not been studied. Therefore, the authors present the impact of the hybrid of DFT-s OFDM and channel coding (Hamming coding and convolutional coding) on PAPR in VLC systems using MATLAB simulations.
Keywords: convolutional coding, discrete Fourier transform spread orthogonal frequency division multiplexing, hamming coding, peak-to-average power ratio, visible light communications
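PAPR itself is just the ratio of peak to mean instantaneous power of the transmitted waveform, usually quoted in decibels. A minimal sketch on toy discrete signals (not an actual OFDM/VLC waveform):

```python
import math

def papr_db(samples):
    """Peak-to-average power ratio of a (possibly complex) discrete
    signal, in decibels."""
    powers = [abs(s) ** 2 for s in samples]
    mean_power = sum(powers) / len(powers)
    return 10.0 * math.log10(max(powers) / mean_power)

# A constant-envelope signal has 0 dB PAPR; a single spike among
# zeros concentrates all the power in one sample (ratio 4 -> ~6 dB).
flat = papr_db([1 + 0j] * 8)
spiky = papr_db([1.0, 0.0, 0.0, 0.0])
```

The many-subcarrier superposition in OFDM occasionally aligns in phase, which is exactly what drives its PAPR well above the single-carrier case that DFT-s OFDM approximates.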
Procedia PDF Downloads 154
5642 Evidence of Climate Change from Statistical Analysis of Temperature and Rainfall Data of Kaduna State, Nigeria
Authors: Iliya Bitrus Abaje
Abstract:
This study examines the evidence of climate change in Kaduna State from the analysis of temperature and rainfall data (1976-2015) from three meteorological stations along a geographic transect from the southern part to the northern part of the State. Different statistical methods were used in determining the changes in both the temperature and rainfall series. The linear trend lines revealed a mean increase in average temperature of 0.73 °C over the 40-year period of study in the State. The plotted standard deviation for the temperature anomalies generally revealed that, in the last two decades (1996-2005 and 2006-2015), years with temperatures above the mean standard deviation (hotter than normal conditions) were more numerous than those below it (colder than normal conditions). The Cramer's test and Student's t-test generally revealed an increasing temperature trend in recent decades. The increase in temperature is evidence that the earth's atmosphere is getting warmer in recent years. The linear trend line equation of the annual rainfall for the period of study showed a mean increase of 316.25 mm for the State. Findings also revealed that the plotted standard deviation for the rainfall anomalies, together with the 10-year non-overlapping and 30-year overlapping sub-period analyses at all three stations, generally showed an increasing trend from the beginning of the record to recent years. This is evidence that the study area is now experiencing wetter conditions, and hence climate change. The study recommends diversification of the economic base of the populace, with emphasis on moving away from activities that are sensitive to temperature and rainfall extremes. Also, appropriate strategies to ameliorate the scourge of climate change at all levels/sectors should always take into account the recent changes in temperature and rainfall amounts in the area.
Keywords: anomalies, linear trend, rainfall, temperature
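The linear-trend calculation behind a figure like the 0.73 °C mean increase is ordinary least squares: fit a line to the annual series and scale the slope by the span of the record. A small sketch on synthetic data (the series below is fabricated to mimic the reported warming rate; it is not the Kaduna observations):

```python
def trend_total_change(years, values):
    """Least-squares slope of values vs. years, scaled to the record span."""
    n = len(years)
    mean_y = sum(years) / n
    mean_v = sum(values) / n
    slope = (sum((y - mean_y) * (v - mean_v) for y, v in zip(years, values))
             / sum((y - mean_y) ** 2 for y in years))
    return slope * (years[-1] - years[0])

# Synthetic 1976-2015 annual series warming at ~0.0187 degC per year
years = list(range(1976, 2016))
temps = [25.0 + 0.0187 * (y - 1976) for y in years]
print(round(trend_total_change(years, temps), 2))  # 0.73
```

The same routine applied to the rainfall series would yield the reported 316.25 mm total increase from its (much steeper) slope.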
Procedia PDF Downloads 320
5641 An Attempt of Cost Analysis of Heart Failure Patients at Cardiology Department at Kasr Al Aini Hospitals: A Micro-Costing Study from Social Perspective
Authors: Eman Elsebaie, A. Sedrak, R. Ziada
Abstract:
Introduction: In recent decades, heart failure (HF) has become one of the most prevalent cardiovascular diseases (CVDs), especially in the elderly, and the main cause of hospitalization in Egyptian cardiology departments. By 2030, the prevalence of HF is expected to increase by 25%. Total direct costs will increase to $818 billion, and the total indirect cost in terms of lost productivity is close to $275 billion. The current study was conducted to estimate the economic costs of services delivered to heart failure patients at the cardiology department in Cairo University Hospitals (CUHs). Aim: To gain an understanding of the cost of heart failure disease and its main drivers, aiming to minimize associated health care costs. Subjects and Methods: An economic cost analysis study was conducted for a prospective group of all cases of HF admitted to the cardiology department in CUHs from the end of March till the end of April 2016, and for another, retrospective, randomized sample of patients with HF during the first 3 months of 2016, to estimate the average cost per patient per day. Results: The mean age of the prospective group was 48.6 ± 17.16 years versus 52.3 ± 11.5 years for the retrospective group. The median (IQR) length of stay was 15 (15) days in the prospective group versus 9 (16) days in the retrospective group. The average HF inpatient cost/day in the cardiology department during April 2016 was 362.32 (255.5) L.E. versus 391.2 (255.9) L.E. during January and February 2016. Conclusion: Up to 70% of expenditure in the management of HF is related to hospital admission. The average cost of such an admission was 5540.03 (IQR=7507.8) L.E. and 4687.4 (IQR=7818.8) L.E., with the average cost per day estimated at 362.32 (IQR=255.5) L.E. and 386.2 (IQR=255.9) L.E. in the prospective and retrospective groups, respectively.
Keywords: health care cost, heart failure, hospitalization, inpatient
Procedia PDF Downloads 242
5640 Determinants of Probability Weighting and Probability Neglect: An Experimental Study of the Role of Emotions, Risk Perception, and Personality in Flood Insurance Demand
Authors: Peter J. Robinson, W. J. Wouter Botzen
Abstract:
Individuals often over-weight low probabilities and under-weight moderate to high probabilities; however, very low probabilities are either significantly over-weighted or neglected. Little is known about factors affecting probability weighting in Prospect Theory related to emotions specific to risk (anticipatory and anticipated emotions), the threshold of concern, as well as personality traits like locus of control. This study provides these insights by examining factors that influence probability weighting in the context of flood insurance demand in an economic experiment. In particular, we focus on determinants of flood probability neglect to provide recommendations for improved risk management. In addition, results obtained using real incentives and no performance-based payments are compared in the experiment with high experimental outcomes. Based on data collected from 1,041 Dutch homeowners, we find that flood probability neglect is related to anticipated regret, worry, and the threshold of concern. Moreover, locus of control and regret affect probabilistic pessimism. Nevertheless, we do not observe strong evidence that incentives influence flood probability neglect or probability weighting. The results show that low, moderate and high flood probabilities are under-weighted, which is related to framing in the flooding context and the degree of realism respondents attach to high-probability property damages. We suggest several policies to overcome psychological factors related to under-weighting flood probabilities in order to improve flood preparations. These include policies that promote better risk communication to enhance the insurance decisions of individuals with a high threshold of concern, and education and information provision to change the behaviour of internal-locus-of-control types as well as people who see insurance as an investment. Multi-year flood insurance may also prevent the short-sighted behaviour of people who have a tendency to regret paying for insurance. Moreover, bundling low-probability/high-impact risks with more immediate risks may achieve an overall covered risk which is less likely to be judged as falling below thresholds of concern. These measures could aid the development of a flood insurance market in the Netherlands, for which we find there to be demand.
Keywords: flood insurance demand, prospect theory, risk perceptions, risk preferences
Procedia PDF Downloads 276
5639 Technology Management for Early Stage Technologies
Authors: Ming Zhou, Taeho Park
Abstract:
Early stage technologies have been particularly challenging to manage due to the high degree of uncertainty surrounding them. Most results coming directly out of a research lab tend to be at an early, if not infant, stage. A long and uncertain commercialization process awaits these lab results, and the majority of such lab technologies go nowhere and never get commercialized for various reasons. Any efforts or financial resources put into managing these technologies then turn fruitless. High stakes naturally call for better results, which makes a patenting decision harder to make. A good and well-protected patent goes a long way toward commercialization of the technology. Our preliminary research showed that there was no simple yet productive procedure for such valuation. Most studies to date have been theoretical and overly comprehensive, with practical suggestions non-existent. Hence, we attempted to develop a simple and highly implementable procedure for efficient and scalable valuation. We thoroughly reviewed existing research, interviewed practitioners in the Silicon Valley area, and surveyed university technology offices. Instead of presenting another theoretical and exhaustive study, we aimed at developing practical guidance that a government agency and/or university office could easily deploy to get things moving to the later steps of managing early stage technologies. We provide a procedure to thriftily value a technology and make the patenting decision. A patenting index was developed using survey data and expert opinions. We identified the most important factors to be used in the patenting decision using survey ratings. The ratings then assisted us in generating relative weights for the later scoring and weighted-averaging step. More importantly, we validated our procedure by testing it with our practitioner contacts. Their inputs produced a general yet highly practical cut schedule. Such a schedule of realistic practices has yet to be witnessed in the current research. Although a technology office may choose to deviate from our cuts, what we offer here at least provides a simple and meaningful starting point. This procedure was welcomed by practitioners in our expert panel and university officers in our interview group. This research contributes to the current understanding and practice of managing early stage technologies by instating a heuristically simple yet theoretically solid method for the patenting decision. Our findings generated top decision factors, decision processes, and decision thresholds of key parameters. This research offers a more practical perspective which further completes the extant knowledge. Our results could be affected by our sample size and even biased a bit by our focus on the Silicon Valley area. Future research, blessed with a bigger data size and more insights, may want to further train and validate our parameter values in order to obtain more consistent results and analyze our decision factors for different industries.
Keywords: technology management, early stage technology, patent, decision
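The scoring-and-weighted-averaging step described in the abstract can be made concrete with a toy patenting index. The factor names, weights, and cut threshold below are hypothetical placeholders, not the values derived from the authors' survey:

```python
def patent_index(ratings, weights):
    """Weighted-average score of a technology across decision factors."""
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

# Hypothetical factors: market size, novelty, maturity, legal strength,
# each rated 1-5; the weights stand in for expert-derived relative importance.
weights = [0.35, 0.30, 0.20, 0.15]
ratings = [4, 5, 2, 3]
score = patent_index(ratings, weights)
print(round(score, 2))                              # 3.75
print("file patent" if score >= 3.5 else "defer")   # cut threshold is invented
```

In the paper's procedure, the weights come from survey ratings of factor importance and the cut schedule from practitioner validation; the arithmetic, however, is exactly this.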
Procedia PDF Downloads 343
5638 Mathematics Anxiety among Male and Female Students
Authors: Wern Lin Yeo, Choo Kim Tan, Sook Ling Lew
Abstract:
Mathematics anxiety refers to the feeling of anxiety when one has difficulty solving a mathematical problem. It is the most common type of anxiety occurring among students. However, levels of anxiety among males and females differ. A few past studies were conducted to determine the relationship between anxiety and gender, but they did not produce conclusive results. Hence, the purpose of this study is to determine the relationship of anxiety level with gender among undergraduates at a private university in Malaysia. A convenience sampling method was used, in which students were selected based on the grouping assigned by the faculty. A total of 214 undergraduates registered in probability courses participated in this study. The Mathematics Anxiety Rating Scale (MARS) was the instrument used to determine students' anxiety level towards probability. Reliability and validity of the instrument were established before the major study was conducted. In the major study, students were given a briefing about the study. Participation was voluntary, and students were given a consent form to indicate whether they agreed to take part. A duration of two weeks was given for students to complete the online questionnaire. The data collected were analyzed using the Statistical Package for the Social Sciences (SPSS) to determine the level of anxiety. There were three anxiety levels: low, average, and high. Students' anxiety levels were determined by comparing their scores with the mean and standard deviation: if a score was more than one standard deviation below the mean, the anxiety level was low; if it was within one standard deviation of the mean, the anxiety level was average; and if it was more than one standard deviation above the mean, the anxiety level was high. Results showed that both genders had an average anxiety level. Males showed higher frequencies in all three anxiety levels (low, average, and high) than females, and the mean value obtained for males (M = 3.62) was higher than that for females (M = 3.42). For the difference in anxiety level between genders to be significant, the p-value should be less than .05. The p-value obtained in this study was .117, which is greater than .05. Thus, there was no significant difference in anxiety level between the genders; in other words, anxiety level was not related to gender.
Keywords: anxiety level, gender, mathematics anxiety, probability and statistics
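The mean-and-standard-deviation banding the abstract describes is easy to make concrete. A hedged sketch with an invented cohort of MARS scores (the banding rule follows the abstract; the numbers are fabricated for illustration):

```python
import statistics

def classify_anxiety(score, cohort_scores):
    """Band a MARS score as low / average / high relative to the cohort,
    using the mean +/- one standard deviation rule from the abstract."""
    mean = statistics.mean(cohort_scores)
    sd = statistics.stdev(cohort_scores)
    if score < mean - sd:
        return "low"
    if score > mean + sd:
        return "high"
    return "average"

# Invented cohort of MARS scores on a 1-5 scale
cohort = [2.1, 2.8, 3.0, 3.3, 3.4, 3.5, 3.6, 3.8, 4.0, 4.9]
for s in (2.1, 3.5, 4.9):
    print(s, classify_anxiety(s, cohort))  # low, average, high respectively
```

Because the bands are defined relative to the cohort's own mean and spread, most scores land in "average" by construction, matching the study's finding for both genders.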
Procedia PDF Downloads 291
5637 Short-Term Effects of Extreme Temperatures on Cause Specific Cardiovascular Admissions in Beijing, China
Authors: Deginet Aklilu, Tianqi Wang, Endwoke Amsalu, Wei Feng, Zhiwei Li, Xia Li, Lixin Tao, Yanxia Luo, Moning Guo, Xiangtong Liu, Xiuhua Guo
Abstract:
Extreme temperature-related cardiovascular diseases (CVDs) have become a growing public health concern. However, the impact of temperature on cause-specific CVDs has not been well studied in the study area. The objective of this study was to assess the impact of temperature on cause-specific cardiovascular hospital admissions in Beijing, China. We obtained data from 172 large general hospitals from the Beijing Public Health Information Center cardiovascular case database and the China Meteorological Administration, covering 16 districts in Beijing from 2013 to 2017. We used a time-stratified case-crossover design with a distributed lag non-linear model (DLNM), with lags of up to 27 days, to derive the impact of temperature on CVD admissions. The temperature data were stratified as cold (extreme and moderate) and hot (moderate and extreme). Within five years (January 2013-December 2017), a total of 460,938 (male 54.9% and female 45.1%) CVD admission cases were reported. The exposure-response relationship for hospitalization was described by a "J" shape for total and cause-specific admissions. An increase in the six-day moving average temperature from moderately hot (30.2 °C) to extremely hot (36.9 °C) resulted in a significant increase in CVD admissions of 16.1% (95% CI = 12.8%-28.9%). However, the effect of cold temperature exposure on CVD admissions over a lag time of 0-27 days was found to be non-significant, with a relative risk of 0.45 (95% CI = 0.378-0.55) for extreme cold (-8.5 °C) and 0.53 (95% CI = 0.47-0.60) for moderate cold (-5.6 °C). The results of this study indicate that exposure to extremely high temperatures is strongly associated with an increase in cause-specific CVD admissions. These findings may guide efforts to create and raise awareness among the general population, government, and private sectors regarding the effects of current weather conditions on CVD.
Keywords: admission, Beijing, cardiovascular diseases, distributed lag non linear model, temperature
Procedia PDF Downloads 65
5636 Semirings of Graphs: An Approach Towards the Algebra of Graphs
Authors: Gete Umbrey, Saifur Rahman
Abstract:
Graphs are found to be among the most capable structures in computing, and their abstract structures have been applied in specific computations and algorithms, such as phase encoding controllers, processor microcontrollers, and the synthesis of CMOS switching networks. Motivated by these works, we develop an independent approach to study semiring structures and various properties by defining binary operations which, in fact, seem analogous to an existing definition in some sense, but with a different approach. This work emphasizes specifically the construction of semigroup and semiring structures on the set of undirected graphs, and their properties are investigated therein. It is expected that the investigation done here may have some interesting applications in theoretical computer science, networking and decision making, and also in the joining of two network systems.
Keywords: graphs, join and union of graphs, semiring, weighted graphs
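The abstract does not spell out its binary operations, but the keyword "join and union of graphs" suggests the classical pair. A minimal sketch of those two operations on undirected graphs, modelled as (vertex set, edge set) pairs — this encoding and the specific definitions are our own illustration, not necessarily the authors':

```python
def union(g1, g2):
    """Union of two undirected graphs: (V1 ∪ V2, E1 ∪ E2)."""
    (v1, e1), (v2, e2) = g1, g2
    return (v1 | v2, e1 | e2)

def join(g1, g2):
    """Join of two graphs: their union plus every edge between the two
    vertex sets."""
    (v1, e1), (v2, e2) = g1, g2
    cross = {frozenset({a, b}) for a in v1 for b in v2 if a != b}
    return (v1 | v2, e1 | e2 | cross)

# Two single-edge graphs on disjoint vertex sets
g1 = (frozenset({1, 2}), frozenset({frozenset({1, 2})}))
g2 = (frozenset({3, 4}), frozenset({frozenset({3, 4})}))

print(len(union(g1, g2)[1]))           # 2 edges: the originals
print(len(join(g1, g2)[1]))            # 6 edges: the originals plus 4 cross edges
print(union(g1, g2) == union(g2, g1))  # True: union is commutative
```

Commutativity and associativity of operations like these are the kind of semigroup properties a semiring construction on graphs would need to verify.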
Procedia PDF Downloads 149
5635 Cognitive Stereotype Behaviors and Their Imprinting on the Individuals with Autism
Authors: Li-Ju Chen, Hsiang-Lin Chan, Hsin-Yi Kathy Cheng, Hui-Ju Chen
Abstract:
Stereotype behavior is one of the maladaptive syndromes of individuals with autism. Most previous research focused on stereotype behavior of the stimulating type, with less on stereotype behavior related to cognition (which this research terms cognitive stereotype behavior, CSB). This research explored CSB and the rationality of explaining CSB through the imprinting phenomenon. After excluding the samples with no CSB described, data from 271 individuals with autism were recruited and analyzed with quantitative and qualitative analyses. This research discovers that: (1) Most of the individuals with autism first exhibited CSB at 3 years old, and more than half of them did so before 4 years old; the average age at which CSB first appeared was 6.10 years, the average duration of insisting on or ossifying a CSB was 31.71 minutes each time, and the average longest duration was 358.35 minutes (5.97 hours). (2) CSB demonstrates various aspects; this research classified them into 4 fields with 26 categories, and they were categorized into sudden CSB or habitual CSB by imprinting performance. (3) Most of the individuals with autism commented that their CSBs were not necessary, but they could not control them well. One-third of them exhibited CSB suddenly, and the first occurrence was accompanied by a strong emotional or behavioral response. (4) Whether the respondent was the person with autism himself/herself was the critical element in the awareness of the severity degree, the disturbance degree, and the emotional/behavioral intensity when CSB first happened. This study concludes that imprinting can reasonably explain the phenomenon through which CSB forms. The implications can lead individuals with autism and their families to develop coping strategies that promote better learning accomplishment and quality of life in the future.
Keywords: autism, cognitive stereotype behavior, constructivism, imprinting, stereotype
Procedia PDF Downloads 131
5634 The Human Resources Management for the Temple in Northeastern Thailand
Authors: Routsukol Sunalai
Abstract:
The purpose of this research is to study and compare the administration of Buddhist monks in northeastern Thailand. The sample consisted of 190 monks in the Northeast, selected by simple random sampling. The instrument used in this study was a questionnaire of 40 items. The statistics used for data analysis were percentage, average, and standard deviation. The research found that human resources management for the Buddhist monks as a whole is moderate, with the highest average for policy, followed by management information. For Buddhist monks aged less than 25 years old, the overall difference was not significant. Monks with less than 10 years of experience and monks who had held their position for 10 years did not differ at a significant level.
Keywords: employee job-related outcomes, ethical institutionalization, quality of work life, stock exchange of Thailand
Procedia PDF Downloads 210
5633 Preparedness of the Mae Hong Son Province for the Aging Society
Authors: Siwaporn Mahathamnuchock, Krit Phanpanya
Abstract:
This survey study aims 1) to investigate the preparation of Mae Hong Son people for entering the aging society, and 2) to study awareness of the public health preparedness for the aging society of the Mae Hong Son Province Administrative Organization. The samples used in this study were people aged 55-60 years in Mae Hong Son Province, located at Khun Yuam Sub-district, Khun Yuam District; Pang Ma Pha Sub-district, Pang Ma Pha District; Thung Yao Sub-district, Pai District; Mae Ka Tuan Sub-district, Sob Moei District; Mae Sariang Sub-district, Mae Sariang District; Mae Tho Sub-district, Mae La Noi District; and Huai Pha Sub-district, Muang District. The data were collected from 1,088 people by the stratified sampling method. The instrument used in this study was a 36-item questionnaire containing three parts: 1) the sample's general information; 2) an interview on Mae Hong Son people's preparation before entering the aging society; and 3) an interview about the preparedness of health services for the aging society of the Mae Hong Son Province Administrative Organization. The data were then analyzed using percentages and standard deviations. The research found that Mae Hong Son people are preparing for the aging society as follows: the psychological, residence, physical health, and careers and leisure time aspects were prepared on a large scale, with averages of 3.81 (SD=0.88), 3.66 (SD=0.99), 3.53 (SD=1.04) and 3.51 (SD=0.89), respectively. However, finances and savings were prepared on a moderate scale, with an average of 2.84 (SD=0.89), and awareness of the public health preparedness for the aging society of the Mae Hong Son Province Administrative Organization was moderate, with an average of 2.99 (SD=1.07).
Keywords: aging society, preparedness, perception, Mae Hong Son province
Procedia PDF Downloads 415
5632 Marital Status and Happiness among Employed People in Thailand
Authors: Sirinan Kittisuksathit, Wannee Hutaphat
Abstract:
This paper investigates employed people in relation to family happiness, work-life balance, and individual happiness. The employed people in this study are categorized by marital status, namely: single; married and living together; married and living apart; cohabiting; and divorced. The sample of 13,906 employed people was collected in 2015 using a self-administered questionnaire. The analysis utilizes ANOVA and its associated procedures to analyze the differences between group means. The findings show that two types of employed people are more likely to obtain the highest average happiness scores: those married and living together, and those cohabiting. The two groups are followed by single employed people, and then by divorced employed people. The lowest average happiness scores were obtained by employed people who are married and living apart.
Keywords: employed people, happiness, marital status, Thailand
Procedia PDF Downloads 255
5631 Migration in Times of Uncertainty
Authors: Harman Jaggi, David Steinsaltz, Shripad Tuljapurkar
Abstract:
Understanding the effect of fluctuations on populations is crucial in the context of increasing habitat fragmentation, climate change, and biological invasions, among others. Migration in response to environmental disturbances enables populations to escape unfavorable conditions, benefit from new environments, and thereby ride out fluctuations in variable environments. Would populations disperse if there were no uncertainty? Karlin showed in 1982 that when sub-populations experience distinct but fixed growth rates at different sites, greater mixing of populations will lower the overall growth rate relative to the most favorable site. Here we ask if and when environmental variability favors migration over no migration. Specifically, in random environments, would a small amount of migration increase the overall long-run growth rate relative to the zero-migration case? We use analysis and simulations to show how the long-run growth rate changes with migration rate. Our results show that when fitness (dis)advantages fluctuate over time across sites, migration may allow populations to benefit from variability. When there is one best site with the highest growth rate, the effect of migration on the long-run growth rate depends on the difference in expected growth between sites, scaled by the variance of the difference. When variance is large, there is a substantial probability of an inferior site experiencing a higher growth rate than its average. Thus, a high variance can compensate for a difference in average growth rates between sites. Positive correlations in growth rates across sites favor less migration. With multiple sites and large fluctuations, the length of the shortest cycle (excursion) from the best site (on average) matters, and we explore the interplay between excursion length, average differences between sites, and the size of fluctuations. Our findings have implications for conservation biology: even when there are superior sites in a sea of poor habitats, variability and habitat quality across space may be key to determining the importance of migration.
Keywords: migration, variable-environments, random, dispersal, fluctuations, habitat-quality
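The core claim — that mixing can raise the long-run growth rate when site fitnesses fluctuate — can be illustrated with a toy two-site Monte Carlo simulation. The fitness values, the anti-correlated environment, and the migration fractions below are all invented for illustration, not taken from the paper:

```python
import math
import random

def long_run_growth(m, steps=20000, seed=1):
    """Monte Carlo estimate of the long-run log growth rate of a two-site
    population where a fraction m of each site migrates to the other site
    every generation. Site fitnesses fluctuate: each generation one site
    is good (1.6) and the other poor (0.5), chosen by a fair coin."""
    random.seed(seed)           # same environment sequence for every m
    n = [0.5, 0.5]              # population shares at the two sites
    total_log = 0.0
    for _ in range(steps):
        good_first = random.random() < 0.5
        f = (1.6, 0.5) if good_first else (0.5, 1.6)
        grown = [n[0] * f[0], n[1] * f[1]]
        mixed = [(1 - m) * grown[0] + m * grown[1],
                 (1 - m) * grown[1] + m * grown[0]]
        total = mixed[0] + mixed[1]       # migration conserves the total
        total_log += math.log(total)
        n = [x / total for x in mixed]    # renormalize the shares
    return total_log / steps

# Without migration the population ends up tracking a single fluctuating
# site, whose expected log fitness here is negative; mixing lets it keep
# re-seeding whichever site is currently good.
print(round(long_run_growth(0.0), 3), round(long_run_growth(0.1), 3))
```

With fixed (non-random) fitnesses this same simulation would reproduce Karlin's result instead: any mixing drags the growth rate below that of the best site.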
Procedia PDF Downloads 139
5630 Moving Oman’s Economy to Knowledge-Based Economy: A Study on the Role of SMEs from the Perspective of Experts
Authors: Hanin Suleiman Alqam
Abstract:
The knowledge-based economy, as its name implies, relies on knowledge, information and high levels of skills made available to all economic agents. Delving a bit more deeply, the concept of a knowledge-based economy showcases four main pillars: Education and Training; Information and Communication Technology; Economic Incentives and Institutional Regimes; and Research and Development (R&D) and Innovation. A good number of studies show its positive contribution to economic diversification, underpinning sustainable development and growth. The present paper aimed at assessing the role of SMEs in moving Oman’s economy from a traditional economy to a knowledge-based economy. To lay down groundwork that should lead to future studies, the methodology selected is based on exploratory research. Hence, interviews were conducted as the data collection tool. Based on a purposive sampling technique, seven handpicked experts took part in the study, as they work in different key organizations considered to be, directly or indirectly, the backbone of the Omani national economy. A thematic approach was employed for the purpose of data analysis. Results of the study showed that SMEs are not really contributing to the knowledge-based economy, due to a lack of awareness within SMEs in Oman about its importance to the country and to the enterprise. However, it was shown that SME owners are interested in innovation and are trying to support innovative individuals by attracting them to their enterprises. On the other hand, the results revealed that SMEs' performance in e-solutions is still not up to the expected level, as only 32% of SMEs use e-solutions in their internal processes and procedures, such as accounting systems. It is recommended that SME owners use new and modern technologies in marketing and customer relations, encourage creativity and research and development, and give the youth opportunities and facilitate procedures in terms of innovation, so that their role in contributing to the knowledge-based economy can be improved.
Keywords: knowledge-based economy, SMEs, ICT pillars, research and innovation
Procedia PDF Downloads 157