Search results for: error level
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14135

13385 The Presence of Ochratoxin a in Breast-Milk, Urine and Serum of Lactating Women

Authors: Magdalena Twaruzek, Karolina Ropejko

Abstract:

Mycotoxins are secondary metabolites of molds. Ochratoxin A (OTA) is the most common mycotoxin in the Polish climate. It is produced by fungi of the genera Aspergillus and Penicillium, typically as a result of improper food storage, and is present in many products consumed by both humans and animals: cereals, wheat gluten, coffee, dried fruit, wine, grape juice, spices, beer, and products based on them. OTA is nephrotoxic, hepatotoxic, potentially carcinogenic, and teratogenic, and it mainly enters an organism by oral intake. The aim of the study was to detect the presence of OTA in the milk, urine, and serum of lactating women. A survey was also conducted regarding the women's daily diet. The research group consisted of 32 lactating women (11 were donors from the Milk Bank in Toruń; the other 21 were recruited for this study). The analysis showed the occurrence of OTA in only 3 milk samples (9.38%), with a minimum level of 0.01 ng/ml, a maximum of 0.018 ng/ml, and a mean (over all samples) of 0.0013 ng/ml. Twenty-six urine samples (81.25%) were OTA positive, with a minimum level of 0.013 ng/ml, a maximum of 0.117 ng/ml, and a mean of 0.0192 ng/ml. All 32 serum samples (100%) were contaminated by OTA, with a minimum level of 0.099 ng/ml, a maximum of 2.38 ng/ml, and a mean of 0.4649 ng/ml. In the case of 3 women, OTA was present in all tested body fluids. Based on the results, the following conclusions can be drawn: the breast-milk of the women in the study group is only slightly contaminated with ochratoxin A; ten urine samples contained ochratoxin A above its average content in the tested samples; and the serum of 8 women contained ochratoxin A above the average content of this mycotoxin in the tested samples. The average ochratoxin A level in serum in the presented studies was 0.4649 ng/ml, which is much lower than the average serum ochratoxin A level established in several countries of the world, i.e., 0.7 ng/ml. 
Acknowledgment: This study was supported by the Polish Minister of Science and Higher Education under the program 'Regional Initiative of Excellence' in 2019 - 2022 (Grant No. 008/RID/2018/19).

Keywords: breast-milk, urine, serum, contamination, ochratoxin A

Procedia PDF Downloads 128
13384 Approximation of Geodesics on Meshes with Implementation in Rhinoceros Software

Authors: Marian Sagat, Mariana Remesikova

Abstract:

In civil engineering, there is the problem of how to industrially produce tensile membrane structures that are non-developable surfaces. Non-developable surfaces can only be developed with a certain error, and we want to minimize this error. To that end, the non-developable surfaces are cut into plates along geodesic curves. We propose a numerical algorithm for finding approximations of open geodesics on meshes and surfaces based on geodesic curvature flow. For practical reasons, it is important to automate the choice of the time step. We propose a method for automatic setting of the time step based on the diagonal dominance criterion for the matrix of the linear system obtained by discretization of our partial differential equation model. Practical experiments show the reliability of this method. Because the model is approximated by a numerical method based on classical derivatives, obstacles that occur for meshes with sharp corners must be dealt with. We solve this problem for a large family of meshes with sharp corners via special rotations, which can be seen as a partial unfolding of the mesh. In practical applications, it is required that the approximation of the geodesic has its vertices only on the edges of the mesh. This problem is solved by a specially designed point-tracking algorithm. We also partially solve the problem of finding geodesics on meshes with holes. We implemented the whole algorithm in Rhinoceros (commercial 3D computer graphics and computer-aided design software), as a C# assembly library for Grasshopper, a plugin for Rhinoceros.
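The automatic time-step selection can be sketched in a few lines. Assume, for illustration only, that discretizing the flow yields a linear system with matrix I + τL for some mesh operator L (a hypothetical stand-in for the paper's actual discretization); τ is then shrunk until the matrix is strictly diagonally dominant.

```python
# Sketch: time-step selection via the diagonal dominance criterion.
# The system matrix form I + tau*L and the operator L are illustrative
# assumptions, not the paper's exact discretization.

def is_diagonally_dominant(a):
    """Strict row diagonal dominance: |a_ii| > sum_{j != i} |a_ij|."""
    for i, row in enumerate(a):
        off = sum(abs(v) for j, v in enumerate(row) if j != i)
        if abs(row[i]) <= off:
            return False
    return True

def choose_time_step(laplacian, tau0=1.0, shrink=0.5, min_tau=1e-8):
    """Halve tau until I + tau*L is strictly diagonally dominant."""
    n = len(laplacian)
    tau = tau0
    while tau > min_tau:
        a = [[(1.0 if i == j else 0.0) + tau * laplacian[i][j]
              for j in range(n)] for i in range(n)]
        if is_diagonally_dominant(a):
            return tau
        tau *= shrink
    raise ValueError("no admissible time step found")
```

For an operator whose diagonal reinforces the identity, the initial step is accepted immediately; otherwise the step is repeatedly halved, which mirrors the automatic behaviour the abstract describes.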

Keywords: geodesic, geodesic curvature flow, mesh, Rhinoceros software

Procedia PDF Downloads 134
13383 Red Clay Properties and Application for Ceramic Production

Authors: Ruedee Niyomrath

Abstract:

This research aimed at surveying the local red clay raw material sources in Samut Songkhram province, Thailand, testing the physical and chemical properties of the local red clay, and finding an approach to develop these properties for ceramic production. The findings of this research can be applied throughout the country's ceramic production industry: at the upstream level, by the communities at the raw material sources; at the midstream level, by the ceramic producers; and at the downstream level, by the distributors and consumers, as well as by community producers who can apply them to the identity and needs of their community businesses.

Keywords: chemical properties of red clay, physical properties of red clay, ceramic production, red clay product

Procedia PDF Downloads 437
13382 Simulation of Optimal Runoff Hydrograph Using Ensemble of Radar Rainfall and Blending of Runoffs Model

Authors: Myungjin Lee, Daegun Han, Jongsung Kim, Soojun Kim, Hung Soo Kim

Abstract:

Recently, localized heavy rainfall and typhoons have occurred frequently due to climate change, and the damage they cause is growing. Therefore, more accurate prediction of rainfall and runoff is needed. However, gauge rainfall has limited accuracy in space. Radar rainfall is better than gauge rainfall for explaining the spatial variability of rainfall, but it is mostly underestimated and involves uncertainty. Therefore, an ensemble of radar rainfall was simulated using an error structure relative to gauge rainfall to account for this uncertainty. The simulated ensemble was used as the input data of rainfall-runoff models to obtain an ensemble of runoff hydrographs. Previous studies have discussed the accuracy of rainfall-runoff models: even if the same input data, such as rainfall, are used for runoff analysis in the same basin, different models can give different results because of the uncertainty involved in the models. Therefore, we used two models, the SSARR model, which is a lumped model, and the Vflo model, which is a distributed model, and tried to simulate the optimum runoff considering the uncertainty of each rainfall-runoff model. The study basin is located in the Han River basin, and we obtained one integrated, optimum runoff hydrograph using blending methods such as Multi-Model Super Ensemble (MMSE), Simple Model Average (SMA), and Mean Square Error (MSE) weighting. From this study, we could confirm the accuracy of rainfall and rainfall-runoff models using ensemble scenarios and various rainfall-runoff models, and this result can be used to study flood control measures under climate change. Acknowledgements: This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 18AWMP-B083066-05).
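Two of the blending methods named above can be sketched on toy hydrographs. The values below are invented, and inverse-MSE weighting is used here as a plausible reading of the "MSE" blending method, not the paper's exact formulation.

```python
# Sketch of hydrograph blending: Simple Model Average (SMA) and an
# inverse-MSE weighted average. All data are illustrative.

def mse(pred, obs):
    """Mean square error between a model hydrograph and observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def blend_sma(hydrographs):
    """Simple Model Average: pointwise mean of all model hydrographs."""
    return [sum(vals) / len(vals) for vals in zip(*hydrographs)]

def blend_inverse_mse(hydrographs, obs):
    """Weight each model by 1/MSE against observations, then average."""
    weights = [1.0 / mse(h, obs) for h in hydrographs]
    total = sum(weights)
    weights = [w / total for w in weights]
    return [sum(w * v for w, v in zip(weights, vals))
            for vals in zip(*hydrographs)]
```

A model that tracks the observations closely receives a large weight, so the blended hydrograph leans toward the better-performing model, which is the intent of an "optimum" blended hydrograph.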

Keywords: radar rainfall ensemble, rainfall-runoff models, blending method, optimum runoff hydrograph

Procedia PDF Downloads 263
13381 Characterization of Titanium-Niobium Alloys Produced by Powder Metallurgy as Implant Materials

Authors: Eyyüp Murat Karakurt, Yan Huang, Mehmet Kaya, Hüseyin Demirtaş, Alper İncesu

Abstract:

In this study, Ti-(x)Nb (at. %) master alloys (x: 10, 20, and 30) were fabricated following a standard powder metallurgy route and were sintered at 1200 ˚C for 6 h under 300 MPa. The effects of the Nb concentration in the Ti matrix and of the porosity level were examined experimentally. For metallographic examination, the alloys were analysed by optical microscopy and energy dispersive spectrometry. In addition, X-ray diffraction was performed on the alloys to determine which phases formed in the microstructure. Compression tests were applied to the alloys to understand their mechanical behavior. As the Nb concentration in the Ti matrix increased, the amount of the β phase increased. Also, the porosity level played a crucial role in the mechanical performance of the alloys.

Keywords: Nb concentration, porosity level, powder metallurgy, β phase

Procedia PDF Downloads 249
13380 Effect of Low-Level Laser for Athletic Achilles Tendinopathy: A Systematic Review

Authors: Sameh Eldaly, Rola Essam

Abstract:

Objective: The purpose of this study was to determine the benefits of low-level laser therapy (LLLT) for athletic Achilles tendinopathy. Data sources: The search identified two randomized controlled trials and one pilot study. Results: Three trials (103 participants) were analyzed. Laser therapy combined with eccentric exercises, when compared to eccentric exercises and placebo, had low to very low certainty of evidence for pain and function assessment. Conclusion: The three trials evidenced a low to very low effect of LLLT, and the results are insufficient to support the routine use of LLLT for Achilles tendinopathy.

Keywords: achilles tendinopathy, evidence-based, low-level laser therapy, review

Procedia PDF Downloads 71
13379 An Improved Robust Algorithm Based on Cubature Kalman Filter for Single-Frequency Global Navigation Satellite System/Inertial Navigation Tightly Coupled System

Authors: Hao Wang, Shuguo Pan

Abstract:

The Global Navigation Satellite System (GNSS) signal received by a dynamic vehicle in a harsh environment is frequently interfered with and blocked, which generates gross errors affecting the positioning accuracy of GNSS/Inertial Navigation System (INS) integrated navigation. Therefore, this paper puts forward an improved robust Cubature Kalman Filter (CKF) algorithm for ambiguity resolution in a single-frequency GNSS/INS tightly coupled system. Firstly, the dynamic model and measurement model of a single-frequency GNSS/INS tightly coupled system were established, and a method for INS-aided GNSS integer ambiguity resolution was studied. Then, we analyzed the influence of pseudo-range observations with gross errors on GNSS/INS integrated positioning accuracy. To reduce the influence of outliers, this paper improves the CKF algorithm and realizes an intelligent selection of robust strategies by judging the ill-conditioned matrix. Finally, a field navigation test was performed to demonstrate the effectiveness of the proposed algorithm based on the double-differenced solution mode. The experiment proved that the improved robust algorithm can greatly weaken the influence of separate, continuous, and hybrid observation anomalies, enhancing the reliability and accuracy of GNSS/INS tightly coupled navigation solutions.
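At the heart of the CKF is the spherical-radial cubature rule, which propagates 2n equally weighted points built from a square root of the state covariance. The sketch below shows only this point-generation step, with a small hand-rolled Cholesky factorization for self-containment; it is not the paper's full filter.

```python
import math

def cholesky(p):
    """Lower-triangular Cholesky factor of a small SPD matrix."""
    n = len(p)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                l[i][j] = math.sqrt(p[i][i] - s)
            else:
                l[i][j] = (p[i][j] - s) / l[j][j]
    return l

def cubature_points(x, p):
    """2n equally weighted cubature points: x +/- sqrt(n) * (column of S),
    where S is a square root of the covariance P."""
    n = len(x)
    s = cholesky(p)
    pts = []
    for sign in (+1.0, -1.0):
        for j in range(n):
            pts.append([x[i] + sign * math.sqrt(n) * s[i][j]
                        for i in range(n)])
    return pts
```

In a full CKF, these points are pushed through the dynamic or measurement model and re-averaged to form the predicted mean and covariance; a robust variant would additionally down-weight measurements flagged as outliers.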

Keywords: GNSS/INS integrated navigation, ambiguity resolution, cubature Kalman filter, robust algorithm

Procedia PDF Downloads 82
13378 Efficient Alias-Free Level Crossing Sampling

Authors: Negar Riazifar, Nigel G. Stocks

Abstract:

This paper proposes strategies in level crossing (LC) sampling and reconstruction that provide alias-free high-fidelity signal reconstruction for speech signals without exponentially increasing sample number with increasing bit-depth. We introduce methods in LC sampling that reduce the sampling rate close to the Nyquist frequency even for large bit-depth. The results indicate that larger variation in the sampling intervals leads to an alias-free sampling scheme; this is achieved by either reducing the bit-depth or adding jitter to the system for high bit-depths. In conjunction with windowing, the signal is reconstructed from the LC samples using an efficient Toeplitz reconstruction algorithm.
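The basic LC sampling step can be sketched as follows: a sample is recorded whenever the signal crosses one of a set of uniformly spaced quantization levels, and optional jitter perturbs the recorded instants. The uniform level grid and the jitter model are illustrative assumptions, not the paper's exact scheme.

```python
import math
import random

def lc_sample(signal, dt, delta, jitter=0.0, seed=0):
    """Record (time, level) pairs whenever the signal crosses a
    quantization level k*delta. Optional jitter perturbs the recorded
    times, increasing the variation of the sampling intervals."""
    rng = random.Random(seed)
    samples = []
    prev = signal[0]
    for k in range(1, len(signal)):
        cur = signal[k]
        lo, hi = sorted((prev, cur))
        # every quantization level lying between consecutive amplitudes
        first = math.ceil(lo / delta)
        last = math.floor(hi / delta)
        for level in range(first, last + 1):
            t = k * dt + (rng.uniform(-jitter, jitter) if jitter else 0.0)
            samples.append((t, level * delta))
        prev = cur
    return samples
```

Note the trade-off the abstract points to: a smaller delta (larger bit-depth) produces many more samples, while jitter on the sample instants is what helps keep the scheme alias-free at high bit-depths.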

Keywords: alias-free, level crossing sampling, spectrum, trigonometric polynomial

Procedia PDF Downloads 201
13377 The Impact of Financial Literacy, Perception of Debt, and Perception of Risk Toward Student Willingness to Use Online Student Loan

Authors: Irni Rahmayani Johan, Ira Kamelia

Abstract:

One of the impacts of the rapid advancement of technology is the rise of digital finance, including peer-to-peer (P2P) lending. P2P lending has been widely marketed, including online student loans that use the P2P platform. This study aims to analyze the effect of financial literacy, perception of debt, and perception of risk on students' willingness to use online student loans (P2P lending). Using a cross-sectional study design, this study collected data through an online survey of a total sample of 280 undergraduate students of IPB University, Indonesia. This study found that financial literacy, perception of debt, perception of risk, and interest in using online student loans were all at a low level, with the level of knowledge the lowest. First-year students showed a higher willingness to use online student loans, and second-year students recorded a positive perception of debt. The study showed that level of study, attendance in a personal finance course, and students' GPA are positively related to financial knowledge, while debt perception is negatively related to financial attitudes. Similarly, a negative relationship was found between risk perception and willingness to use online student loans. The determinant factors of willingness to use online student loans are level of study, debt perception, financial risk perception, and time risk perception. Students at a higher level of study are likely to have a lower interest in using online student loans, while students who perceive debt as a financial stimulator, as well as those with higher financial risk and time risk perceptions, tend to show more interest in using the loan.

Keywords: financial literacy, willingness to use, online student loan, perception of risk, perception of debt

Procedia PDF Downloads 132
13376 Readiness of Thai Restaurant in Bangkok in Applying for Certification of Halal Food Services Standard for Tourism

Authors: Pongsiri Kingkan

Abstract:

This research aims to study the readiness of Thai restaurants in Bangkok to apply for certification under the Halal Food Services Standard for Tourism. The research was conducted using a mixed methodology, with both quantitative and qualitative data; 420 questionnaires were used to collect data from the sample, restaurant employees. The results were divided into two parts: the demographic data and the readiness of the restaurants to apply for certification. The majority of the sample were single females aged between 18 and 30 years old, earning about 282.40 US dollars a month. The readiness study demonstrated that readiness in foods and restaurant operating processes was scored at the lowest level. Readiness in social responsibility, food contact persons, and food materials was rated at a low level. The readiness of utensils and kitchen tools, waste management, environmental management, and the availability of space for the establishment of halal food was scored at an average level. Location readiness, food service safety, and the relationship with the local community were rated at a high level, but interestingly, none of the criteria were rated at the highest level.

Keywords: availability, Bangkok, halal, Thai restaurant, readiness

Procedia PDF Downloads 305
13375 Reasons for the Selection of Information-Processing Framework and the Philosophy of Mind as a General Account for an Error Analysis and Explanation on Mathematics

Authors: Michael Lousis

Abstract:

This research study is concerned with learners' errors in Arithmetic and Algebra. The data resulted from a broader international comparative research program called the Kassel Project. However, its conceptualisation differed from and contrasted with that of the main program, which was mostly based on socio-demographic data. The way in which the research study was conducted was not dependent on the researcher's discretion but was dictated by the nature of the problem under investigation. This is because the phenomenon of learners' mathematical errors is due neither to the intentions of learners, nor to institutional processes, rules and norms, nor to the educators' intentions and goals, but rather to the way certain information is presented to learners and how their cognitive apparatus processes this information. Several approaches to the study of learners' errors have been developed since the beginning of the 20th century, encompassing different belief systems. These approaches were based on behaviourist theory, on the Piagetian-constructivist research framework, on the perspective that followed the philosophy of science, and on the information-processing paradigm. The researcher of the present study had to disclose the learners' course of thinking that led them to specific observable actions, resulting in particular errors in specific problems, rather than analysing scripts with the students' thoughts presented in written form. This, in turn, entailed that the choice of methods had to be appropriate and conducive to seeing and realising the learners' errors from the perspective of the participants in the investigation. This particular fact determined important decisions concerning the selection of an appropriate framework for analysing the mathematical errors and giving explanations. Thus, the belief systems of behaviourism, the Piagetian-constructivist perspective, and the philosophy of science perspective were rejected, and the information-processing paradigm, in conjunction with the philosophy of mind, was adopted as the general account for the elaboration of the data. This paper explains why these decisions were appropriate and beneficial for conducting the present study and for establishing the ensuing thesis. Additionally, it explains why the adoption of the information-processing paradigm, in conjunction with the philosophy of mind, gives a sound and legitimate basis for the development of future studies concerning mathematical error analysis.

Keywords: advantages-disadvantages of theoretical prospects, behavioral prospect, critical evaluation of theoretical prospects, error analysis, information-processing paradigm, opting for the appropriate approach, philosophy of science prospect, Piagetian-constructivist research frameworks, review of research in mathematical errors

Procedia PDF Downloads 181
13374 An Approach for Detection Efficiency Determination of High Purity Germanium Detector Using Cesium-137

Authors: Abdulsalam M. Alhawsawi

Abstract:

Estimation of a radiation detector's efficiency plays a significant role in calculating the activity of radioactive samples. Detector efficiency is measured using sources that emit a variety of energies, from low- to high-energy photons, along the energy spectrum. Some photon energies are hard to find in lab settings, either because check sources are hard to obtain or because the sources have short half-lives. This work aims to develop a method to determine the efficiency of a High Purity Germanium (HPGe) detector based on the 662 keV gamma-ray photon emitted from Cs-137. Cesium-137 is readily available in most labs with radiation detection and health physics applications and has a long half-life of ~30 years. Several photon efficiencies were calculated using the MCNP5 simulation code. The simulated efficiency of the 662 keV photon was used as a base to calculate other photon efficiencies for both a point source and a Marinelli beaker geometry. For a Marinelli beaker filled with water, the efficiency of the 59 keV low-energy photons from Am-241 was estimated with a 9% error compared to the MCNP5 simulated efficiency. The 1.17 and 1.33 MeV high-energy photons emitted by Co-60 had errors of 4% and 5%, respectively. The estimated errors are considered acceptable for calculating the activity of unknown samples, as they fall within the 95% confidence level.
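The scaling idea can be illustrated as follows: anchor the efficiency curve at the measured 662 keV point and rescale the simulated efficiencies to other energies. All efficiency values below are invented for the example; they are not the paper's MCNP5 results.

```python
# Sketch of 662 keV-anchored efficiency scaling with hypothetical data.

def scaled_efficiency(eff_meas_662, eff_sim, energy):
    """Estimate efficiency at `energy` (keV) by scaling the simulated
    curve so that it matches the measured value at 662 keV."""
    return eff_meas_662 * eff_sim[energy] / eff_sim[662]

def percent_error(estimate, reference):
    """Relative error of the estimate against a reference, in percent."""
    return 100.0 * abs(estimate - reference) / reference

# Hypothetical simulated full-energy-peak efficiencies by energy (keV):
eff_sim = {59: 0.040, 662: 0.020, 1173: 0.012, 1332: 0.011}
est = scaled_efficiency(eff_meas_662=0.021, eff_sim=eff_sim, energy=59)
```

The percent error between the scaled estimate and the directly simulated value plays the role of the 9%, 4%, and 5% figures quoted in the abstract.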

Keywords: MCNP5, Monte Carlo simulations, efficiency calculation, absolute efficiency, activity estimation, Cs-137

Procedia PDF Downloads 107
13373 A Study for Area-Level Mosquito Abundance Prediction by Using a Supervised Machine Learning Point-Level Predictor

Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes

Abstract:

In the literature, data-driven approaches for mosquito abundance prediction rely on supervised machine learning models trained with historical in-situ measurements. The drawback of this approach is that once the model is trained on point-level (specific x, y coordinates) measurements, its predictions again refer to the point level. These point-level predictions reduce the applicability of such solutions, since many early-warning and mitigation applications need predictions at an area level, such as a municipality or village. In this study, we apply a data-driven predictive model, which relies on public, open satellite Earth Observation and geospatial data and is trained with historical point-level in-situ measurements of mosquito abundance. We then propose a methodology to extend a point-level predictive model to a broader area-level prediction. Our methodology relies on random spatial sampling of the area of interest (similar to a Poisson hard-core process), obtaining the EO and geomorphological information for each sample, making the point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions and analyze it in order to understand which parameters have a positive or negative impact on it. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area by relying on point-level prediction and to provide qualitative insights regarding the expected performance of the area-level prediction. We applied our methodology to historical data (for Culex pipiens) from two areas of interest (the Veneto region of Italy and Central Macedonia in Greece). In both cases, the results were consistent. The mean mosquito abundance of a given area can be estimated with accuracy similar to that of the point-level predictor, sometimes even better. The density of the samples used to represent an area has a positive effect on performance; in contrast, the raw number of sampling points, without the size of the area, is not informative regarding performance. Additionally, we saw that the distance between the sampling points and the real in-situ measurements used for training did not strongly affect the performance.
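The aggregation step can be sketched as follows. Plain uniform sampling is used here in place of a Poisson hard-core process (which would additionally enforce a minimum distance between samples), and `point_predictor` is a placeholder for the trained point-level model.

```python
import random

def area_abundance(point_predictor, bounds, n_samples, seed=0):
    """Estimate area-level abundance as the mean of point-level
    predictions at uniformly sampled locations within `bounds`,
    given as ((xmin, xmax), (ymin, ymax))."""
    rng = random.Random(seed)
    (xmin, xmax), (ymin, ymax) = bounds
    preds = []
    for _ in range(n_samples):
        x = rng.uniform(xmin, xmax)
        y = rng.uniform(ymin, ymax)
        preds.append(point_predictor(x, y))
    return sum(preds) / len(preds)
```

As the abstract notes, what matters is the sampling density relative to the area: more samples per unit area give a better Monte Carlo estimate of the area mean.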

Keywords: mosquito abundance, supervised machine learning, culex pipiens, spatial sampling, west nile virus, earth observation data

Procedia PDF Downloads 134
13372 Happiness of Undergraduate Nursing Students, College of Nursing, Ratchaburi, Thailand

Authors: Paveenapat Nithitantiwat, Kwanjai Pataipakaipet

Abstract:

The purpose of this research was to study the happiness level of nursing students at Boromarajonani College of Nursing, Ratchaburi, Thailand. A purposive sample of 652 first- to fourth-year nursing students was used in this descriptive study. The instruments were questionnaires developed by the researcher, covering demographic data and nursing students' perceptions of healthcare, safety, life security, family, pride in oneself, education and activities, dormitories and the environment in the college, and how to improve their happiness. Frequencies, percentages, means, and t-tests were used to analyze the data. The results showed that family and moral values were the important things in nursing students' lives, and the mean happiness level was high. The first-year nursing students had a higher mean happiness score than the fourth-year, second-year, and third-year students, respectively. Therefore, nursing students realize that the important things in their lives are family and Buddhism's teaching; in addition, dharma is a guideline for both academic achievement and success in life.

Keywords: happiness, nursing students, nursing students’ perceptions, bachelor program

Procedia PDF Downloads 311
13371 Digitalization and High Audit Fees: An Empirical Study Applied to US Firms

Authors: Arpine Maghakyan

Abstract:

The purpose of this paper is to study the relationship between the level of industry digitalization and audit fees, especially the relationship between Big 4 auditor fees and the industry digitalization level. On the one hand, automation of business processes decreases internal control weaknesses and manual mistakes and increases work effectiveness and integration. On the other hand, it may cause serious misstatements, high business risks, or even bankruptcy, typically in the early stages of automation. Incomplete automation can bring high audit risk, especially if the auditor does not fully understand the client's business automation model. Higher audit risk will consequently cause higher audit fees, and higher audit fees for clients with a high automation level are more pronounced in Big 4 auditors' behavior. Using data on US firms from 2005-2015, we found that industry-level digitalization interacts with auditor quality in its effect on audit fees. Moreover, the choice of a Big 4 or non-Big 4 auditor is correlated with the client's industry digitalization level. A Big 4 client with a higher digitalization level pays more than one with a low digitalization level. In addition, a highly digitalized firm with a Big 4 auditor pays a higher audit fee than a non-Big 4 client. We use audit fees and firm-specific variables from the Audit Analytics and Compustat databases. We analyze the collected data using fixed effects regression models, with Wald tests for sensitivity checks, to determine the connections between technology use in business and audit fees. We control for firm size, complexity, inherent risk, profitability, and auditor quality. We chose a fixed effects model because it makes it possible to control for variables that have not been or cannot be measured.
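The reason a fixed effects model controls for unmeasured firm characteristics can be illustrated with the within (demeaning) transformation for a single regressor: subtracting each firm's own mean removes any time-invariant firm effect before running pooled OLS. The panel data and variable names below are invented for the example; the actual study uses many regressors and controls.

```python
# Sketch of a firm fixed-effects (within) estimator with one regressor.

def within_transform(panel):
    """panel: {firm_id: [(x, y), ...]} -> flat list of demeaned pairs.
    Demeaning by firm wipes out each firm's time-invariant intercept."""
    out = []
    for obs in panel.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        out.extend((x - mx, y - my) for x, y in obs)
    return out

def ols_slope(pairs):
    """Pooled OLS slope on the demeaned data (no intercept needed)."""
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    return sxy / sxx

panel = {  # two firms with very different intercepts but a common slope
    "A": [(1, 12), (2, 14), (3, 16)],
    "B": [(1, 32), (2, 34), (3, 36)],
}
beta = ols_slope(within_transform(panel))
```

Even though firm B's outcomes sit far above firm A's, the within estimator recovers the common slope, which is exactly the property that motivates fixed effects when firm-level factors cannot be measured.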

Keywords: audit fees, auditor quality, digitalization, Big 4

Procedia PDF Downloads 288
13370 Numerical Modelling of the Influence of Meteorological Forcing on Water-Level in the Head Bay of Bengal

Authors: Linta Rose, Prasad K. Bhaskaran

Abstract:

Water-level information along the coast is very important for disaster management, navigation, shoreline management planning, coastal engineering and protection works, port and harbour activities, and a better understanding of near-shore ocean dynamics. The water-level variation along a coast arises from various factors, such as astronomical tides and meteorological and hydrological forcing. The study area is the Head Bay of Bengal, which is highly vulnerable to flooding events caused by monsoons, cyclones, and sea-level rise. The study aims to explore the extent to which wind and surface pressure can influence water-level elevation, in view of the low-lying topography of the coastal zones in the region. The ADCIRC hydrodynamic model was customized for the Head Bay of Bengal, discretized using flexible finite elements, and validated against tide gauge observations. Monthly mean climatological wind and mean sea level pressure fields from ERA-Interim reanalysis data were used as input forcing to simulate water-level variation in the Head Bay of Bengal, in addition to tidal forcing. The output water-level was compared against that produced using tidal forcing alone, so as to quantify the contribution of meteorological forcing to the water-level. The average contribution of meteorological fields to the water-level in January is 5.5% at a deep-water location and 13.3% at a coastal location. During July, when the monsoon winds are strongest in this region, this increases to 10.7% and 43.1%, respectively, at the deep-water and coastal locations. The model output was tested by varying the input conditions of the meteorological fields in an attempt to quantify the relative significance of wind speed and wind direction for the water-level. Under uniform wind conditions, the results showed a higher contribution of meteorological fields for south-west winds than for north-east winds at higher wind speeds. A comparison of the spectral characteristics of the output water-level with that generated by tidal forcing alone showed additional modes with seasonal and annual signatures. Moreover, the non-linear monthly mode was found to be weaker than in the tidal simulation, all of which points out that meteorological fields do not have much effect on the water-level at periods of less than a day and that they induce non-linear interactions between existing modes of oscillation. The study signifies the role of meteorological forcing under fair weather conditions and points out that a combination of multiple forcing fields, including tides, wind, atmospheric pressure, waves, precipitation, and river discharge, is essential for efficient and effective forecast modelling, especially during extreme weather events.
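One plausible way to express the percentage contributions quoted above (the abstract does not state its exact formula) is the relative difference between a tide-plus-meteorology run and a tide-only run; the water-level values below are invented, as real values would come from the ADCIRC runs.

```python
# Sketch of a percentage-contribution measure for meteorological
# forcing, under an assumed definition: relative difference between
# the combined run and the tide-only run.

def met_contribution_pct(wl_tide_met, wl_tide_only):
    """Percentage of the combined water-level attributable to
    meteorological forcing."""
    return 100.0 * abs(wl_tide_met - wl_tide_only) / abs(wl_tide_met)

# e.g. a coastal point in the monsoon season (invented values, metres):
contribution = met_contribution_pct(1.2, 0.9)
```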

Keywords: ADCIRC, Head Bay of Bengal, mean sea level pressure, meteorological forcing, water-level, wind

Procedia PDF Downloads 208
13369 Government Final Consumption Expenditure Financial Deepening and Household Consumption Expenditure NPISHs in Nigeria

Authors: Usman A. Usman

Abstract:

Undeniably, unlike the Classical side, the Keynesian perspective on the aggregate demand side has a significant position in the policy, growth, and welfare of Nigeria, due to government involvement and the ineffective demand of a population living with poor per capita income. This study investigates the effect of government final consumption expenditure and financial deepening on household and NPISHs final consumption expenditure, using data on Nigeria from 1981 to 2019. The study employed the ADF stationarity test, the Johansen cointegration test, and a Vector Error Correction Model. The results revealed that the coefficient of government final consumption expenditure has a positive effect on household consumption expenditure in the long run, and that there is a long-run and short-run relationship between gross fixed capital formation and household consumption expenditure. The coefficients of financial deepening (cpsgdp) and gross fixed capital formation posit a negative impact on household final consumption expenditure, while the coefficients of money supply (lm2gdp), another proxy for financial deepening, and of FDI have a positive effect on household final consumption expenditure in the long run. Therefore, this study recommends that, since gross fixed capital formation stimulates household consumption expenditure, a legal framework to support investment is a panacea for increasing household income and consumption and reducing poverty in Nigeria, and this should be a key central component of policy.

Keywords: household, government expenditures, vector error correction model, johansen test

Procedia PDF Downloads 47
13368 Establishing Econometric Modeling Equations for Lumpy Skin Disease Outbreaks in the Nile Delta of Egypt under Current Climate Conditions

Authors: Abdelgawad, Salah El-Tahawy

Abstract:

This paper aimed to establish econometric model equations for the Nile Delta region in Egypt, which will serve as a basis for future predictions of lumpy skin disease (LSD) outbreaks and their pathway in relation to climate change. Data on LSD outbreaks were collected from cattle farms located in the provinces representing the Nile Delta region from January 2015 to December 2015. The obtained results indicated that there was a significant association between the degree of LSD outbreaks and the investigated climate factors (temperature, wind speed, and humidity); the outbreaks peaked during June, July, and August and gradually decreased to the lowest rate in January, February, and December. The model depicted that increases in these climate factors were associated with an evident increase in LSD outbreaks in the Nile Delta of Egypt. The model validation process was done using the root mean square error (RMSE) and mean bias (MB), which compared the number of expected LSD outbreaks with the number of observed outbreaks and estimated the confidence level of the model. The value of RMSE was 1.38% and MB was 99.50%, confirming that this established model describes the current association between LSD outbreaks and changes in climate factors and can also be used as a basis for predicting LSD outbreaks under future climatic change.
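Under common definitions (the abstract does not give its exact formulas), the two validation statistics can be sketched as follows; the monthly outbreak counts are invented for illustration.

```python
import math

# Sketch of the two validation statistics: RMSE between predicted and
# observed outbreak counts, and mean bias (MB) as the ratio of mean
# predicted to mean observed values, in percent. These are assumed,
# textbook definitions, not necessarily the paper's exact ones.

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def mean_bias_pct(pred, obs):
    return 100.0 * (sum(pred) / len(pred)) / (sum(obs) / len(obs))

pred = [10, 14, 19, 25, 24, 18]   # hypothetical monthly LSD outbreaks
obs = [11, 13, 20, 26, 23, 18]
```

An MB close to 100% indicates that the model neither systematically over- nor under-predicts the outbreak counts, which is how the quoted 99.50% figure supports the model.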

Keywords: LSD, climate factors, Nile delta, modeling

Procedia PDF Downloads 278
13367 A Holistic Approach to Institutional Cyber Security

Authors: Mehmet Kargaci

Abstract:

It is more important to obtain correct information and transform it into knowledge in a proper way than merely to access information. Every person, organization, or government that holds such knowledge now becomes a target. Cyber security involves a range of measures to be taken from the individual to the national level. National institutions, meaning academic, military, and major public and private institutions, are very important for national security and thus need further cyber security measures. Traditional cyber security measures at the national level appear insufficient on their own, while individual measures remain limited in scope. It is concluded that the most appropriate way to prevent cyber vulnerabilities, beyond existing measures, is to develop institutional measures. This study examines the cyber security measures to be taken, especially in national institutions.

Keywords: cyber defence, information, critical infrastructure, security

Procedia PDF Downloads 525
13366 Investigating the Socio-ecological Impacts of Sea Level Rise on Coastal Rural Communities in Ghana

Authors: Benjamin Ankomah-Asare, Richard Adade

Abstract:

Sea level rise (SLR) poses a significant threat to coastal communities globally. Over the years, Ghana has implemented protective measures such as groynes and revetments in major cities and towns to act as barriers against sea waves and to prevent coastal erosion and flooding. For vulnerable rural coastal communities, planned retreat is often proposed; however, relocation costs are often underestimated because losses of future social and cultural value are not adequately taken into account. Through a mixed-methods approach combining qualitative interviews, surveys, and spatial analysis, the study examined the experiences of coastal rural communities in Ghana and assessed the effectiveness of relocation strategies in addressing the socio-economic and environmental challenges posed by sea level rise. The study revealed the devastating consequences of sea level rise on these communities, including increased flooding, erosion, and saltwater intrusion into freshwater sources. It also highlights the adaptive capacities within these communities and how factors such as infrastructure, economic activities, cultural heritage, and governance structures shape their resilience in the face of environmental change. While relocation can be an effective strategy for reducing the risks associated with sea level rise, the study recommends coupling it with community-led planning, participatory decision-making, and targeted support for vulnerable groups.

Keywords: sea level rise, relocation, socio-ecological impacts, rural communities

Procedia PDF Downloads 33
13365 Computational Prediction of the Effect of S477N Mutation on the RBD Binding Affinity and Structural Characteristic, A Molecular Dynamics Study

Authors: Mohammad Hossein Modarressi, Mozhgan Mondeali, Khabat Barkhordari, Ali Etemadi

Abstract:

The COVID-19 pandemic, caused by SARS-CoV-2, has raised significant concern worldwide due to its catastrophic effects on public health. SARS-CoV-2 infection is initiated by the binding of the receptor-binding domain (RBD) of the spike protein to the ACE2 receptor in the host cell membrane. Because the viral RNA-dependent polymerase complex is error-prone, the virus genome, including the coding region for the RBD, acquires new mutations, leading to the appearance of multiple variants. These variants can potentially impact transmission, virulence, antigenicity, and immune evasion. The S477N mutation, located in the RBD, has been observed in the SARS-CoV-2 Omicron (B.1.1.529) variant. In this study, we investigated the consequences of the S477N mutation at the molecular level using computational approaches such as molecular dynamics simulation, protein-protein interaction analysis, immunoinformatics, and free energy computation. We showed that substituting Ser with Asn increases the stability of the spike protein and its affinity for ACE2, and thus increases the transmission potential of the virus. This mutation changes the folding and secondary structure of the spike protein. It also reduces antibody neutralization, raising concern about re-infection, vaccine breakthrough, and therapeutic value.

Keywords: S477N, COVID-19, molecular dynamics, SARS-CoV-2 mutations

Procedia PDF Downloads 159
13364 Modelling Mode Choice Behaviour Using Cloud Theory

Authors: Leah Wright, Trevor Townsend

Abstract:

Mode choice models are crucial instruments in the analysis of travel behaviour. These models show the relationship between an individual's choice of transportation mode for a given O-D pair and the individual's socioeconomic characteristics, such as household size, income level, age and/or gender, together with the features of the transportation system. The most popular functional forms of these models are based on utility-based choice theory, which addresses uncertainty in the decision-making process through an error term. However, with the development of artificial intelligence, many researchers have taken different approaches to travel demand modelling, looking at neural networks, fuzzy logic, and rough set theory to develop improved mode choice formulations. The concept of cloud theory has recently been introduced to model decision-making under uncertainty. Unlike the previously mentioned theories, cloud theory recognises a relationship between randomness and fuzziness, two of the most common types of uncertainty. This research investigates the use of cloud theory in mode choice models and highlights the conceptual framework of a mode choice model based on it. Merging decision-making under uncertainty with mode choice models is state of the art, and the cloud theory model is expected to address the issues and concerns with the nested logit and to improve the design of mode choice models and their use in travel demand analysis.
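Cloud theory's coupling of randomness and fuzziness is usually illustrated with the forward normal cloud generator, which turns a qualitative concept described by expectation (Ex), entropy (En), and hyper-entropy (He) into quantitative "cloud drops". The concept and parameter values below are hypothetical illustrations, not the paper's model:

```python
import numpy as np

def normal_cloud(ex, en, he, n_drops, seed=0):
    """Forward normal cloud generator: produce n_drops (value, membership)
    pairs from expectation ex, entropy en, and hyper-entropy he."""
    rng = np.random.default_rng(seed)
    # Each drop draws its own entropy; this couples fuzziness (the bell-shaped
    # membership) with randomness (the per-drop variation in spread).
    en_prime = rng.normal(en, he, n_drops)
    x = rng.normal(ex, np.abs(en_prime))
    mu = np.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))
    return x, mu

# Hypothetical concept: "acceptable in-vehicle travel time" of about 30 min.
x, mu = normal_cloud(ex=30.0, en=5.0, he=0.5, n_drops=1000)
print(f"mean drop = {x.mean():.1f} min, max membership = {mu.max():.2f}")
```

In a mode choice setting, such clouds could represent travellers' fuzzy and random perception of attributes like travel time or cost, in place of the single deterministic error term of utility-based models.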

Keywords: cloud theory, decision-making, mode choice models, travel behaviour, uncertainty

Procedia PDF Downloads 368
13363 Surface Flattening Assisted with 3D Mannequin Based on Minimum Energy

Authors: Shih-Wen Hsiao, Rong-Qi Chen, Chien-Yu Lin

Abstract:

The topic of surface flattening plays a vital role in computer-aided design and manufacture. Surface flattening enables the production of 2D patterns and can be used in design and manufacturing to develop a 3D surface onto a 2D platform, especially in fashion design. This study describes surface flattening based on minimum energy methods according to the properties of different fabrics. Firstly, using the geometric features of a 3D surface, the less deformed area can be flattened onto a 2D platform by geodesics. Then, the strain energy accumulated in the mesh can be stably released by an approximate implicit method and a revised error function. In some cases, cutting the mesh to further release the energy is a common way to handle the situation and enhance the accuracy of the surface flattening, though this makes the obtained 2D pattern naturally develop significant cracks. When this methodology is applied to a 3D mannequin constructed with feature lines, it raises the level of computer-aided fashion design. Moreover, when different fabrics are used in fashion design, the shape of the 2D pattern must be revised according to the properties of the fabric. With this model, the outline of 2D patterns can be revised by distributing the strain energy, with different results for different fabric properties. Finally, this research uses some common design cases to illustrate and verify the feasibility of the methodology.
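The idea of releasing accumulated strain energy can be sketched with a spring model: each mesh edge is a spring whose rest length is its length on the 3D surface, and the 2D layout is relaxed to lower the total energy. The mesh, rest lengths, and the explicit gradient descent below are illustrative stand-ins for the paper's approximate implicit method:

```python
import numpy as np

def strain_energy(pts2d, edges, rest_lengths, k=1.0):
    """Spring-model strain energy of a 2D layout: each mesh edge is a
    spring whose rest length is its length on the 3D surface."""
    e = 0.0
    for (i, j), length in zip(edges, rest_lengths):
        d = np.linalg.norm(pts2d[i] - pts2d[j])
        e += 0.5 * k * (d - length) ** 2
    return e

def relax(pts2d, edges, rest_lengths, steps=2000, lr=0.01):
    """Release strain energy by explicit gradient descent (a simple
    stand-in for an approximate implicit solver)."""
    pts = pts2d.copy()
    for _ in range(steps):
        grad = np.zeros_like(pts)
        for (i, j), length in zip(edges, rest_lengths):
            v = pts[i] - pts[j]
            d = np.linalg.norm(v)
            g = (d - length) * v / d   # force along the spring direction
            grad[i] += g
            grad[j] -= g
        pts -= lr * grad
    return pts

# A unit square with one diagonal; rest lengths come from a slightly
# larger hypothetical 3D configuration (illustrative numbers).
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
rest = [1.05, 1.05, 1.05, 1.05, 1.45]
flat = relax(pts, edges, rest)
print(f"energy before: {strain_energy(pts, edges, rest):.5f}, "
      f"after: {strain_energy(flat, edges, rest):.5f}")
```

When the residual energy cannot be driven low enough, cutting edges (as the paper does) removes constraints and lets the remaining springs reach their rest lengths, at the cost of cracks in the pattern.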

Keywords: surface flattening, strain energy, minimum energy, approximate implicit method, fashion design

Procedia PDF Downloads 323
13362 Introduction of Integrated Image Deep Learning Solution and How It Brought Laboratorial Level Heart Rate and Blood Oxygen Results to Everyone

Authors: Zhuang Hou, Xiaolei Cao

Abstract:

The general public and medical professionals recognized the importance of accurately measuring and storing blood oxygen levels and heart rate during the COVID-19 pandemic. The demand for accurate contactless devices was motivated by the need to reduce cross-infection and by the shortage of conventional oximeters, partially due to the global supply chain issue. This paper evaluates the heart rate (HR) and oxygen saturation (SpO2) measurements of HealthyPai, a contactless mini program, against other wearable devices. In the HR study of 185 samples (81 in the laboratory environment, 104 in the real-life environment), the mean absolute error (MAE) ± standard deviation was 1.4827 ± 1.7452 in the lab and 6.9231 ± 5.6426 in the real-life setting. In the SpO2 study of 24 samples, the MAE ± standard deviation of the measurement was 1.0375 ± 0.7745. Our results validated that HealthyPai, utilizing the Integrated Image Deep Learning Solution (IIDLS) framework, can accurately measure HR and SpO2, providing quality at least comparable to other FDA-approved wearable devices on the market and surpassing consumer-grade and research-grade wearable standards.
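The MAE ± standard deviation metric used above is straightforward to compute from paired readings; the values below are hypothetical HR samples, not the study's measurements:

```python
import numpy as np

# Hypothetical paired heart-rate readings (bpm): reference device vs. the
# contactless measurement (illustrative values only).
reference = np.array([72, 68, 80, 95, 63, 77, 88, 70], dtype=float)
measured  = np.array([73, 67, 82, 93, 63, 78, 90, 69], dtype=float)

abs_err = np.abs(measured - reference)   # per-sample absolute error
mae = abs_err.mean()                     # mean absolute error
sd = abs_err.std(ddof=1)                 # sample standard deviation of |error|

print(f"MAE ± SD = {mae:.4f} ± {sd:.4f} bpm")
```

Reporting the standard deviation alongside the MAE shows whether the error is consistently small or dominated by occasional large misses, which matters when comparing lab and real-life settings as the paper does.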

Keywords: remote photoplethysmography, heart rate, oxygen saturation, contactless measurement, mini program

Procedia PDF Downloads 124
13361 Linear Prediction System in Measuring Glucose Level in Blood

Authors: Intan Maisarah Abd Rahim, Herlina Abdul Rahim, Rashidah Ghazali

Abstract:

Diabetes is a medical condition that can lead to various diseases such as stroke, heart disease, blindness, and obesity. In clinical practice, diabetic patients' apprehension about blood glucose examination is rather alarming, as some individuals describe the pinprick and pinch as painful. For patients with high glucose levels, pricking the fingers multiple times a day with a conventional glucose meter for close monitoring can be tiresome, time-consuming, and painful. Given these concerns, several non-invasive techniques have been used by researchers to measure blood glucose, including ultrasonic sensors, multisensory systems, absorbance and transmittance, bio-impedance, voltage intensity, and thermography. This paper discusses the application of near-infrared (NIR) spectroscopy as a non-invasive method for measuring glucose level, and the implementation of a linear system identification model for predicting the output data of the NIR measurement. In this study, the wavelengths considered are 1450 nm and 1950 nm, both of which carry the most reliable information on the presence of glucose in blood. A linear autoregressive moving average with exogenous input (ARMAX) model, with both unregularized and regularized methods, was then implemented to predict the output of the NIR measurement and to investigate the practicality of a linear system for this task. However, the system achieved only 50.11% accuracy, which is far from satisfactory.
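The core of an ARMAX fit can be illustrated with its simpler ARX cousin (the moving-average term is omitted for brevity), estimated by ordinary least squares. The input/output series below are synthetic stand-ins, not NIR absorbance or glucose data, and the coefficients are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for an input signal u (e.g., NIR absorbance) driving an
# output y (e.g., glucose reading): y_t = 0.6*y_{t-1} + 0.9*u_t + noise.
n = 300
u = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + 0.9 * u[t] + 0.1 * rng.normal()

# ARX(1) identification by least squares: regress y_t on [y_{t-1}, u_t].
X = np.column_stack([y[:-1], u[1:]])
a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]

print(f"estimated AR coefficient = {a:.3f}, input gain = {b:.3f}")
```

A full ARMAX model adds a moving-average noise term, which requires iterative estimation rather than a single least-squares solve, and regularization would add a penalty on the coefficient magnitudes.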

Keywords: diabetes, glucose level, linear, near-infrared, non-invasive, prediction system

Procedia PDF Downloads 149
13360 Estimation of Train Operation Using an Exponential Smoothing Method

Authors: Taiyo Matsumura, Kuninori Takahashi, Takashi Ono

Abstract:

The purpose of this research is to improve the convenience of waiting for trains at level crossings and stations, and to prevent accidents caused by forcible entry into level crossings, by providing level crossing users and passengers with information on when the next train will pass through or arrive. In this paper, we proposed methods for estimating train operation by means of an average value method, a variable response smoothing method, and an exponential smoothing method, on the basis of open data, which has low accuracy but distributes actual performance schedules in real time. We then examined the accuracy of the estimations. The results showed that applying the exponential smoothing method is valid.
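Exponential smoothing itself is simple: each new estimate blends the latest observation with the previous estimate. The delay values and the smoothing constant below are hypothetical, not the paper's open-data feed:

```python
def exponential_smoothing(observations, alpha):
    """Simple exponential smoothing: blend each new observation with the
    running estimate, weighted by smoothing constant alpha in (0, 1]."""
    estimate = observations[0]
    for x in observations[1:]:
        estimate = alpha * x + (1 - alpha) * estimate
    return estimate

# Hypothetical delays (seconds) of a train at recent checkpoints, as might
# be derived from real-time open schedule data (values illustrative).
delays = [30, 45, 40, 60, 55]
predicted_delay = exponential_smoothing(delays, alpha=0.5)
print(f"predicted delay at next crossing: {predicted_delay:.4f} s")
```

Larger alpha values react faster to the newest observations, which suits noisy but frequently updated open data; smaller values average out noise at the cost of lag.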

Keywords: exponential smoothing method, open data, operation estimation, train schedule

Procedia PDF Downloads 377
13359 Groundwater Level Modelling by ARMA and PARMA Models (Case Study: Qorveh Aquifer)

Authors: Motalleb Byzedi, Seyedeh Chaman Naderi Korvandan

Abstract:

Annual groundwater-level statistics from the current piezometers of the Qorveh plain were modelled with both ARMA and PARMA methods in this study, using the SAMS software. After the required tests were performed, the model with the minimum Akaike information criterion was selected for each piezometer. These models then made it possible to estimate future fluctuations at each piezometer. According to the results, the ARMA model was better suited to modelling the aquifer. It also became clear that the eastern parts of the aquifer had experienced more declines than other parts; it is therefore necessary to restrict abstraction in the critical parts and to increase supervision of the pumping rates of wells.
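The order-selection step, fitting candidate models and keeping the one with the minimum Akaike information criterion, can be sketched for a pure AR model estimated by least squares (a simplification of full ARMA/PARMA estimation in SAMS). The series below is synthetic, not Qorveh piezometer data, and the AIC uses the common Gaussian least-squares form:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic groundwater-level anomalies from a true AR(2) process
# (illustrative; real piezometer series would replace this).
n = 400
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + rng.normal()

def fit_ar_aic(y, p):
    """Fit AR(p) by least squares; return a simplified Gaussian AIC."""
    X = np.column_stack([y[p - k : len(y) - k] for k in range(1, p + 1)])
    target = y[p:]
    coef = np.linalg.lstsq(X, target, rcond=None)[0]
    resid = target - X @ coef
    n_obs = len(target)
    # AIC = n*log(RSS/n) + 2*p for Gaussian errors (constants dropped).
    return n_obs * np.log(np.sum(resid ** 2) / n_obs) + 2 * p

aics = {p: fit_ar_aic(y, p) for p in range(1, 6)}
best_p = min(aics, key=aics.get)
print(f"selected AR order by minimum AIC: {best_p}")
```

PARMA models extend this by letting the coefficients vary with the season (here, the month), which is why they are often compared against plain ARMA for hydrological series.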

Keywords: Qorveh plain, groundwater level, ARMA, PARMA

Procedia PDF Downloads 272
13358 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS

Authors: Eunsu Jang, Kang Park

Abstract:

In developing an armored ground combat vehicle (AGCV), analyzing its vulnerability (or survivability) against enemy attack is a very important step. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and to check whether a bullet can penetrate the armor of the AGCV, causing damage to internal components or crew. Penetration equations are derived from penetration experiments, which require much time and effort, and they usually hold only for the specific target material and the specific bullet type used in the experiments. Penetration simulation using ANSYS is therefore another option for calculating penetration depth, but the target must be modelled and the input parameters selected carefully to obtain an accurate result. This paper performed a sensitivity analysis of ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives must be balanced when adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize accuracy, the sensitivity of the input parameters, including mesh size, boundary conditions, material properties, and target diameter, was analyzed, and the RMS error against experimental data from published papers on penetration equations was calculated. To minimize calculation time, the parameter values obtained from the accuracy analysis were adjusted for optimized overall performance. The analysis found the following: 1) As the mesh size decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase. 2) As the target diameter decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress of the target material decreases, the penetration depth increases. 4) A boundary condition fixing only the side surface of the target gives a greater penetration depth than one fixing both the side and rear surfaces. Using these findings, the input parameters can be tuned to minimize the error between simulation and experiment, and penetration analysis can be done on a computer without actual experiments. Penetration experiment data are usually hard to obtain for security reasons, and published papers provide them only for a limited range of target materials. The next step of this research is to generalize this approach to anticipate penetration depth by interpolating the known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the early modelling and simulation stage of the AGCV design process.
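The accuracy side of the trade-off, scoring each candidate parameter setting by its RMS error against experimental depths, can be sketched as follows; the depth values and mesh sizes are hypothetical placeholders, not the paper's experimental data:

```python
import math

# Hypothetical penetration depths (mm): experimental reference vs. simulated
# results at three candidate mesh sizes (all values illustrative only).
experiment = [12.0, 18.5, 25.0, 31.0]
simulated = {
    0.9: [10.8, 17.0, 23.1, 29.0],   # coarse mesh: fast but less accurate
    0.7: [11.5, 18.0, 24.2, 30.2],
    0.5: [12.1, 18.7, 25.3, 31.4],   # fine mesh: closest, but slowest
}

def rms_error(pred, ref):
    """RMS error between predicted and reference depths."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref))

errors = {mesh: rms_error(depths, experiment) for mesh, depths in simulated.items()}
best_mesh = min(errors, key=errors.get)
print(f"mesh size with lowest RMS error: {best_mesh} mm")
```

In practice this score would be weighed against the corresponding calculation time, picking the coarsest mesh whose RMS error is still acceptable.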

Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis

Procedia PDF Downloads 381
13357 Assessment of Level of Sedation and Associated Factors Among Intubated Critically Ill Children in the Pediatric Intensive Care Unit of Jimma University Medical Center: A Fourteen-Month Prospective Observational Study, 2023

Authors: Habtamu Wolde Engudai

Abstract:

Background: Sedation can be provided to facilitate a procedure or to stabilize patients admitted to the pediatric intensive care unit (PICU). Sedation is often necessary to maintain optimal care for critically ill children requiring mechanical ventilation. However, sedation that is too deep or too light has its own adverse effects, and it is therefore important to monitor the level of sedation and maintain an optimal level. Objectives: To assess the level of sedation and associated factors among intubated critically ill children admitted to the PICU of JUMC, Jimma. Methods: A prospective observational study was conducted in the PICU of JUMC, beginning in September 2021, on 105 patients admitted to the PICU aged less than 14 years and with GCS > 8. Data were collected by residents and nurses working in the PICU and entered using EpiData Manager (version 4.6.0.2). Statistical analysis and chart creation were performed using SPSS version 26. Data are presented as means, percentages, and standard deviations. The assumptions of logistic regression were checked. To identify potential predictors, bi-variable logistic regression was run for each predictor against the outcome variable. A p-value of <0.05 was considered statistically significant. Findings are presented using figures, adjusted odds ratios (AOR), percentages, and a summary table. Result: The study included 105 critically ill children who were started on continuous or intermittent sedative drugs. Sedation level was assessed using the COMFORT scale three times per day. On this basis, suboptimal sedation was observed in 44.8% of children at baseline, 36.2% at eight hours, and 24.8% at sixteen hours. There was a significant association (p < 0.05) between suboptimal sedation and both the duration of mechanical ventilation and the rate of unplanned extubation; the Hosmer-Lemeshow test indicated an adequate model fit (p > 0.44).
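The bi-variable logistic regression step can be sketched as below. The data are synthetic, the predictor (ventilation days) and its effect size are illustrative rather than the Jimma PICU results, and the fit uses hand-rolled Newton (IRLS) iterations in place of SPSS:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data: days on mechanical ventilation as the predictor,
# suboptimal sedation (1/0) as the outcome (illustrative only).
n = 200
vent_days = rng.uniform(1, 14, n)
# Simulated truth: longer ventilation raises the odds of suboptimal sedation.
p_true = 1 / (1 + np.exp(-(-2.0 + 0.35 * vent_days)))
suboptimal = (rng.uniform(size=n) < p_true).astype(float)

# Bi-variable logistic regression fit by Newton's method (IRLS).
X = np.column_stack([np.ones(n), vent_days])
beta = np.zeros(2)
for _ in range(25):
    p_hat = 1 / (1 + np.exp(-X @ beta))
    W = p_hat * (1 - p_hat)                  # per-observation weights
    grad = X.T @ (suboptimal - p_hat)        # score vector
    hess = X.T @ (X * W[:, None])            # observed information
    beta += np.linalg.solve(hess, grad)

odds_ratio = np.exp(beta[1])   # odds ratio per additional ventilated day
print(f"estimated OR per day of ventilation: {odds_ratio:.2f}")
```

An odds ratio above 1 here reproduces, on synthetic data, the kind of positive association the study reports between suboptimal sedation and duration of mechanical ventilation.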

Keywords: level of sedation, critically ill children, pediatric intensive care unit, Jimma University

Procedia PDF Downloads 52
13356 Feasibility of Simulating External Vehicle Aerodynamics Using Spalart-Allmaras Turbulence Model with Adjoint Method in OpenFOAM and Fluent

Authors: Arpit Panwar, Arvind Deshpande

Abstract:

A study of external vehicle aerodynamics using the Spalart-Allmaras turbulence model with the adjoint method was conducted, considering the accessibility and ease of working with the Fluent module of ANSYS and with OpenFOAM. The objective of the study was to understand and analyze the possibility of bringing high-level aerodynamic simulation to the average consumer vehicle. A form factor of the BMW M6 vehicle was designed in SolidWorks and analyzed in OpenFOAM and Fluent. Being a one-equation model, Spalart-Allmaras provides a much faster convergence rate when combined with the adjoint method. Fluent, being commercial software, still does not allow solving the Spalart-Allmaras turbulence model with the adjoint method; hence, in Fluent the turbulence model was solved using the SIMPLE method. OpenFOAM, being open source, provides flexibility in simulation but is not user-friendly; it does, however, support solving the chosen turbulence model with the adjoint method. The simulations give acceptable drag values when validated against the percentage error in drag reported for a notchback vehicle model in an extensive simulation study presented at the 6th ANSA and μETA conference, Greece. The success of this approach would allow more aerodynamic vehicle body design to reach all segments of the automobile market, rather than being limited to high-end sports cars.

Keywords: Spalart-Allmaras turbulence model, OpenFOAM, adjoint method, SIMPLE method, vehicle aerodynamic design

Procedia PDF Downloads 192