Search results for: single error upset
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6283

5383 Investigation of Chlorophylls a and b Interaction with Inner and Outer Surfaces of Single-Walled Carbon Nanotube Using Molecular Dynamics Simulation

Authors: M. Dehestani, M. Ghasemi-Kooch

Abstract:

In this work, the adsorption of the chlorophyll a and b pigments in aqueous solution on the inner and outer surfaces of a single-walled carbon nanotube (SWCNT) has been studied using molecular dynamics simulation. The linear interaction energy algorithm has been used to calculate the binding free energy. The results show that both pigments adsorb favorably at both positions. Although the two pigments are closely similar, their interactions with the nanotube differ, a result that could be exploited to separate the pigments from one another. According to the interaction energies between the pigments and the carbon nanotube, the pigment-SWCNT interaction is stronger on the inner surface than on the outer surface. The interaction of the SWCNT with the phytol tail of the chlorophylls is stronger than its interaction with the porphyrin ring.
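
For readers unfamiliar with the linear interaction energy (LIE) approach named above, a minimal sketch of how such a binding free energy estimate can be assembled from MD ensemble averages is given below; the coefficient values and the energy samples are illustrative assumptions, not data from this study.

```python
import numpy as np

# Hypothetical per-frame interaction energies (kcal/mol) from bound and free MD runs
rng = np.random.default_rng(0)
vdw_bound, elec_bound = rng.normal(-45, 2, 500), rng.normal(-12, 1, 500)
vdw_free,  elec_free  = rng.normal(-30, 2, 500), rng.normal(-10, 1, 500)

# Standard LIE form: dG = alpha*<dV_vdW> + beta*<dV_elec> (+ gamma, omitted here)
alpha, beta = 0.18, 0.5          # commonly quoted empirical coefficients (assumption)
dG_bind = alpha * (vdw_bound.mean() - vdw_free.mean()) \
        + beta  * (elec_bound.mean() - elec_free.mean())
print(f"LIE binding free energy estimate: {dG_bind:.2f} kcal/mol")
```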

Keywords: adsorption, chlorophyll, interaction, molecular dynamics simulation, nanotube

Procedia PDF Downloads 229
5382 An Application of Vector Error Correction Model to Assess Financial Innovation Impact on Economic Growth of Bangladesh

Authors: Md. Qamruzzaman, Wei Jianguo

Abstract:

Over the past decade, it has been observed that financial development, through financial innovation, has not only accelerated the development of an efficient and effective financial system but has also acted as a catalyst in the economic development process. In this study, we explore how financial innovation drives economic growth in Bangladesh by using a Vector Error Correction Model (VECM) for the period 1990-2014. A cointegration test confirms the existence of a long-run association between financial innovation and economic growth. To investigate directional causality, we apply the Granger causality test; the estimates show that long-run growth is affected by capital flow from non-bank financial institutions and by inflation in the economy, whereas changes in the growth rate have no long-run impact on capital flow or on the level of inflation. Growth and market capitalization, as well as market capitalization and capital flow, confirm the feedback hypothesis. Variance decomposition suggests that any innovation in the financial sector can cause fluctuations in GDP in both the long run and the short run. Financial innovation promotes efficiency and reduces the cost of financial transactions in the financial system and can thereby boost the economic development process. The study proposes two policy recommendations for further development. First, an innovation-friendly financial policy should be formulated to encourage the adoption and diffusion of financial innovation in the financial system. Second, the operation of the financial and capital markets should be regulated through the implementation of rules and regulations that create a conducive environment.
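
As a rough illustration of the workflow described above (cointegration testing, VECM estimation, Granger causality), a sketch using statsmodels is shown below; the variable names and simulated series are placeholders standing in for the actual Bangladesh data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM
from statsmodels.tsa.stattools import grangercausalitytests

# Placeholder annual series standing in for GDP growth, NBFI capital flow,
# market capitalization and inflation (1990-2014 gives 25 observations).
rng = np.random.default_rng(1)
trend = np.cumsum(rng.normal(0.5, 1.0, 25))
data = pd.DataFrame({
    "gdp_growth": trend + rng.normal(0, 0.5, 25),
    "capital_flow": trend + rng.normal(0, 0.8, 25),
    "market_cap": trend + rng.normal(0, 0.7, 25),
    "inflation": rng.normal(6, 1.0, 25),
})

# Johansen test for the number of cointegrating relations
joh = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", joh.lr1)

# Fit the VECM with one cointegrating relation
res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(res.alpha)  # adjustment (error-correction) coefficients

# Pairwise Granger causality: does capital flow help predict GDP growth?
grangercausalitytests(data[["gdp_growth", "capital_flow"]], maxlag=2)
```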

Keywords: financial innovation, economic growth, GDP, financial institution, VECM

Procedia PDF Downloads 262
5381 An Electrochemical DNA Biosensor Based on Oracet Blue as a Label for Detection of Helicobacter pylori

Authors: Saeedeh Hajihosseini, Zahra Aghili, Navid Nasirizadeh

Abstract:

An innovative DNA electrochemical biosensor based on Oracet Blue (OB) as an electroactive label and a gold electrode (AuE) is presented for the detection of Helicobacter pylori. A single-stranded DNA probe with a thiol modification was covalently immobilized on the surface of the AuE by forming an Au-S bond. Differential pulse voltammetry (DPV) was used to monitor DNA hybridization by measuring the electrochemical reduction signal of the OB bound to double-stranded DNA (ds-DNA). Our results showed that the OB-based DNA biosensor has good potential for the detection of a single-base mismatch in the target DNA. The selectivity of the proposed DNA biosensor was further confirmed in the presence of non-complementary and complementary DNA strands. Under optimum conditions, the electrochemical signal had a linear relationship with the concentration of the target DNA ranging from 0.3 nmol L-1 to 240.0 nmol L-1, and the detection limit was 0.17 nmol L-1, with promising reproducibility and repeatability.
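
The linear range and detection limit quoted above are typically derived from a calibration curve; a minimal sketch of that calculation with made-up current readings is shown below, assuming the common 3σ/slope convention for the limit of detection (the abstract does not state which convention was used).

```python
import numpy as np
from scipy import stats

# Hypothetical DPV peak currents (µA) at known target-DNA concentrations (nmol/L)
conc = np.array([0.3, 1, 5, 20, 60, 120, 240], dtype=float)
current = np.array([0.021, 0.055, 0.26, 1.01, 3.05, 6.1, 12.2])

fit = stats.linregress(conc, current)
residual_sd = np.std(current - (fit.slope * conc + fit.intercept), ddof=2)
lod = 3 * residual_sd / fit.slope      # 3-sigma/slope convention
print(f"slope={fit.slope:.4f} µA per nmol/L, R²={fit.rvalue**2:.4f}, LOD≈{lod:.2f} nmol/L")
```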

Keywords: DNA biosensor, Oracet Blue, Helicobacter pylori, gold electrode (AuE)

Procedia PDF Downloads 261
5380 Neural Network Models for Actual Cost and Actual Duration Estimation in Construction Projects: Findings from Greece

Authors: Panagiotis Karadimos, Leonidas Anthopoulos

Abstract:

Predicting the actual cost and duration of construction projects remains a persistent problem for the construction sector. This paper addresses this problem with modern methods and data available from past public construction projects. Thirty-nine bridge projects constructed in Greece, with similar types of available data, were examined. Considering each project’s attributes together with the actual cost and the actual duration, correlation analysis was performed and the most appropriate predictive project variables were defined. Additionally, the most efficient subgroup of variables was selected with the WEKA application through its attribute selection function. The selected variables were used as input neurons for neural network models, which were constructed with the FANN Tool application. The optimum neural network model for predicting the actual cost produced a mean squared error of 3.84886e-05 and was based on the budgeted cost and the quantity of deck concrete. The optimum neural network model for predicting the actual duration produced a mean squared error of 5.89463e-05 and was also based on the budgeted cost and the quantity of deck concrete.
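
The study used the FANN Tool; purely as an illustration, the sketch below reproduces the same idea (two inputs, budgeted cost and deck concrete quantity, predicting actual cost) with scikit-learn's MLPRegressor and fabricated training data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

# Fabricated bridge-project data: [budgeted cost, deck concrete (m³)] -> actual cost
rng = np.random.default_rng(7)
X = np.column_stack([rng.uniform(0.5e6, 5e6, 39), rng.uniform(200, 2000, 39)])
y = X[:, 0] * rng.normal(1.08, 0.05, 39) + 150 * X[:, 1]   # synthetic "actual cost"

scaler_X, scaler_y = StandardScaler(), StandardScaler()
Xs = scaler_X.fit_transform(X)
ys = scaler_y.fit_transform(y.reshape(-1, 1)).ravel()

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(Xs, ys)
print("MSE (scaled targets):", mean_squared_error(ys, model.predict(Xs)))
```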

Keywords: actual cost and duration, attribute selection, bridge construction, neural networks, predicting models, FANN TOOL, WEKA

Procedia PDF Downloads 127
5379 Open Data for e-Governance: Case Study of Bangladesh

Authors: Sami Kabir, Sadek Hossain Khoka

Abstract:

Open Government Data (OGD) refers to all data produced by government that are accessible in a reusable way, free of cost, to anyone with Internet access. In line with the “Digital Bangladesh” vision of the Bangladesh government, the concept of open data has been gaining momentum in the country. Opening all government data in a digital and customizable format from a single platform can enhance e-governance and make government more transparent to the people. This paper presents a work-in-progress case study of the OGD portal of the Bangladesh Government, intended to link decentralized data. The initiative aims to facilitate e-services for citizens through this one-stop web portal. The paper further discusses ways of collecting data in digital format from relevant agencies with a view to making them publicly available through this single point of access. Finally, a possible layout of the web portal is presented.

Keywords: e-governance, one-stop web portal, open government data, reusable data, web of data

Procedia PDF Downloads 346
5378 Distributed Actor System for Traffic Simulation

Authors: Han Wang, Zhuoxian Dai, Zhe Zhu, Hui Zhang, Zhenyu Zeng

Abstract:

In traditional microscopic traffic simulation, various approaches have been suggested to implement single-agent behaviors such as lane changing and the intelligent driver model. However, for very large metropolitan areas, microscopic traffic simulation requires substantial resources and becomes time-consuming, whereas macroscopic traffic simulation aggregates trends of interest rather than individual vehicle traces. In this paper, we describe the architecture and implementation of an actor system for microscopic traffic simulation, which exploits the distributed architecture of modern-day cloud computing. The results demonstrate that our architecture achieves high performance and outperforms traditional microscopic simulation software on all tasks. To the best of our knowledge, this is the first system that enables single-agent behavior in macroscopic traffic simulation. We thus believe it contributes to a new type of traffic simulation system, one that can provide individual vehicle behaviors at the scale of macroscopic traffic simulation.
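
The abstract does not disclose implementation details, but the essence of an actor-based vehicle simulation can be sketched as below: each vehicle is an actor with its own mailbox, and the simulation advances by message passing. Threads stand in here for distributed cloud workers, and the car-following rule is a toy assumption.

```python
import queue
import threading

class VehicleActor:
    """Minimal actor: each vehicle processes messages from its own mailbox."""
    def __init__(self, vehicle_id, position=0.0, speed=10.0):
        self.vehicle_id = vehicle_id
        self.position = position
        self.speed = speed
        self.mailbox = queue.Queue()

    def send(self, message):
        self.mailbox.put(message)

    def run(self, steps):
        for _ in range(steps):
            msg = self.mailbox.get()          # e.g. {"dt": 1.0, "leader_gap": 25.0}
            if msg.get("leader_gap", 1e9) < 10.0:
                self.speed = max(self.speed - 2.0, 0.0)   # crude car-following rule
            self.position += self.speed * msg["dt"]

# one thread per actor; a real system would shard actors across cloud nodes
actors = [VehicleActor(i) for i in range(3)]
threads = [threading.Thread(target=a.run, args=(5,)) for a in actors]
for t in threads:
    t.start()
for a in actors:
    for _ in range(5):
        a.send({"dt": 1.0, "leader_gap": 30.0})
for t in threads:
    t.join()
print([round(a.position, 1) for a in actors])
```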

Keywords: actor system, cloud computing, distributed system, traffic simulation

Procedia PDF Downloads 184
5377 A Comparative Study of Optimization Techniques and Models to Forecasting Dengue Fever

Authors: Sudha T., Naveen C.

Abstract:

Dengue is a serious public health issue that imposes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating for a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of sudden disease outbreak control efforts. This study uses environmental data from two U.S. Federal Government agencies, the National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention. Based on environmental data that describe changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, including handling outliers and missing values, so that the data are ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. In the third phase, machine learning models such as the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, model performance is evaluated using the Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). The goal is to select an optimization strategy with the fewest errors, lowest cost, highest productivity, or maximum potential results. Optimization is widely employed in a variety of fields, including engineering, science, management, mathematics, finance, and medicine. An effective optimization method based on Harmony Search and an integrated Genetic Algorithm is introduced for input feature selection, and it shows a marked improvement in the model's predictive accuracy. The predictive models built on the Huber Regressor perform best for both optimization and prediction.
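
To make the model-comparison step concrete, the sketch below evaluates the regressors named above on a synthetic weekly-cases series using the same error metrics (MSE, MAE, RMSE); the features and data are invented stand-ins for the NOAA/CDC environmental variables.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Synthetic stand-ins for temperature, precipitation and a vegetation index
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 3))
y = 40 + 6 * X[:, 0] + 3 * X[:, 1] - 2 * X[:, 2] + rng.normal(0, 2, 300)  # weekly cases

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [("Huber", HuberRegressor()), ("SVR", SVR()),
                    ("GBR", GradientBoostingRegressor(random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    mae = mean_absolute_error(y_te, pred)
    print(f"{name}: MSE={mse:.2f}  MAE={mae:.2f}  RMSE={np.sqrt(mse):.2f}")
```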

Keywords: deep learning model, dengue fever, prediction, optimization

Procedia PDF Downloads 54
5376 Performance Analysis of Geophysical Database Referenced Navigation: The Combination of Gravity Gradient and Terrain Using Extended Kalman Filter

Authors: Jisun Lee, Jay Hyoun Kwon

Abstract:

As an alternative way to compensate for INS (inertial navigation system) errors in a non-GNSS (Global Navigation Satellite System) environment, geophysical database referenced navigation is being studied. In this study, gravity gradient and terrain data were combined to complement the weakness of a single geophysical data source as well as to improve the stability of the positioning. The main process for compensating the INS error using the geophysical database was constructed on the basis of the EKF (Extended Kalman Filter). In detail, two types of combination methods, a centralized and a decentralized filter, were applied to examine the pros and cons of each algorithm and to find more robust results. The performance of each navigation algorithm was evaluated in simulation by assuming that an aircraft equipped with a precise geophysical database and sensors flies along nine different trajectories. In particular, the results were compared with those from navigation referenced to a single geophysical database to check the improvement due to the combination of heterogeneous geophysical databases. It was found that the overall navigation performance was improved, but not all trajectories produced better navigation results when gravity gradient and terrain data were combined. It was also found that the centralized filter generally showed more stable results. This is because the weight allocation for the decentralized filter could not be optimized due to local inconsistencies in the geophysical data. In the future, switching between geophysical data sources or combining different navigation algorithms will be necessary to obtain more robust navigation results.
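
The core of database-referenced navigation is the EKF measurement update, in which a terrain or gravity-gradient value looked up from the map at the INS-predicted position corrects the state. A stripped-down one-dimensional sketch of that step is given below; the map function and noise levels are illustrative assumptions.

```python
import numpy as np

def terrain_height(x):
    """Stand-in geophysical database: terrain height as a function of position."""
    return 50.0 + 10.0 * np.sin(0.01 * x)

def ekf_update(x_pred, P_pred, z_meas, R):
    """One EKF measurement update against the map (1-D position state)."""
    H = np.array([[10.0 * 0.01 * np.cos(0.01 * x_pred[0])]])  # Jacobian of terrain_height
    z_pred = terrain_height(x_pred[0])
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x_pred + (K * (z_meas - z_pred)).ravel()
    P_new = (np.eye(1) - K @ H) @ P_pred
    return x_new, P_new

# INS-predicted position with drift, corrected by a map-matched measurement
x_pred, P_pred = np.array([1005.0]), np.array([[25.0]])       # truth is 1000 m
z_meas = terrain_height(1000.0) + np.random.default_rng(0).normal(0, 0.5)
x_new, P_new = ekf_update(x_pred, P_pred, z_meas, np.array([[0.25]]))
print(x_new, P_new)
```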

Keywords: Extended Kalman Filter, geophysical database referenced navigation, gravity gradient, terrain

Procedia PDF Downloads 342
5375 Temperature Coefficients of the Refractive Index for Ge Film

Authors: Lingmao Xu, Hui Zhou

Abstract:

Ge film is widely used in infrared optical systems. Because of the special requirements of space applications, it is usually used at low temperatures. The refractive index of Ge film changes with temperature, which has a great effect on the manufacture of high-precision infrared optical films. Specimens of Ge single-layer film were deposited on ZnSe substrates by the EB-PVD method. Over the temperature range 80 K to 300 K, the transmittance of the Ge single-layer film within 2-15 μm was measured every 20 K with a PerkinElmer FTIR cryogenic testing system. By full-spectrum inversion fitting, the relationship between refractive index and wavelength within 2-12 μm at different temperatures was obtained. This relationship is consistent with the Cauchy dispersion formula and can be fitted accordingly. The dependence of the refractive index of the Ge film on temperature and wavelength was then obtained by a fitting method based on the Cauchy formula. Finally, the design values obtained from the formula were compared with the measured spectra to verify the accuracy of the formula.
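
As an illustration of the Cauchy-formula fitting step, the sketch below fits n(λ) = A + B/λ² + C/λ⁴ to a few made-up refractive-index points; the coefficients are placeholders, not the measured Ge-film dispersion.

```python
import numpy as np
from scipy.optimize import curve_fit

def cauchy(lam_um, A, B, C):
    """Cauchy dispersion formula n(λ) = A + B/λ² + C/λ⁴ (λ in micrometres)."""
    return A + B / lam_um**2 + C / lam_um**4

# Made-up refractive-index data over 2-12 µm at one temperature
lam = np.linspace(2, 12, 11)
n_meas = cauchy(lam, 4.0, 0.4, 0.1) + np.random.default_rng(2).normal(0, 0.002, lam.size)

popt, _ = curve_fit(cauchy, lam, n_meas, p0=[4.0, 0.3, 0.0])
print("fitted A, B, C:", popt)
```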

Keywords: infrared optical film, low temperature, thermal refractive coefficient, Ge film

Procedia PDF Downloads 290
5374 A Comparison of Single of Decision Tree, Decision Tree Forest and Group Method of Data Handling to Evaluate the Surface Roughness in Machining Process

Authors: S. Ghorbani, N. I. Polushin

Abstract:

The machinability of workpieces (AISI 1045 steel, AA2024 aluminum alloy, A48 class 30 gray cast iron) in turning operations was investigated using different types of cutting tools (conventional, a cutting tool with holes in the toolholder, and a cutting tool filled with composite material) under dry conditions on a turning machine at different levels of spindle speed (630-1000 rpm), feed rate (0.05-0.075 mm/rev), depth of cut (0.05-0.15 mm) and tool overhang (41-65 mm). Experimentation was performed as per Taguchi’s orthogonal array. To evaluate the relative importance of the factors affecting surface roughness, a single decision tree (SDT), a decision tree forest (DTF) and the group method of data handling (GMDH) were applied.
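
A minimal sketch of the variable-importance step is shown below using scikit-learn, with a RandomForestRegressor standing in for the decision tree forest and fabricated Taguchi-style observations; GMDH is omitted because no standard library implementation is assumed here.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

# Fabricated observations: spindle speed, feed rate, depth of cut, tool overhang -> Ra
rng = np.random.default_rng(5)
X = np.column_stack([rng.uniform(630, 1000, 27), rng.uniform(0.05, 0.075, 27),
                     rng.uniform(0.05, 0.15, 27), rng.uniform(41, 65, 27)])
Ra = 0.5 + 8 * X[:, 1] + 2 * X[:, 2] + 0.005 * X[:, 3] + rng.normal(0, 0.05, 27)

names = ["speed", "feed", "depth", "overhang"]
for label, model in [("SDT", DecisionTreeRegressor(random_state=0)),
                     ("DTF", RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X, Ra)
    print(label, dict(zip(names, np.round(model.feature_importances_, 3))))
```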

Keywords: decision tree forest, GMDH, surface roughness, Taguchi method, turning process

Procedia PDF Downloads 434
5373 Adopting a Stakeholder Perspective to Profile Successful Sustainable Circular Business Approaches: A Single Case Study

Authors: Charleen von Kolpinski, Karina Cagarman, Alina Blaute

Abstract:

The circular economy concept is often framed by politicians, scientists and practitioners as the solution to the sustainability problems of our times. However, the focus of these discussions and publications is very often set on environmental and economic aspects. In contrast, the social dimension of sustainability has been neglected, and only a few recent and mostly conceptual studies have targeted the inclusion of social aspects and the SDGs in circular economy research. All stakeholders of this new circular system have to be included to represent a truly sustainable solution to all the environmental, economic and social challenges caused by the linear economic system. Hence, this empirical research aims to analyse, next to the environmental and economic dimensions, explicitly the social dimension of a sustainable circular business model. This inductive and explorative approach applies the single case study method. A multi-stakeholder view is adopted to shed light on the social aspects of the circular business model. Different stakeholder views, tensions between stakeholders and conflicts of interest are detected. Through semi-structured interviews with different stakeholders of the company, this study compares the different stakeholder views to profile the success factors of its business model in terms of sustainability implementation and to detect its shortcomings. These findings result in the development of propositions covering different social aspects of sustainable circular business model implementation. This study answers calls for empirical research on the social dimension of the circular economy and contributes to sustainable business model thinking in entrepreneurial contexts of the circular economy. It helps identify all relevant stakeholders and their needs in order to implement a sustainable circular business model successfully and inclusively. The single case study method has inherent limitations, as it covers only one enterprise with its specific business model. Therefore, more empirical studies are needed to research sustainable circular business models from multiple stakeholder perspectives, in different countries and industries. Future research can build on the propositions developed in this study and derive hypotheses to be tested.

Keywords: circular economy, single case study, social dimension, sustainable circular business model

Procedia PDF Downloads 169
5372 Novel Animal Drawn Wheel-Axle Mechanism Actuated Knapsack Boom Sprayer

Authors: Ibrahim O. Abdulmalik, Michael C. Amonye, Mahdi Makoyo

Abstract:

The manual knapsack sprayer is the most popular means of farm spraying in Nigeria, but it has its limitations. Apart from the human fatigue, which leads to unsteady walking steps, its field capacity is small, barely covering about 0.2 hectare per hour. The small swath implies that a sizeable farm would take several days to cover. Weather changes are erratic, and it is often desired to spray a large farm within hours or a few days for an even, uniform effect and to avoid adverse weather interference. It is also often required that a large farm be covered within a short period to avoid the re-emergence of weeds before crop emergence. Deployment of many knapsack operators to large farms has not been successful. Human error in maintaining equally spaced swaths usually results in overdosing where passes overlap and in unsprayed areas at swath edges. Spraying large farms therefore requires boom equipment with a larger swath, which assures reduced error in swath overlaps and spraying within the shortest possible time. Tractor boom sprayers would readily overcome these problems and achieve greater coverage, but they are not available in the country. Tractor hire for cultivation is very costly, and the attendant lack of spare parts and specialized maintenance technicians makes it difficult for farmers to engage tractors for cultivation, let alone consider the employment of a tractor boom sprayer. Animal traction in farming is predominant in Nigeria, especially in the northern part of the country, so the development of boom sprayers drawn by work animals implies the maximization of animal utilization in farming. The Hydraulic Equipment Development Institute, Kano, in keeping with its mandate of targeted R&D in hydraulic and pneumatic systems, has developed an animal-drawn knapsack boom sprayer with four nozzles that uses the axle mechanism of a two-wheeled cart to actuate the piston pumps of two knapsack sprayers, in line with the country's demand for appropriate technology. It is hoped that the introduction of this novel contrivance will enhance crop protection practice and lead to greater crop and food production in Nigeria.

Keywords: boom, knapsack, farm, sprayer, wheel axle

Procedia PDF Downloads 280
5371 Video Compression Using Contourlet Transform

Authors: Delara Kazempour, Mashallah Abasi Dezfuli, Reza Javidan

Abstract:

Video compression is used for channels with limited bandwidth and for storage devices with limited capacity. One of the most popular approaches in video compression is the use of transforms. The discrete cosine transform is one such method, but it suffers from problems such as blocking artifacts and noise, and its distortion adversely affects the compression ratio. The wavelet transform is another approach that balances compression and quality better than the cosine transform, but its ability to capture curved discontinuities is limited. Because of the importance of compression and the problems of the cosine and wavelet transforms, the contourlet transform has become attractive for video compression. In the proposed method, the contourlet transform is used for video image compression. The contourlet transform preserves image details better than the previous transforms because it is multi-scale and directional, and it can capture discontinuities such as edges; as a result, less information is lost than with the previous approaches. The contourlet transform provides a discrete-domain structure that is well suited to representing two-dimensional smooth images, and it produces compressed images with a high compression ratio along with texture and edge preservation. Finally, the results show that for the majority of the images, the mean square error and peak signal-to-noise ratio of the contourlet-based method are improved compared with the wavelet transform, but for most of the images the mean square error and peak signal-to-noise ratio of the cosine transform are still better than those of the contourlet-based method.
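
The comparison metrics used above can be computed directly; the sketch below evaluates the mean square error and peak signal-to-noise ratio for a pair of hypothetical 8-bit frames.

```python
import numpy as np

def mse_psnr(original, reconstructed, peak=255.0):
    """Mean square error and peak signal-to-noise ratio between two 8-bit frames."""
    err = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    psnr = float("inf") if err == 0 else 10 * np.log10(peak**2 / err)
    return err, psnr

rng = np.random.default_rng(4)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
decoded = np.clip(frame + rng.normal(0, 3, frame.shape), 0, 255).astype(np.uint8)
print("MSE=%.2f  PSNR=%.2f dB" % mse_psnr(frame, decoded))
```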

Keywords: video compression, contourlet transform, discrete cosine transform, wavelet transform

Procedia PDF Downloads 435
5370 Single Cell and Spatial Transcriptomics: A Beginners Viewpoint from the Conceptual Pipeline

Authors: Leo Nnamdi Ozurumba-Dwight

Abstract:

Messenger ribonucleic acid (mRNA) molecules carry the protein-coding information of the genome. These protein-encoding mRNA molecules, which collectively constitute the transcriptome, when analyzed by RNA sequencing (RNAseq), unveil the nature of gene expression. The obtained gene expression profiles provide clues to cellular traits and their dynamics, which can be studied in relation to function and responses. RNAseq is a practical concept in genomics as it enables the detection and quantitative analysis of mRNA molecules. Single-cell and spatial transcriptomics present complementary avenues for characterizing the genomic features of single cells and pooled cells from investigated biological tissue samples in disease conditions such as cancer, autoimmune diseases and hematopoietic diseases, among others. Single-cell transcriptomics allows a direct assessment of each building unit of tissues, the cell, during diagnosis and molecular gene expression studies. A typical technique to achieve this is single-cell RNA sequencing (scRNAseq), which enables high-throughput gene expression studies. However, this technique generates gene expression data for many cells without information on the cells' positional coordinates within the tissue. To resolve this setback, complementary pre-established tissue reference maps built with molecular and bioinformatics techniques are now used to produce both levels of data from a single round of scRNAseq analysis. This is an emerging methodological approach for integrative and increasingly reliable transcriptomics analysis. It can support in-situ analysis for a better understanding of tissue functional organization, unveil new biomarkers for early-stage detection of diseases and for therapeutic targets in drug development, and expose the nature of cell-to-cell interactions. These genomic signatures and characterizations are vital for clinical applications. Over the past decades, RNAseq has generated a wide array of information that is igniting breakthroughs and innovations in biomedicine. Spatial transcriptomics, on the other hand, operates at the tissue level and is used to study biological specimens with heterogeneous features. It reveals the gross identity of investigated mammalian tissues, which can then be used to study cell differentiation, track cell lineage trajectory patterns and behavior, and examine regulatory homeostasis in disease states. It also requires referenced positional analysis to assemble the genomic signatures that will be assessed from the single cells in the tissue sample. Together, these two approaches to transcriptomics, applied at different cellular scales and with appropriate resolution, have made the study of gene expression from mRNA molecules progressive and are helping to tackle health challenges head-on.

Keywords: transcriptomics, RNA sequencing, single cell, spatial, gene expression

Procedia PDF Downloads 117
5369 Prediction of PM₂.₅ Concentration in Ulaanbaatar with Deep Learning Models

Authors: Suriya

Abstract:

Rapid socio-economic development and urbanization have led to an increasingly serious air pollution problem in Ulaanbaatar (UB), the capital of Mongolia. PM₂.₅ pollution has become the most pressing aspect of UB air pollution. Therefore, monitoring and predicting the PM₂.₅ concentration in UB is of great significance for the health of the local people and for environmental management. To date, very few studies have used models to predict PM₂.₅ concentrations in UB. Using data from 0:00 on June 1, 2018, to 23:00 on April 30, 2020, we propose two deep learning models based on Bayesian-optimized LSTM (Bayes-LSTM) and CNN-LSTM. We utilized hourly observed data, including Himawari-8 (H8) aerosol optical depth (AOD), meteorology, and PM₂.₅ concentration, as input for the prediction of PM₂.₅ concentrations. The correlation strengths between meteorology, AOD, and PM₂.₅ were analyzed using the gray correlation analysis method; the performance improvement obtained by including AOD as an input was tested, and the performance of the models was evaluated using the mean absolute error (MAE) and root mean square error (RMSE). The prediction accuracies of the Bayes-LSTM and CNN-LSTM deep learning models were both improved when AOD was included as an input parameter. The improvement of the prediction accuracy of the CNN-LSTM model was particularly pronounced in the non-heating season; in the heating season, the prediction accuracy of the Bayes-LSTM model slightly improved, while that of the CNN-LSTM model slightly decreased. We propose two novel deep learning models for PM₂.₅ concentration prediction in UB, the Bayes-LSTM and CNN-LSTM deep learning models. Our work pioneers the use of AOD data from H8 and demonstrates that including AOD input data improves the performance of the two proposed deep learning models.
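
A minimal Keras sketch of the kind of LSTM regressor described above is given below; the window length, layer sizes and synthetic inputs (meteorology, AOD, lagged PM₂.₅) are assumptions for illustration, not the Bayesian-optimized configuration used in the study.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Synthetic samples: 24-hour windows of 3 features (e.g. temperature, AOD, lagged PM2.5)
rng = np.random.default_rng(6)
X = rng.normal(size=(500, 24, 3)).astype("float32")
y = (X[:, -1, 2] * 20 + X[:, -1, 0] * 5 + 60).astype("float32")   # next-hour PM2.5 stand-in

model = Sequential([LSTM(32, input_shape=(24, 3)), Dense(1)])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))   # [MSE, MAE] on the synthetic data
```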

Keywords: deep learning, AOD, PM2.5, prediction, Ulaanbaatar

Procedia PDF Downloads 41
5368 Development and Validation of High-Performance Liquid Chromatography Method for the Determination and Pharmacokinetic Study of Linagliptin in Rat Plasma

Authors: Hoda Mahgoub, Abeer Hanafy

Abstract:

Linagliptin (LNG) belongs to the dipeptidyl peptidase-4 (DPP-4) inhibitor class. DPP-4 inhibitors represent a new therapeutic approach for the treatment of type 2 diabetes in adults. The aim of this work was to develop and validate an accurate and reproducible HPLC method for the determination of LNG with high sensitivity in rat plasma. The method involved separation of both LNG and pindolol (internal standard) at ambient temperature on a Zorbax Eclipse XDB C18 column with a mobile phase composed of 75% methanol : 25% 0.1% formic acid (pH 4.1) at a flow rate of 1.0 mL.min-1. UV detection was performed at 254 nm. The method was validated in compliance with ICH guidelines and found to be linear in the range of 5-1000 ng.mL-1. The limit of quantification (LOQ) was found to be 5 ng.mL-1 based on 100 µL of plasma. The variations for intra- and inter-assay precision were less than 10%, and the accuracy values ranged between 93.3% and 102.5%. The extraction recovery (R%) was more than 83%. The method involved a single extraction step from a very small plasma volume (100 µL). The assay was successfully applied to an in-vivo pharmacokinetic study of LNG in rats that were administered a single oral dose of 10 mg.kg-1 LNG. The maximum concentration (Cmax) was found to be 927.5 ± 23.9 ng.mL-1. The area under the plasma concentration-time curve (AUC0-72) was 18285.02 ± 605.76 h.ng.mL-1. In conclusion, the good accuracy and low LOQ of the bioanalytical HPLC method were suitable for monitoring the full pharmacokinetic profile of LNG in rats. The main advantages of the method are its sensitivity, the small sample volume, the single-step extraction procedure and the short analysis time.
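
The pharmacokinetic parameters reported above (Cmax and AUC) are normally read off the concentration-time profile; a small sketch with invented sampling times and concentrations is shown below, using the linear trapezoidal rule for the AUC.

```python
import numpy as np

# Hypothetical plasma concentration-time profile after a 10 mg/kg oral dose
t = np.array([0, 0.5, 1, 2, 4, 8, 12, 24, 48, 72], dtype=float)           # hours
c = np.array([0, 310, 620, 927, 850, 600, 420, 210, 60, 12], dtype=float)  # ng/mL

cmax = c.max()
tmax = t[c.argmax()]
auc = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2)      # linear trapezoidal rule, AUC(0-72h)
print(f"Cmax={cmax:.1f} ng/mL at Tmax={tmax:.1f} h, AUC(0-72)={auc:.1f} h·ng/mL")
```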

Keywords: HPLC, linagliptin, pharmacokinetic study, rat plasma

Procedia PDF Downloads 237
5367 Track and Trace Solution on Land Certificate Production: Indonesian Land Certificate

Authors: Adrian Rifqi, Febe Napitupulu, Erdi Hermawan, Edwin Putra, Yang Leprilian

Abstract:

This article focuses on improving the production process of the Indonesian land certificate product printed at Perum Peruri, a state-owned enterprise. Based on the data obtained, there were several customer complaints about the 2019 land certificate production. These complaints are a negative mark for Perum Peruri among its loyal customers. Almost all of the complaints refer to ‘defective printouts and differences between the products in the packaging and the packaging labels, both in terms of type and quantity’. To overcome this problem, we intend to improve the production process with a focus on the complaint that ‘there is a difference between the products in the packaging and the packaging labels’. The improvements to the land certificate production process rely on weighing-scale technology and a QR code on the packaging label. In addition, using a QR code on the packaging label will facilitate the tracking of product data. With this method, we hope to reduce to 0% the mismatch between the products in the packaging and the packaging label in terms of quantity, type, and product number on the land certificate, as well as the error rate in dispatching land certificates, which are sent to many destinations. With this solution, we also hope to obtain precise data and real-time reports on the production of land certificates in the near future, so that track and trace can be implemented as the solution for land certificate production.

Keywords: land certificates, QR code, track and trace, packaging

Procedia PDF Downloads 151
5366 Implementation of A Treatment Escalation Plan During The Covid 19 Outbreak in Aneurin Bevan University Health Board

Authors: Peter Collett, Mike Pynn, Haseeb Ur Rahman

Abstract:

For the last few years across the UK, there has been a push towards implementing treatment escalation plans (TEP) for every patient admitted to hospital. The TEP is a paper form that is completed by a junior doctor and then countersigned by the consultant responsible for the patient's care. It is designed to address what level of care is appropriate for the patient in question at the point of entry to hospital, helping to decide whether the patient would benefit from ward-based, high-dependency or intensive care. TEPs are completed to ensure the patient's best interests are maintained and aim to facilitate difficult decisions which may be required at a later date. For example, when a frail patient with significant co-morbidities, unlikely to survive a pathology requiring an intensive care admission, is admitted to hospital, the decision can be made early that the patient would not benefit from an ICU admission. This decision can be reversed depending on the clinical course of the patient's admission. The TEP also promotes discussions with the patient regarding their wishes to receive certain levels of healthcare. This poster describes the steps taken in the Aneurin Bevan University Health Board (ABUHB) when implementing the TEP form. The team implementing the TEP form campaigned for its use to the board of directors. The directors were eager to hear of the experiences of other health boards that had implemented the TEP form. The team presented the data produced in a number of health boards and demonstrated the proposed form. Concern was raised regarding the legalities of the form and that it could upset patients and relatives if the form was not explained properly. This delayed the introduction of the TEP form, and further research and discussion were required. When COVID-19 reached the UK, the National Institute for Health and Care Excellence issued guidance stating that every patient admitted to hospital should be issued a TEP form. The TEP form was accelerated through the vetting process and was approved with immediate effect. The TEP form in ABUHB has now been in circulation for a month. An audit investigating its uptake and a survey gathering opinions have been conducted.

Keywords: acute medicine, clinical governance, intensive care, patient centered decision making

Procedia PDF Downloads 170
5365 Oxidation Assessment of Mayonnaise with Headspace Single-Drop Microextraction (HS-SDME) Coupled with Gas Chromatography-Mass Spectrometry (GC-MS) during Shelf-Life

Authors: Kooshan Nayebzadeh, Maryam Enteshari, Abdorreza Mohammadi

Abstract:

The oxidative stability of mayonnaise under different storage temperatures (4 and 25 ˚C) during a 6-month shelf-life was investigated by different analytical methods. In this study, headspace single-drop microextraction (HS-SDME) combined with gas chromatography-mass spectrometry (GC-MS), as a green, sensitive and rapid technique, was applied to evaluate the oxidative state of mayonnaise. Oxidation changes in the oil extracted from the mayonnaise were monitored by analytical parameters including the peroxide value (PV), p-anisidine value (p-AnV), thiobarbituric acid value (TBA), and oxidative stability index (OSI). Hexanal and heptanal, as secondary volatile oxidation compounds, were determined by the HS-SDME/GC-MS method in the mayonnaise matrix. The rate of oxidation in the mayonnaise samples increased during storage and was greater at 25 ˚C. The p-anisidine and TBA values gradually increased during the 6 months, while the OSI decreased. At both temperatures, the content of hexanal was higher than that of heptanal during all storage periods. Significant increases in hexanal and heptanal concentrations were also observed in the second and sixth months of storage. Hexanal concentrations were highest in the mayonnaise stored at 25 ˚C throughout the storage period. It can be concluded that the temperature and duration of storage are decisive parameters affecting the quality and oxidative stability of mayonnaise. Additionally, hexanal content is a more reliable oxidation indicator than heptanal, and HS-SDME/GC-MS can be applied in a quick and simple manner.

Keywords: oxidative stability, mayonnaise, headspace single-drop microextraction (HS-SDME), shelf-life

Procedia PDF Downloads 416
5364 Performance Evaluation of Diverging Diamond Interchange Compared to Single Point Diamond Interchange in Riyadh City

Authors: Maged A. Mogalli, Abdullah I. Al-Mansour, Seongkwan Mark Lee

Abstract:

In recent decades, population growth has gradually exceeded transportation infrastructure growth, and today’s transportation professionals face the challenge of meeting the mobility needs of a rising population, especially in the absence of adequate public transport, as is the case in Saudi Arabia. Traffic congestion can be decreased by implementing appropriate alternative interchange designs such as the diverging diamond interchange (DDI) and the single point diamond interchange (SPDI). In this paper, an evaluation of newly implemented DDIs at the interchange of Makkah Road with Prince Turki Road and the interchange of King Khaled Road with Prince Saud Ibn Mohammed Ibn Mugrin Road in Riyadh city was carried out. The comparison between the DDI and SPDI is conducted by evaluating different measures of effectiveness (MOE), namely stop delay, average queue length, and number of stops. Each interchange type was evaluated for traffic flow at peak hours using the micro-simulation program Synchro/SimTraffic to measure these MOEs. The results of this study show that the DDI performs better than the SPDI in terms of stop delay, average queue length, and number of stops. The stop delay for the SPDI is about three times that of the DDI. Also, the average queue length for the SPDI is approximately twice that of the DDI. Furthermore, the number of stops for the SPDI is about twice that of the DDI.

Keywords: single point diamond interchange, diverging diamond interchange, measures of effectiveness, simulation

Procedia PDF Downloads 251
5363 Solar Architecture of Low-Energy Buildings for Industrial Applications

Authors: P. Brinks, O. Kornadt, R. Oly

Abstract:

This research focuses on the optimization of glazed surfaces and the assessment of possible solar gains in industrial buildings. Existing window rating methods for single windows were evaluated, and a new method for a simple analysis of energy gains and losses through single windows was introduced. Furthermore, extensive transient building simulations were carried out to appraise the performance of low-cost polycarbonate multi-cell sheets in combination with typical buildings for industrial applications. The energy-saving potential was determined mainly by optimizing the orientation and area of such glazing systems in dependence on their thermal qualities. The impact on critical aspects such as summer overheating and daylight illumination was also considered, to ensure user comfort and to avoid additional energy demand for lighting or cooling. In this way, the simulated heating demand could be reduced by up to one third compared with the traditional architecture of industrial halls, which relies mainly on skylights.

Keywords: solar architecture, passive solar building design, glazing, low-energy buildings, industrial buildings

Procedia PDF Downloads 231
5362 Halal Authentication for Some Product Collected from Jordanian Market Using Real-Time PCR

Authors: Omar S. Sharaf

Abstract:

A pig-specific assay targeting the mitochondrial 12S rRNA (mt-12S rDNA) gene was developed to detect material from pork in different products collected from the Jordanian market. PCR products of 359 bp and 531 bp were successfully amplified from the pig cyt b gene, while amplification using the mt-12S rDNA gene produced a single band with a molecular size of 456 bp. In the present work, PCR amplification of mitochondrial cytochrome b DNA was shown to be a suitable tool for the rapid detection of pig DNA. 100 samples from different dairy, gelatin and chocolate-based products and 50 samples of baby food formula were collected and tested for the presence of any pig derivatives. It was found that 10% of chocolate-based products, 12% of gelatin products, 56% of dairy products and 5.2% of baby food formula showed a single band from the mt-12S rDNA gene.

Keywords: halal food, baby infant formula, chocolate based products, PCR, Jordan

Procedia PDF Downloads 526
5361 Pilot-Assisted Direct-Current Biased Optical Orthogonal Frequency Division Multiplexing Visible Light Communication System

Authors: Ayad A. Abdulkafi, Shahir F. Nawaf, Mohammed K. Hussein, Ibrahim K. Sileh, Fouad A. Abdulkafi

Abstract:

Visible light communication (VLC) is a new approach to optical wireless communication proposed to relieve the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate pilot-assisted channel estimation for DC-biased optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to improve the bit-error-rate (BER) performance of PACE-DCO-OFDM. Results show that the DCO-OFDM system based on the PACE scheme achieves better BER performance than the conventional system without pilot-assisted channel estimation. Simulation results show that the proposed PACE-DCO-OFDM based on the LMMSE algorithm can estimate the channel more accurately and achieves better BER performance than LS-based PACE-DCO-OFDM and the traditional system without PACE. At a signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10⁻⁴ for LMMSE-PACE and 4.2×10⁻³ for LS-PACE, while it is about 2×10⁻¹ for the system without the PACE scheme.
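
For orientation, the sketch below contrasts an LS estimate with a simplified LMMSE smoothing of it on a toy frequency-domain channel; the channel correlation model and SNR are placeholder assumptions, not the VLC channel used in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
n_pilots, snr_db = 16, 25
snr = 10 ** (snr_db / 10)

# Toy correlated frequency-domain channel across pilot subcarriers
R_hh = np.array([[np.exp(-abs(i - j) / 4) for j in range(n_pilots)] for i in range(n_pilots)])
h = np.linalg.cholesky(R_hh + 1e-9 * np.eye(n_pilots)) @ rng.normal(size=n_pilots)

x_p = np.ones(n_pilots)                          # known pilot symbols
noise = rng.normal(0, np.sqrt(1 / snr), n_pilots)
y_p = h * x_p + noise

h_ls = y_p / x_p                                 # least-squares estimate
w = R_hh @ np.linalg.inv(R_hh + (1 / snr) * np.eye(n_pilots))
h_lmmse = w @ h_ls                               # simplified LMMSE smoothing of the LS estimate

for name, est in [("LS", h_ls), ("LMMSE", h_lmmse)]:
    print(name, "MSE:", np.mean((est - h) ** 2))
```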

Keywords: channel estimation, OFDM, pilot-assist, VLC

Procedia PDF Downloads 173
5360 High Accuracy Analytic Approximations for Modified Bessel Functions I₀(x)

Authors: Pablo Martin, Jorge Olivares, Fernando Maass

Abstract:

A method to obtain analytic approximations for special functions of interest in engineering and physics is described here. Each approximate function is valid for every positive value of the variable, and the accuracy is high and increases with the number of parameters to be determined. The general technique is shown through an application to the modified Bessel function of order zero, I₀(x). The form of the approximation and the calculation of its parameters are performed with the simultaneous use of the power series and the asymptotic expansion. As in the Padé method, rational functions are used, but here they are combined with other elementary functions such as fractional powers and hyperbolic, trigonometric and exponential functions. The elementary function is determined by requiring that the approximate function be a bridge between the power series and the asymptotic expansion. In the case of the I₀(x) function, two analytic approximations have already been determined. The simplest one is (1+x²/4)⁻¹/⁴(1+0.24273x²) cosh(x)/(1+0.43023x²). Its parameters were determined using the leading term of the asymptotic expansion and two coefficients of the power series, and the maximum relative error is 0.05. In a second case, two terms of the asymptotic expansion and four coefficients of the power series were used, and the maximum relative error is 0.001 at x≈9.5. Approximations with much higher accuracy will also be shown. In conclusion, a new technique is described to obtain analytic approximations to functions of interest in the sciences; the approximations have high accuracy, are valid for every positive value of the variable, can be integrated and differentiated like ordinary functions, and can be calculated easily even with a regular pocket calculator.
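
The simplest approximation quoted above can be checked numerically against scipy's I₀; the sketch below evaluates the reported expression over a grid of positive x and prints the maximum relative error found.

```python
import numpy as np
from scipy.special import i0

def i0_approx(x):
    """Approximation quoted in the abstract: (1+x²/4)^(-1/4)(1+0.24273x²)cosh(x)/(1+0.43023x²)."""
    return (1 + x**2 / 4) ** -0.25 * (1 + 0.24273 * x**2) * np.cosh(x) / (1 + 0.43023 * x**2)

x = np.linspace(1e-6, 50, 20000)
rel_err = np.abs(i0_approx(x) - i0(x)) / i0(x)
print(f"max relative error ≈ {rel_err.max():.3f} at x ≈ {x[rel_err.argmax()]:.2f}")
```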

Keywords: analytic approximations, mathematical-physics applications, quasi-rational functions, special functions

Procedia PDF Downloads 247
5359 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous work on balanced panel data estimation within the context of fitting a PDRM for banks' audit fees. The estimation of such a model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the three estimators considered were assessed on the basis of the variance, absolute bias (ABIAS), mean square error (MSE) and root mean square error (RMSE) of the parameter estimates. Eighteen different models under different specified conditions were fitted, and the best-fitting model is that of the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, which establishes that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.
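
The joint and conditional LM tests derived in the paper are bespoke, but the flavor of the Monte Carlo exercise can be sketched with the standard Breusch-Pagan LM test from statsmodels (a substitute, not the authors' statistics), counting rejections under a linear heteroscedasticity design.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(9)
n_obs, n_rep, rejections = 200, 1000, 0

for _ in range(n_rep):
    x = rng.uniform(0, 1, n_obs)                 # uniform regressor, as in the design
    sigma = 0.5 + 2.0 * x                        # linear heteroscedasticity function
    y = 1.0 + 2.0 * x + rng.normal(0, sigma)
    X = sm.add_constant(x)
    resid = sm.OLS(y, X).fit().resid
    lm_stat, lm_pval, _, _ = het_breuschpagan(resid, X)
    rejections += lm_pval < 0.05

print(f"empirical power of the LM test at 5%: {rejections / n_rep:.3f}")
```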

Keywords: audit fee, heteroscedasticity, Lagrange multiplier test, Monte Carlo scheme, periodicity

Procedia PDF Downloads 137
5358 Knowledge-Attitude-Practice Survey Regarding High Alert Medication in a Teaching Hospital in Eastern India

Authors: D. S. Chakraborty, S. Ghosh, A. Hazra

Abstract:

Objective: Medication errors are a reality in all settings where medicines are prescribed, dispensed and used. High Alert Medications (HAM) are those that bear a heightened risk of causing significant patient harm when used in error. We conducted a knowledge-attitude-practice survey among residents working in a teaching hospital to assess the ground situation with regard to the handling of HAM. Methods: We plan to approach 242 residents, among the approximately 600 currently working in the hospital, through purposive sampling. Residents in all disciplines (clinical, paraclinical and preclinical) are being targeted. A structured questionnaire, pretested on 5 volunteer residents, is being used for data collection. The questionnaire is being administered to residents individually through face-to-face interviews, by two raters, while they are on duty but not during rush hours. Results: Of the 156 residents approached so far, data from 140 have been analyzed, the rest having refused participation. Although background knowledge exists for the majority of respondents, awareness levels regarding HAM are moderate, and attitudes are non-uniform. The proportion of respondents able to correctly identify most (>80%) HAM in three common settings (accident and emergency, obstetrics and the intensive care unit) is less than 70%. Several potential errors in practice have been identified. The study is ongoing. Conclusions: The situation requires corrective action. There is an urgent need to improve awareness regarding HAM for the sake of patient safety. The pharmacology department can take the lead in designing an awareness campaign with support from the hospital administration.

Keywords: high alert medication, medication error, questionnaire, resident

Procedia PDF Downloads 125
5357 Estimation of Maize Yield by Using a Process-Based Model and Remote Sensing Data in the Northeast China Plain

Authors: Jia Zhang, Fengmei Yao, Yanjing Tan

Abstract:

The accurate estimation of crop yield is of great importance for food security. In this study, a process-based mechanistic model was adapted to estimate the yield of a C4 crop by modifying the carbon metabolic pathway in the photosynthesis sub-module of the RS-P-YEC (Remote-Sensing-Photosynthesis-Yield Estimation for Crops) model. The yield was calculated by multiplying the net primary productivity (NPP) by the harvest index (HI) derived from the ratio of grain to stalk yield. The modified RS-P-YEC model was used to simulate maize yield in the Northeast China Plain during the period 2002-2011. Statistical data on maize yield from the study area were used to validate the simulated results at the county level. The results showed that the Pearson correlation coefficient (R) between the simulated yield and the statistical data was 0.827 (P < 0.01), and the root mean square error (RMSE) was 712 kg/ha with a relative error (RE) of 9.3%. From 2002 to 2011, the yield of the maize planting zone in the Northeast China Plain increased, with a smaller coefficient of variation (CV). The spatial pattern of simulated maize yield was consistent with the actual distribution in the Northeast China Plain, with an increasing trend from the northeast to the southwest. Hence, the results demonstrate that the modified process-based model coupled with remote sensing data is suitable for spatial-scale yield prediction of maize in the Northeast China Plain.
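
The validation statistics reported above (Pearson R, RMSE, relative error) can be reproduced from county-level pairs of simulated and observed yields as in the sketch below; the two arrays are placeholders, not the study data.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder county-level yields (kg/ha): simulated by the model vs. statistical records
simulated = np.array([6900, 7200, 7800, 8100, 7500, 6600, 8400, 7000], dtype=float)
observed  = np.array([7100, 6900, 8000, 7700, 7900, 6400, 8800, 7300], dtype=float)

r, p = pearsonr(simulated, observed)
rmse = np.sqrt(np.mean((simulated - observed) ** 2))
re = 100 * np.mean(np.abs(simulated - observed) / observed)     # mean relative error, %
print(f"R={r:.3f} (p={p:.3f}), RMSE={rmse:.0f} kg/ha, RE={re:.1f}%")
```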

Keywords: process-based model, C4 crop, maize yield, remote sensing, Northeast China Plain

Procedia PDF Downloads 365
5356 Modeling of Age Hardening Process Using Adaptive Neuro-Fuzzy Inference System: Results from Aluminum Alloy A356/Cow Horn Particulate Composite

Authors: Chidozie C. Nwobi-Okoye, Basil Q. Ochieze, Stanley Okiy

Abstract:

This research reports on the modeling of the age hardening process using an adaptive neuro-fuzzy inference system (ANFIS). The age hardening output (hardness) was predicted using ANFIS. The input parameters were the ageing time, temperature and percentage composition of cow horn particles (CHp%). The results show that the correlation coefficient (R) of the predicted hardness values versus the measured values was 0.9985. Subsequently, values outside the experimental data points were predicted. When the temperature was kept constant and the other input parameters were varied, the average relative error of the predicted values was 0.0931%. When the temperature was varied and the other input parameters kept constant, the average relative error of the hardness predictions was 80%. The results show that ANFIS with coarse experimental data points for learning is not very effective in predicting process outputs in the age hardening operation of the A356 alloy/CHp particulate composite. The fine experimental data required by ANFIS make it more expensive for the modeling and optimization of age hardening operations of the A356 alloy/CHp particulate composite.

Keywords: adaptive neuro-fuzzy inference system (ANFIS), age hardening, aluminum alloy, metal matrix composite

Procedia PDF Downloads 147
5355 Unified Power Quality Conditioner Presentation and Dimensioning

Authors: Abderrahmane Kechich, Othmane Abdelkhalek

Abstract:

Static converters behave as nonlinear loads that inject harmonic currents into the grid and increase the consumption of reactive power. On the other hand, the increased use of sensitive equipment requires the application of sinusoidal voltages. As a result, electrical power quality control has become a major concern in the field of power electronics. In this context, the unified power quality conditioner (UPQC) was developed. It combines both series and parallel structures: the series filter can protect sensitive loads and compensate for voltage disturbances such as voltage harmonics, voltage dips or flicker, while the shunt filter compensates for current disturbances such as current harmonics, reactive currents and imbalance. This dual capability makes it one of the most appropriate devices. Calculating the parameters is an important step and at the same time is not easy; for that reason, several researchers have relied on a trial-and-error method for calculating the parameters, but this method is difficult for beginning researchers, especially for the controller parameters. This paper therefore gives a mathematical way to calculate almost all of the UPQC parameters without resorting to trial and error. The paper also gives a new approach for calculating the PI regulator parameters, with the aim of obtaining a stable UPQC able to compensate for disturbances acting on the waveforms of the line voltage and load current in order to improve the electrical power quality.

Keywords: UPQC, shunt active filter, series active filter, PI controller, PWM control, dual-loop control

Procedia PDF Downloads 395
5354 Effect of Post Hardening on PVD Coated Tools

Authors: Manjinder Bajwa, Mahipal Singh, Ashish Tulli

Abstract:

In this research, the effects of varying cutting parameters, design parameters and heat treatment processes on the cutting performance (tool life) of a PVD-coated tool were studied. To compare these phenomena, a single-coated tool and a multi-coated tool were analyzed after a suitable heat treatment process. TNMG-shaped inserts with a single coating of TiCN and a multi-coating of TiAlN/TiN were developed on a tungsten carbide substrate. These coated inserts were then successfully annealed and normalized at a temperature of 350°C for 30 minutes, and their cutting performance was evaluated from the flank wear obtained after turning mild steel. The results showed that heat treatment had a favorable impact on the tool life of the coated inserts and also led to an increase in the micro-hardness of the tool coatings and a decrease in the wear rate.

Keywords: PVD coatings, flank wear, micro-hardness, annealing, normalizing

Procedia PDF Downloads 344