Search results for: total error rate
16542 Image Distortion Correction Method of 2-MHz Side Scan Sonar for Underwater Structure Inspection
Authors: Youngseok Kim, Chul Park, Jonghwa Yi, Sangsik Choi
Abstract:
The 2-MHz Side Scan SONAR (SSS) attached to a boat for the inspection of underwater structures is affected by shaking, which makes it difficult to determine the exact scale of structural damage. In this study, a motion sensor was attached to the inside of the 2-MHz SSS to obtain roll, pitch, and yaw direction data, and an image stabilization tool was developed to correct the sonar image. Experiments confirmed that reliable data can be obtained, with an average error rate of 1.99% between the measured value and the actual distance. This makes it possible to acquire accurate sonar data for inspecting damage in underwater structures.
Keywords: image stabilization, motion sensor, safety inspection, sonar image, underwater structure
Procedia PDF Downloads 280
16541 Cellular Traffic Prediction through Multi-Layer Hybrid Network
Authors: Supriya H. S., Chandrakala B. M.
Abstract:
Deep learning based models have recently been adopted successfully for network traffic prediction. However, training a deep learning model for various prediction tasks is considered one of the critical tasks for various reasons. This research work develops a Multi-Layer Hybrid Network (MLHN) for network traffic prediction and analysis; MLHN comprises three distinct networks that handle different inputs for custom feature extraction. Furthermore, an optimized and efficient parameter-tuning algorithm is introduced to enhance parameter learning. MLHN is evaluated on the “Big Data Challenge” dataset using the Mean Absolute Error, Root Mean Square Error, and R² as metrics; furthermore, MLHN's efficiency is demonstrated through comparison with a state-of-the-art approach.
Keywords: MLHN, network traffic prediction
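For reference, the three evaluation metrics named above can be computed as in the minimal sketch below; the traffic values in it are illustrative, not taken from the “Big Data Challenge” dataset.

    import numpy as np

    def evaluation_metrics(y_true, y_pred):
        # MAE, RMSE and R^2 -- the three metrics used to evaluate MLHN
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        mae = np.mean(np.abs(y_true - y_pred))
        rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
        ss_res = np.sum((y_true - y_pred) ** 2)
        ss_tot = np.sum((y_true - y_true.mean()) ** 2)
        return mae, rmse, 1.0 - ss_res / ss_tot

    # Illustrative traffic volumes (arbitrary units)
    actual = [120.0, 135.5, 128.2, 150.9]
    predicted = [118.3, 137.0, 126.8, 149.5]
    print(evaluation_metrics(actual, predicted))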
Procedia PDF Downloads 89
16540 Nasopharyngeal Carriage of Streptococcus pneumoniae in Children under 5 Years of Age before Introduction of Pneumococcal Vaccine (PCV 10) in Urban and Rural Sindh
Authors: Muhammad Imran Nisar, Fyezah Jehan, Tauseef Akhund, Sadia Shakoor, Kanwal Nayani, Furqan Kabir, Asad Ali, Anita Zaidi
Abstract:
Pneumococcal Vaccine-10 (PCV 10) was included in the Expanded Program of Immunization (EPI) in Sindh, Pakistan in February 2013. This study was carried out immediately before the introduction of PCV 10 to establish baseline pneumococcal carriage and prevalent serotypes in the nasopharynx of children 3-11 months of age in an urban and a rural community in Sindh, Pakistan. An additional sample of children aged 12 to 59 months was drawn from the urban community. Nasopharyngeal specimens were collected from a random sample of children and processed in a central laboratory in Karachi. Pneumococci were cultured on 5% Sheep Blood Agar, and serotyping was performed using the CDC standardized sequential multiplex PCR assay on bacterial colonies. Serotypes were then categorized into vaccine (PCV-10 and PCV-13) types and non-vaccine types. A total of 670 children were enrolled. The carriage rate for pneumococcus based on culture positivity was 74% and 79.5% in the infant group in Karachi and Matiari, respectively. The carriage rate was 78.2% for children aged 12 to 59 months in Karachi. The proportion of PCV 10 serotypes in infants was 38.8% and 33.5% in Karachi and Matiari, respectively; in the older age group in Karachi, the proportion was 30.6%. The most common serotypes were 6A, 6B, 23F, 19A and 18C. This survey establishes vaccine and non-vaccine serotype carriage rates in a vaccine-naïve pediatric population in rural and urban communities in Sindh province. Annually planned surveys in the same communities will track the change in carriage rate after the introduction and uptake of PCV 10 in these communities.
Keywords: nasopharyngeal carriage, Pakistan, PCV10, pneumococcus
Procedia PDF Downloads 300
16539 Effect of Saturation and Deformation Rate on Split Tensile Strength for Various Sedimentary Rocks
Authors: D. K. Soni
Abstract:
A study of the engineering properties of stones, i.e., compressive strength, tensile strength, modulus of elasticity, density, and hardness, was carried out to explore the possibility of optimum utilization of stone. Laboratory test results on equally dimensioned discs of the stone show a considerable variation in computed split tensile strength with varied rates of deformation. Hence, the effect of strain rate on the tensile strength of a sandstone and a limestone under wet and dry conditions has been studied experimentally using the split tensile strength test technique. It has been observed that the tensile strength of these stones is very much dependent on the rate of deformation, particularly in the dry state. On saturation, the value of split tensile strength reduced considerably, depending upon the structure of the rock and the amount of water absorption.
Keywords: sedimentary rocks, split tensile test, deformation rate, saturation rate, sandstone, limestone
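The abstract does not reproduce the working formula, but for a disc of diameter D and thickness t loaded diametrically to a failure load P, the split tensile (Brazilian) strength is conventionally computed as

    \sigma_t = \frac{2P}{\pi D t}

so the reported variation with deformation rate enters through the measured failure load P. This is the standard form of the test equation, not one quoted from the paper.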
Procedia PDF Downloads 409
16538 Profitability Assessment of Granite Aggregate Production and the Development of a Profit Assessment Model
Authors: Melodi Mbuyi Mata, Blessing Olamide Taiwo, Afolabi Ayodele David
Abstract:
The purpose of this research is to create empirical models for assessing the profitability of granite aggregate production in aggregate quarries in Akure, Ondo State. In addition, an artificial neural network (ANN) model and multivariate predictive models for granite profitability were developed in the study. A formal survey questionnaire was used to collect data for the study. The data extracted from the case study mine include granite marketing operations, royalty, production costs, and mine production information. The following methods were used to achieve the goal of this study: descriptive statistics, MATLAB 2017, and SPSS 16.0 software for analyzing and modeling the data collected from granite traders in the study areas. The prediction accuracy of the ANN and multivariate regression models was compared using the coefficient of determination (R²), root mean square error (RMSE), and mean square error (MSE). Owing to the high prediction error of the regression model, the model evaluation indices revealed that the ANN model was more suitable for predicting generated profit in a typical quarry. More quarries in Nigeria's southwest region and other geopolitical zones should be considered to improve ANN prediction accuracy.
Keywords: national development, granite, profitability assessment, ANN models
Procedia PDF Downloads 101
16537 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy
Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu
Abstract:
The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on the reliability analysis of onboard sensors to evaluate their location accuracy over time. The analysis utilizes field failure data and employs the Weibull distribution to determine reliability and, in turn, to understand improvements or degradations over a period of time. The analysis begins by scrutinizing the location accuracy error, which is the root mean square (RMS) error of the differences between ground control point coordinates observed on the product and on the map, and identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is utilized to determine whether the data exhibit an infant stage or have transitioned into the operational phase; the shape parameter beta plays a crucial role in identifying this stage. Additionally, determining the exact start of the operational phase and the end of the infant stage poses another challenge, as it is crucial to eliminate residual infant mortality or wear-out from the model, since either can significantly increase the total failure rate. To address this, the well-established statistical Laplace test is applied to infer the behavior of sensors and to accurately ascertain the duration of the different phases of the lifetime and the time required for stabilization. This approach also helps in understanding whether the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data, and whether the thresholds for the infant period and wear-out phase are accurately estimated, by validating the data in individual phases with Weibull distribution curve-fitting analysis. Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insights into the reliability of individual sensors with regard to location accuracy over the required period of time, but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of the different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insights into the reliability during the operational phase and the commencement of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor location accuracy performance, contributing to enhanced accuracy in satellite-based applications.
Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis
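A minimal sketch of the two statistical steps described above, using hypothetical failure times rather than the study's field data: the Laplace trend test to probe for an infant-mortality phase, and a two-parameter Weibull fit whose shape parameter beta discriminates infant mortality (beta < 1), a constant failure rate (beta near 1), and wear-out (beta > 1).

    import numpy as np
    from scipy import stats

    # Hypothetical times at which location-accuracy failures were logged
    # (months since launch); not data from the study.
    failure_times = np.array([3.1, 5.4, 8.2, 12.9, 14.3, 20.5, 26.0, 31.7])
    T = 36.0                                  # observation window (months)

    # Laplace trend test: U < 0 suggests a decreasing failure rate
    # (improvement after an infant stage), U > 0 a deteriorating one.
    n = failure_times.size
    U = (failure_times.mean() / T - 0.5) * np.sqrt(12 * n)
    print(f"Laplace statistic U = {U:.2f}")

    # Two-parameter Weibull fit (location fixed at zero); beta is the
    # shape parameter discussed in the abstract.
    beta, _, eta = stats.weibull_min.fit(failure_times, floc=0)
    reliability_12 = np.exp(-(12.0 / eta) ** beta)
    print(f"beta = {beta:.2f}, eta = {eta:.1f}, R(12 months) = {reliability_12:.3f}")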
Procedia PDF Downloads 65
16536 Identification of Architectural Design Error Risk Factors in Construction Projects Using IDEF0 Technique
Authors: Sahar Tabarroki, Ahad Nazari
Abstract:
The design process is one of the key processes in construction projects. Although architects have the responsibility to produce complete, accurate, and coordinated documents, architectural design is accompanied by many errors. A design error occurs when the constraints and requirements of the design are not satisfied. Errors are potentially costly and time-consuming to correct if not caught early during the design phase, and they become expensive once they reach the construction documents or the construction phase. The aim of this research is to identify the risk factors of architectural design errors. First, a literature review of the design process was conducted, and then a questionnaire was designed to identify the risks and risk factors. The questions in the questionnaire were based on the “similar service description of study and supervision of architectural works” published by the “Vice Presidency of Strategic Planning & Supervision of I.R. Iran” as the basis of architects’ tasks. Second, the top 10 risks of architectural activities were identified. To determine the positions of the possible causes of risks with respect to architectural activities, these activities were located in a design process modeled with the IDEF0 technique. The research was carried out by choosing a case study, checking the design drawings, interviewing its architect and client, and providing a checklist in order to identify concrete examples of architectural design errors. The results revealed that activities such as “defining the current and future requirements of the project”, “studies and space planning”, and “time and cost estimation of the suggested solution” have a higher error risk than others. Moreover, the most important causes include “unclear goals of the client”, “time pressure from the client”, and “lack of knowledge of architects about the requirements of end-users”. In the case study, error detection was hindered by a lack of criteria and design standards and by a lack of coordination among them; “lack of coordination between the architectural design and the electrical and mechanical facilities”, “violation of standard dimensions and sizes in space design”, and “design omissions” were identified as the most important design errors.
Keywords: architectural design, design error, risk management, risk factor
Procedia PDF Downloads 130
16535 Adaptive Transmission Scheme Based on Channel State in Dual-Hop System
Authors: Seung-Jun Yu, Yong-Jun Kim, Jung-In Baik, Hyoung-Kyu Song
Abstract:
In this paper, a dual-hop relay scheme based on channel state is studied. In the conventional relay scheme, a relay uses the same modulation method regardless of channel state; in the proposed scheme, the relay uses an adaptive modulation method with reference to channel state. If the channel state is poor, the relay discards the latter 2 bits and uses Quadrature Phase Shift Keying (QPSK) modulation. If the channel state is good, the relay modulates the received symbols into 16-QAM symbols using 4 bits. The performance of the proposed scheme in terms of Symbol Error Rate (SER) and throughput is analyzed.
Keywords: adaptive transmission, channel state, dual-hop, hierarchical modulation, relay
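A minimal sketch of the adaptation rule, assuming an SNR-threshold decision and standard Gray mappings; the 12 dB threshold and the mappings are illustrative choices, and the paper's hierarchical 16-QAM construction is not reproduced here.

    import numpy as np

    def select_modulation(snr_db, threshold_db=12.0):
        # Good channel -> 16-QAM (4 bits/symbol); poor channel -> QPSK (2 bits)
        return "16QAM" if snr_db >= threshold_db else "QPSK"

    def modulate(bits, scheme):
        if scheme == "QPSK":
            b = bits.reshape(-1, 2)
            return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)
        b = bits.reshape(-1, 4)                  # 16-QAM, Gray-mapped
        levels = np.array([-3, -1, 3, 1])        # indexed by 2*b_msb + b_lsb
        i = levels[2 * b[:, 0] + b[:, 1]]
        q = levels[2 * b[:, 2] + b[:, 3]]
        return (i + 1j * q) / np.sqrt(10)        # unit average energy

    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, 16)
    scheme = select_modulation(snr_db=9.5)       # poor channel -> QPSK
    print(scheme, modulate(bits, scheme))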
Procedia PDF Downloads 380
16534 Feature Location Restoration for Under-Sampled Photoplethysmogram Using Spline Interpolation
Authors: Hangsik Shin
Abstract:
The purpose of this research is to restore the feature locations of under-sampled photoplethysmograms using spline interpolation and to investigate the feasibility of feature shape restoration. We obtained a 10 kHz-sampled photoplethysmogram and decimated it to generate under-sampled datasets with sampling frequencies of 5 kHz, 2.5 kHz, 1 kHz, 500 Hz, 250 Hz, 25 Hz and 10 Hz. To investigate the restoration performance, we interpolated the under-sampled signals back to 10 kHz and then compared the feature locations with those of the 10 kHz-sampled photoplethysmogram. The features were the upper and lower peaks of the photoplethysmography waveform. The results showed that the time differences were dramatically decreased by interpolation; the location error was less than 1 ms for both feature types. In the 10 Hz-sampled cases, the location error was also decreased considerably; however, it was still over 10 ms.
Keywords: peak detection, photoplethysmography, sampling, signal reconstruction
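A minimal sketch of the restoration step on a synthetic PPG-like waveform (the study used recorded 10 kHz photoplethysmograms); cubic splines are one common choice of interpolating spline, assumed here.

    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import find_peaks

    fs_ref, fs_low = 10_000, 250              # reference and under-sampled rates (Hz)
    t_ref = np.arange(0, 2.0, 1.0 / fs_ref)
    # Synthetic PPG-like waveform standing in for the recorded signal
    ppg = np.sin(2 * np.pi * 1.2 * t_ref) + 0.3 * np.sin(2 * np.pi * 2.4 * t_ref + 0.8)

    # Decimate, then restore to the reference rate by cubic-spline interpolation
    step = fs_ref // fs_low
    restored = CubicSpline(t_ref[::step], ppg[::step])(t_ref)

    # Compare upper-peak locations before and after restoration
    p_ref, _ = find_peaks(ppg)
    p_res, _ = find_peaks(restored)
    err_ms = 1000 * abs(t_ref[p_ref[0]] - t_ref[p_res[0]])
    print(f"first-peak location error: {err_ms:.3f} ms")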
Procedia PDF Downloads 368
16533 Evaluation of Requests and Outcomes of Magnetic Resonance Imaging Assessing for Cauda Equina Syndrome at a UK Trauma Centre
Authors: Chris Cadman, Marcel Strauss
Abstract:
Background: In 2020, the University Hospital Wishaw in the United Kingdom became the centre for trauma and orthopaedics within its health board. This resulted in the majority of patients with suspected cauda equina syndrome (CES) being assessed and imaged at this site, putting an increased demand on MR imaging and displacing other previous activity. Following this transition, imaging requests for CES did not always follow national guidelines and were often missing important clinical and safety information. There also appeared to be a very low positive scan rate compared with previously reported studies. In an attempt to improve patient selection and reduce the burden of CES imaging at this site, a clinical audit was performed. Methods: A total of 250 consecutive patients imaged to assess for CES were evaluated. Patients had to have presented acutely to either the emergency or orthopaedic department with a presenting complaint of suspected CES. Patients were excluded if they were not admitted acutely or were assessed by other clinical specialities. In total, 233 patients were included. Requests were assessed for an appropriate clinical history, accurate and complete clinical assessment, and MRI safety information. Clinical assessment was allocated a score of 1-6 based on information relating to history of pain, level of pain, dermatomes/myotomes affected, peri-anal paraesthesia/anaesthesia, anal tone and post-void bladder volume, with each element scoring one point. Images were assessed for positive findings of CES, acquired spinal stenosis or nerve root compression. Results: Overall, 73% of requests had a clear clinical history of CES. The urgency of the request for imaging was given in 23% of cases. The mean clinical assessment score was 3.7 out of a total of 6. Overall, 2% of scans were positive for CES, 29% showed acquired spinal stenosis and 30% showed nerve root compression. For patients with CES, 75% had acute neurological signs compared with 68% of the study population. CES patients had a mean clinical history score of 5.3 compared with 3.7 for the study population. Overall, 95% of requests had appropriate MRI safety information. Discussion: This study included 233 patients who underwent specialist assessment and referral for MR imaging for suspected CES. Despite the serious nature of this condition, a large proportion of imaging requests did not have a clear clinical query of CES, and the level of urgency was not given, which could potentially lead to a delay in imaging and treatment. Clinical examination was often also incomplete, which can make triaging of patients presenting with similar symptoms challenging. The positive rate for CES was only 2%, much below other studies, which reported positive rates of 6–40%, with a large meta-analysis finding a mean positive rate of 19%. These findings demonstrate an opportunity to improve the quality of imaging requests for suspected CES. This may help to improve patient selection for imaging and result in a positive rate for CES imaging that is more in line with other centres.
Keywords: cauda equina syndrome, acute back pain, MRI, spine
Procedia PDF Downloads 11
16532 Associations between Physical Activity and Risk Factors for Type II Diabetes in Prediabetic Adults
Authors: Rukia Yosuf
Abstract:
Diabetes is a national healthcare crisis related to both macrovascular and microvascular complications. We hypothesized that higher levels of physical activity are associated with lower total and visceral fat mass, lower systolic blood pressure, and increased insulin sensitivity. Participant inclusion criteria: 21-50 years old, BMI ≥ 30 kg/m², hemoglobin A1C 5.7-6.4, fasting glucose 100-125 mg/dL, and HOMA-IR ≥ 2.5. Exclusion criteria: history of diabetes, hypertension, HIV, renal disease, hearing loss, alcohol intake over four drinks daily, use of organic nitrates or PDE5 inhibitors, and decreased cardiac function. Total physical activity was measured using accelerometers, body composition using DXA, and insulin resistance via fsIVGTT. Among the clinical cardiometabolic risk factors, blood pressure and heart rate were obtained using a calibrated sphygmomanometer; anthropometric measures, fasting glucose, insulin, lipid profile, C-reactive protein, and BMP were analyzed using standard procedures. Within our study, we found correlations between levels of physical activity and cardiometabolic risk factors in a heterogeneous group of prediabetic adults. Patients with more physical activity had a higher degree of insulin sensitivity, lower blood pressure, lower total visceral adipose tissue, and lower total mass overall. Total physical activity levels showed small but significant correlations with systolic blood pressure, visceral fat, lean mass and insulin sensitivity. After adjusting for race, age, and gender using multiple regression, these associations were no longer significant, given our small sample size. More research into prediabetes could help decrease the population of diabetics overall. In the future, we could increase the sample size and conduct cross-sectional and longitudinal studies in various populations with prediabetes.
Keywords: diabetes, kidney disease, nephrology, prediabetes
Procedia PDF Downloads 187
16531 Enhancement of Coupler-Based Delay Line Filters Modulation Techniques Using Optical Wireless Channel and Amplifiers at 100 Gbit/s
Authors: Divya Sisodiya, Deepika Sipal
Abstract:
Optical wireless communication (OWC) is a relatively new technology in optical communication systems that allows for high-speed wireless optical communication. This research focuses on developing a cost-effective OWC system using a hybrid configuration of optical amplifiers. In addition to using EDFA amplifiers, a comparison study was conducted to determine which modulation technique is more effective for communication. This research examines the performance of an OWC system based on ASK and PSK modulation techniques by varying OWC parameters under various atmospheric conditions such as rain, mist, haze, and snow. Finally, the simulation results are discussed and analyzed.
Keywords: OWC, bit error rate, amplitude shift keying, phase shift keying, attenuation, amplifiers
Procedia PDF Downloads 132
16530 Experimental Study on Flooding Phenomena in a Three-Phase Direct Contact Heat Exchanger for the Utilisation in Solar Pond Applications
Authors: Hameed B. Mahood, Ali Sh. Baqir, Alasdair N. Campbell
Abstract:
Experiments to study the limit of flooding inception in a three-phase direct contact condenser have been carried out in a counter-current, small-diameter vertical condenser. The total column height was 70 cm, with a 4 cm diameter; only 48 cm was used as the active three-phase direct contact condenser height. Pentane vapour with three different initial temperatures (40, 43.5 and 47.5 °C) and water with a constant temperature (19 °C) were used as the dispersed phase and the continuous phase, respectively. Five different continuous phase mass flow rates and four different dispersed phase mass flow rates were tested throughout the experiments. A dimensionless correlation, based on the common flooding correlations reported previously, is proposed to calculate the up-flow flooding inception of the three-phase direct contact condenser.
Keywords: three-phase heat exchanger, condenser, solar energy, flooding phenomena
Procedia PDF Downloads 339
16529 Distributing Complementary Food Supplement - Yingyangbao Reducing the Anemia in Young Children in a County of Sichuan Province after Wenchuan Earthquake
Authors: Lijuan Wang, Junsheng Huo, Jing Sun, Wenxian Li, Jian Huang, Lin Ling, Yiping Zhou, Chengyu Huang, Jifang Hu
Abstract:
Background and Objective: This study aimed to evaluate the impact of a highly nutrient-dense complementary food supplement, Yingyangbao, introduced 3 months after the Wenchuan earthquake, on anemia among young children in a county in Sichuan province. Methods: Young children aged 6-23 months in the county were fed one sachet of Yingyangbao per day; Yingyangbao was distributed free of charge for 15 months, and children entering 6 months of age were included as they became eligible. The length, weight and hemoglobin of children aged 6-29 months were assessed by cluster sampling at baseline (n=257) and after 6 (n=218) and 15 months (n=253) of Yingyangbao intervention. Growth status is not described in this paper. The analysis was conducted on the age groups 6-11, 12-17, 18-23 and 24-29 months. Results: The hemoglobin concentration in the four groups increased by 4.9, 6.4, 8.0 and 9.5 g/L after 6 months and by 12.7, 11.4, 16.7 and 15.7 g/L after 15 months compared to the baseline, respectively. The total anemia prevalence in each group was significantly lower after 6 and 15 months than at baseline (P<0.001), except for the 6-11 months group after 6 months, because of lower Yingyangbao consumption. The total moderate anemia rate decreased from 18.3% to 5.5% after 6 months and kept decreasing to 0.8% after another 9 months. The hemoglobin concentration was significantly correlated with the amount of Yingyangbao consumed (P<0.001), and the anemia rate differed significantly with Yingyangbao compliance (P<0.001). Conclusion: It was concluded that Yingyangbao, which contains quality protein, vitamins and micronutrients, could be effective in improving anemia in young children when provided for 15 months. The study supports the use of complementary food supplements to reduce anemia in young children in the emergency following a natural disaster.
Keywords: young children, anemia, nutrition intervention, complementary food supplements, Yingyangbao
Procedia PDF Downloads 526
16528 In vitro Method to Evaluate the Effect of Steam-Flaking on the Quality of Common Cereal Grains
Authors: Wanbao Chen, Qianqian Yao, Zhenming Zhou
Abstract:
Whole grains with an intact pericarp are largely resistant to digestion by ruminants because entire kernels are not conducive to bacterial attachment, but processing makes the starch more accessible to microbes and increases the rate and extent of starch degradation in the rumen. To assess the feasibility of steam-flaking as a processing technique of grains for ruminants, cereal grains (maize, wheat, barley and sorghum) were processed by steam-flaking (steam temperature 105 °C, heating time 45 min), and chemical analysis, in vitro gas production, volatile fatty acid concentrations, and energetic values were adopted to evaluate the effects of steam-flaking. In vitro cultivation was conducted for 48 h with rumen fluid collected from steers fed a total mixed ration consisting of 40% hay and 60% concentrates. The results showed that steam-flaking had a significant effect on the contents of neutral detergent fiber and acid detergent fiber (P < 0.01). The degree of starch gelatinization in all grains was also greatly increased by steam-flaking, as the process disintegrates the crystal structure of cereal starch, which may subsequently facilitate absorption of moisture and swelling. Theoretical maximum gas production after steam-flaking showed no great difference. However, compared with intact grains, total gas production at 48 h and the rate of gas production were significantly (P < 0.01) increased in all types of grain. Furthermore, there was no effect of steam-flaking on total volatile fatty acids, but a decrease in the ratio between acetate and propionate was observed in the current in vitro fermentation. The present study also found that steam-flaking increased (P < 0.05) the organic matter digestibility and energy concentration of the grains. The collective findings of the present study suggest that steam-flaking of grains could improve their rumen fermentation and energy utilization by ruminants. In conclusion, the utilization of steam-flaking would be practical to improve the quality of common cereal grains.
Keywords: cereal grains, gas production, in vitro rumen fermentation, steam-flaking processing
Procedia PDF Downloads 270
16527 Maximum Initial Input Allowed to Iterative Learning Control Set-up Using Singular Values
Authors: Naser Alajmi, Ali Alobaidly, Mubarak Alhajri, Salem Salamah, Muhammad Alsubaie
Abstract:
Iterative Learning Control (ILC) is known to be a control tool for overcoming periodic disturbances in repetitive systems. The technique drives the error signal toward zero as the number of operations increases. The learning process that lies within this context is strongly dependent on the initial input, which, if selected properly, tends to make the learning process more effective compared to the case where a system starts blind. ILC uses previously recorded execution data to update the following execution/trial input such that a reference trajectory is followed with high accuracy. Error convergence in ILC is generally highly dependent on the input applied to a plant for trial 1; thus, a good choice of initial input signal makes learning faster and, as a consequence, lets the error tend to zero faster as well. In the work presented here, an upper limit based on the singular values principle (SV) is derived for the initial input signal applied at trial 1, such that the system follows the reference in fewer trials without responding aggressively or exceeding the working envelope within which a system, for example a robot arm, is required to move. Simulation results presented illustrate the theory introduced in this paper.
Keywords: initial input, iterative learning control, maximum input, singular values
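As a concrete illustration of how singular values can cap the trial-1 input, the sketch below bounds the input 2-norm so that the lifted output cannot leave a working envelope. The first-order plant, the envelope value, and this particular bound are illustrative assumptions in the spirit of the abstract, not the paper's derivation.

    import numpy as np

    # Lifted description of a discrete SISO plant over N samples: y = G u,
    # where G is the lower-triangular Toeplitz matrix of impulse-response
    # (Markov) parameters. The first-order response used here is assumed.
    N = 50
    h = 0.5 ** np.arange(N)                   # assumed impulse response
    G = np.zeros((N, N))
    for i in range(N):
        G[i, : i + 1] = h[i::-1]

    sigma = np.linalg.svd(G, compute_uv=False)

    # Since ||y||_2 = ||G u||_2 <= sigma_max ||u||_2, keeping the trial-1
    # input inside ||u_1||_2 <= y_max / sigma_max guarantees the output
    # norm stays within a working envelope y_max.
    y_max = 10.0
    u_limit = y_max / sigma[0]
    print(f"sigma_max = {sigma[0]:.3f}, allowed ||u_1||_2 <= {u_limit:.3f}")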
Procedia PDF Downloads 241
16526 Sustainable Wood Harvesting from Juniperus procera Trees Managed under a Participatory Forest Management Scheme in Ethiopia
Authors: Mindaye Teshome, Evaldo Muñoz Braz, Carlos M. M. Eleto Torres, Patricia Mattos
Abstract:
Sustainable forest management planning requires up-to-date information on the structure, standing volume, biomass, and growth rate of trees in a given forest. This kind of information is lacking for many forests in Ethiopia. The objective of this study was to quantify the population structure, diameter growth rate, and standing volume of wood from Juniperus procera trees in the Chilimo forest. A total of 163 sample plots were set up in the forest to collect the relevant vegetation data. Growth ring measurements were conducted on stem disc samples collected from 12 J. procera trees. Diameter and height measurements were recorded from a total of 1399 individual trees with dbh ≥ 2 cm. The growth rate, maximum current and mean annual increments, minimum logging diameter, and cutting cycle were estimated, and alternative cutting cycles were established. Using these data, the harvestable volume of wood was projected by alternating four minimum logging diameters and five cutting cycles following the stand table projection method. The results show that J. procera trees have an average density of 183 stems ha⁻¹, a total basal area of 12.1 m² ha⁻¹, and a standing volume of 98.9 m³ ha⁻¹. The mean annual diameter growth ranges between 0.50 and 0.65 cm year⁻¹, with an overall mean of 0.59 cm year⁻¹. The population of J. procera trees followed a reverse J-shaped diameter distribution pattern. The maximum current annual increment in volume (CAI) occurred at around 49 years, when trees reached 30 cm in diameter. Trees showed the maximum mean annual increment in volume (MAI) at around 91 years, with a diameter of 50 cm. The simulation analysis revealed that a 40 cm minimum logging diameter (MLD) and a 15-year cutting cycle are the best combination, showing the largest potential harvestable volume of wood, the largest volume increments, and a 35% recovery of the initially harvested volume. It is concluded that the forest is well stocked and has a large harvestable volume of wood from J. procera trees. This will enable the country to partly meet the national wood demand through domestic wood production. The use of the current population structure together with diameter growth data from tree ring analysis enables accurate prediction of the harvestable volume of wood. The developed model provides insight into the productivity of the J. procera tree population and enables policymakers to develop specific management criteria for wood harvesting.
Keywords: logging, growth model, cutting cycle, minimum logging diameter
Procedia PDF Downloads 88
16525 The Non-Existence of Perfect 2-Error Correcting Lee Codes of Word Length 7 over Z
Authors: Catarina Cruz, Ana Breda
Abstract:
Tiling problems have been capturing the attention of many mathematicians due to their real-life applications. In this study, we deal with tilings of Zⁿ by Lee spheres, where n is a positive integer, these tilings being related to error-correcting codes for the transmission of information over a noisy channel. We focus our attention on the question ‘for what values of n and r does the n-dimensional Lee sphere of radius r tile Zⁿ?’. It seems that the n-dimensional Lee sphere of radius r does not tile Zⁿ for n ≥ 3 and r ≥ 2. Here, we prove that it is not possible to tile Z⁷ with Lee spheres of radius 2, presenting a proof based on a combinatorial method and faithful to the geometric idea of the problem. The non-existence of such tilings has been studied by several authors, with the cases in which the radius of the Lee spheres is equal to 2 considered the most difficult. The relation between these tilings and error-correcting codes is established by considering the center of a Lee sphere as a codeword and the other elements of the sphere as words which are decoded to the central codeword. When the Lee spheres of radius r centered at elements of a set M ⊂ Zⁿ tile Zⁿ, M is a perfect r-error correcting Lee code of word length n over Z, denoted by PL(n, r). Our strategy to prove the non-existence of PL(7, 2) codes is based on the assumption of the existence of such a code M. Without loss of generality, we suppose that O ∈ M, where O = (0, ..., 0). In this sense, and taking into account that we are dealing with Lee spheres of radius 2, O covers all words which are distant two or fewer units from it. By the definition of a PL(7, 2) code, each word which is distant three units from O must be covered by a unique codeword of M; these words have to be covered by codewords which are distant five units from O. We prove the non-existence of PL(7, 2) codes by showing that it is not possible to cover all the referred words without superposition of Lee spheres whose centers are distant five units from O, contradicting the definition of a PL(7, 2) code. We achieve this contradiction by combining the cardinalities of particular subsets of codewords which are distant five units from O. There exists an extensive literature on codes in the Lee metric. Here, we present a new approach to prove the non-existence of PL(7, 2) codes.
Keywords: Golomb-Welch conjecture, Lee metric, perfect Lee codes, tilings
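For scale, the number of words each codeword must cover follows from the standard Lee-sphere volume identity associated with Golomb and Welch; the worked instance below is an editorial addition, not part of the abstract:

    |B_r^n| = \sum_{i=0}^{\min(n,r)} 2^i \binom{n}{i} \binom{r}{i}

    |B_2^7| = 1 + 2\binom{7}{1}\binom{2}{1} + 4\binom{7}{2}\binom{2}{2} = 1 + 28 + 84 = 113

So a PL(7, 2) code would have to partition Z⁷ into disjoint spheres of exactly 113 words each, which is what the combinatorial argument rules out.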
Procedia PDF Downloads 160
16524 Assessment of Time-variant Work Stress for Human Error Prevention
Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee
Abstract:
For an operator in a nuclear power plant, human error is one of the most dreaded factors that may result in unexpected accidents. The possibility of human error may be low, but its risk would be unimaginably enormous. Thus, for accident prevention, it is quite indispensable to analyze the influence of any factors which may raise the possibility of human errors. During the past decades, many research results have shown that the performance of human operators may vary over time due to many factors. Among them, stress is known to be an indirect factor that may cause human errors and result in mental illness. Until now, quite a few assessment tools have been developed to assess the stress level of human workers. However, it is still questionable to utilize them for human performance anticipation, which is related to human error possibility, because they were mainly developed from the viewpoint of mental health rather than industrial safety. The stress level of a person may go up or down with work time. In that sense, if such tools are to be applicable in the safety aspect, they should be able to assess at least the variation resulting from work time. Therefore, this study aimed to compare their applicability for safety purposes. More than 10 kinds of work stress tools were analyzed with reference to assessment items, assessment and analysis methods, and follow-up measures, which are known to be factors closely related to work stress. The results showed that most tools mainly placed their weights on some common organizational factors such as demands, support, and relationships, in sequence, and their weights were broadly similar. However, they failed to recommend practical solutions; instead, they merely advised setting up overall counterplans in a PDCA cycle or risk management activities, which would be far from practical human error prevention. Thus, it was concluded that the application of stress assessment tools mainly developed for mental health seemed to be impractical for safety purposes with respect to human performance anticipation, and that the development of a new assessment tool would be inevitable if anyone wants to assess stress level in the aspect of human performance variation and accident prevention. As a consequence, this study proposed a new scheme for the assessment of the work stress level of a human operator that may vary over work time, which is closely related to the possibility of human errors.
Keywords: human error, human performance, work stress, assessment tool, time-variant, accident prevention
Procedia PDF Downloads 672
16523 Banking Sector Development and Economic Growth: Evidence from the State of Qatar
Authors: Fekri Shawtari
Abstract:
The banking sector plays a very crucial role in the economic development of a country. As a financial intermediary, it is assigned a great role in economic growth and stability. This paper aims to examine empirically the relationship between the banking industry and economic growth in the State of Qatar. We adopt the vector error correction model (VECM) along with Granger causality to address the long-run and short-run relationships between the banking sector and economic growth. It is expected that the results will give policy directions to policymakers to make strategies that are conducive to boosting development and achieving the targeted economic growth in the current situation.
Keywords: economic growth, banking sector, Qatar, vector error correction model, VECM
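For reference, the VECM named above is conventionally written, for a vector of series y_t (here, banking-sector and growth indicators), as

    \Delta y_t = \Pi y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \, \Delta y_{t-i} + \varepsilon_t, \qquad \Pi = \alpha \beta'

where \beta holds the long-run (cointegrating) relations and \alpha the short-run adjustment speeds. This is the standard form of the model, not an equation quoted from the paper.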
Procedia PDF Downloads 170
16522 Wet Sliding Wear and Frictional Behavior of Commercially Available Perspex
Authors: S. Reaz Ahmed, M. S. Kaiser
Abstract:
The tribological behavior of commercially used Perspex was evaluated under dry and wet sliding conditions using a pin-on-disc wear tester with applied loads ranging from 2.5 to 20 N. Experiments were conducted with the sliding distance varying from 0.2 km to 4.6 km, while the sliding velocity was kept constant at 0.64 m s⁻¹. The results reveal that the weight loss increases with applied load and sliding distance. The nature of the wear rate was very similar in both sliding environments: initially, the wear rate increased very rapidly with increasing sliding distance and then progressed at a slower rate. Moreover, the wear rate in the wet sliding environment was significantly lower than that under the dry sliding condition. The worn surfaces were characterized by optical microscopy and SEM. It is found that surface modification has a significant effect on the sliding wear performance of Perspex.
Keywords: Perspex, wear, friction, SEM
Procedia PDF Downloads 272
16521 Kinetics of Sugar Losses in Hot Water Blanching of Water Yam (Dioscorea alata)
Authors: Ayobami Solomon Popoola
Abstract:
Yam is mainly a carbohydrate food grown in most parts of the world. It can be boiled, fried or roasted for consumption in a variety of ways. Blanching is an established heat pre-treatment given to fruits and vegetables prior to further processing such as dehydration, canning, and freezing. The loss of soluble solids during blanching has been a great problem because a considerable quantity of the water-soluble nutrients is inevitably leached into the blanching water. Without blanching, the high residual levels of reducing sugars after extended storage produce a dark, bitter-tasting product because of the Maillard reactions of reducing sugars at frying temperature. Measurement and prediction of such losses are necessary for economic efficiency in production and to establish the level of effluent treatment of the blanching water. This paper aims at resolving this problem by investigating the effects of cube size and temperature on the rate of diffusional losses of reducing sugars and total sugars during hot water blanching of water-yam. The study was carried out using four temperature levels (65, 70, 80 and 90 °C) and two cube sizes (0.02 m³ and 0.03 m³) at four time intervals (5, 10, 15 and 20 min). The obtained data were fitted to Fick’s non-steady-state equation, from which diffusion coefficients (Da) were obtained. The Da values were subsequently fitted to an Arrhenius plot to obtain activation energies (Ea values) for the diffusional losses. The diffusion coefficients were independent of cube size and time but highly temperature dependent; they were ≥ 1.0 × 10⁻⁹ m²s⁻¹ for reducing sugars and ≥ 5.0 × 10⁻⁹ m²s⁻¹ for total sugars. The Ea values ranged from 68.2 to 73.9 kJ mol⁻¹ and from 7.2 to 14.3 kJ mol⁻¹ for reducing sugar and total sugar losses, respectively. Predictive equations for estimating the amounts of reducing sugars and total sugars as a function of blanching time of water-yam at various temperatures are also presented; these equations could be valuable in process design and optimization. However, the amounts of other soluble solids that might have leached into the water along with the reducing and total sugars during blanching were not investigated in this study.
Keywords: blanching, kinetics, sugar losses, water yam
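The two relations underlying this kinetic treatment are Fick's second law for non-steady-state diffusion and the Arrhenius dependence of the diffusion coefficient on temperature:

    \frac{\partial C}{\partial t} = D_a \nabla^2 C, \qquad D_a = D_0 \exp\!\left(-\frac{E_a}{RT}\right)

so that E_a follows from the slope of a plot of \ln D_a against 1/T. These are the standard forms of the equations named in the abstract; the paper's fitted predictive equations are not reproduced here.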
Procedia PDF Downloads 165
16520 The Determination of Total Microbial Count and Prevalence of Salmonella in the Shrimp Supply in Khuzestan Province
Authors: Sana Mohammad Jafar
Abstract:
Salmonella is one of the major causes of foodborne diseases throughout the world, and shrimp are an important commodity in world fishery trade, so the microbiological quality of shrimp must be evaluated to assure their safety. The aim of this study was to evaluate the microbiological quality and to determine the prevalence of Salmonella in shrimp sold in Khuzestan province. In this study, a total of 245 samples of shrimp sold in Khuzestan province were tested for Salmonella prevalence and total microbial population. The mean aerobic bacterial count was 2,200 in 50.2% of samples, 13,600 in 29.8% of samples, and 36,700 in 20% of samples, and the mean aerobic bacterial count over the total samples was 20,000 cfu/cc. Of the total samples, 33 were positive for Salmonella, and the prevalence of Salmonella was determined to be 13.4%. These results indicate the possibility that shrimp contribute to foodborne infections. The improvement of shrimp quality is an important issue, and shrimp should be washed with chlorinated water before consumption, with the aim of increasing safety. In addition, eating shrimp raw or improperly cooked should be avoided.
Keywords: determination, total microbial, Salmonella, shrimp
Procedia PDF Downloads 240
16519 Virtual Assessment of Measurement Error in the Fractional Flow Reserve
Authors: Keltoum Chahour, Mickael Binois
Abstract:
Due to a lack of standardization during the invasive fractional flow reserve (FFR) procedure, the index is subject to many sources of uncertainty. In this paper, we investigate, through simulation, the effect of the FFR device position and configuration on the obtained FFR value. For this purpose, we use computational fluid dynamics (CFD) in a 3D domain corresponding to a diseased arterial portion. The FFR pressure sensor is introduced inside it with a given length and coefficient of bending to capture the FFR value. To overcome the computational limitations (the simulation takes about 2 h 15 min for one FFR value), we generate a Gaussian process (GP) model for FFR prediction. The GP model shows good accuracy and quantifies the effective measurement error created by the random configuration of the pressure sensor.
Keywords: fractional flow reserve, Gaussian processes, computational fluid dynamics, drift
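A minimal sketch of the surrogate step, with hypothetical sensor configurations and FFR values standing in for the CFD runs; the kernel choice and the two-parameter input encoding are assumptions, not the study's design.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Each row is one CFD run's sensor configuration:
    # (insertion length [mm], bending coefficient). Illustrative values only.
    X = np.array([[20, 0.1], [25, 0.3], [30, 0.2], [35, 0.5],
                  [40, 0.4], [45, 0.6], [50, 0.8], [55, 0.7]], dtype=float)
    y = np.array([0.92, 0.90, 0.88, 0.84, 0.83, 0.81, 0.78, 0.76])  # FFR values

    kernel = ConstantKernel(1.0) * RBF(length_scale=[10.0, 0.3])
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    # Predict FFR (with uncertainty) for an untried configuration, replacing
    # a ~2 h 15 min CFD run with a near-instant surrogate evaluation.
    mean, std = gp.predict(np.array([[33.0, 0.35]]), return_std=True)
    print(f"predicted FFR = {mean[0]:.3f} +/- {std[0]:.3f}")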
Procedia PDF Downloads 134
16518 A Comprehensive Study of Accounting for Growth in China and India
Authors: Yousef Rostami Gharainy
Abstract:
We examine the recent economic performances of China and India using a simple growth accounting framework that produces estimates of the contribution of labor, capital, education, and total factor productivity (TFP) for the three sectors of agriculture, industry, and services, as well as for the aggregate economy. Our analysis incorporates recent data revisions in both countries and includes an extensive examination of the underlying data series. The growth accounts show a roughly equal division in each country between the contributions of capital accumulation and TFP to growth in output per worker over the period 1980-2007, and an acceleration of growth when the period is split at 1993. However, the magnitude of output growth in China is roughly double that of India at the aggregate level, and higher in each of the three sectors in both sub-periods. In China, the post-1993 acceleration was concentrated largely in industry, which contributed about 61 percent of China’s total productivity growth. In contrast, 48 percent of the growth in India in the second sub-period came in services. Reallocation of workers from agriculture to industry and services contributed 1.3 percentage points to productivity growth in each country.
Keywords: China, India, growth accounting framework, labor, capital, education, total factor productivity
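The decomposition behind such a framework is conventionally written, in growth rates, as

    \Delta \ln (Y/L) = \alpha \, \Delta \ln (K/L) + (1 - \alpha) \, \Delta \ln h + \Delta \ln A

where Y/L is output per worker, K/L is capital per worker, h is an index of education (human capital) per worker, \alpha is the capital share, and A is total factor productivity, recovered as the residual. This is the textbook form of the growth account, not an equation quoted from the paper.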
Procedia PDF Downloads 297
16517 Soil Properties and Yam Performance as Influenced by Poultry Manure and Tillage on an Alfisol in Southwestern Nigeria
Authors: E. O. Adeleye
Abstract:
Field experiments were conducted to investigate the effect of soil tillage techniques and poultry manure application on soil properties and yam (Dioscorea rotundata) performance in Ondo, southwestern Nigeria, over two farming seasons. Five soil tillage techniques, namely ploughing (P), ploughing plus harrowing (PH), manual ridging (MR), manual heaping (MH) and zero-tillage (ZT), each combined with and without poultry manure at the rate of 10 t ha⁻¹, were investigated. Data were obtained on soil properties, nutrient uptake, and the growth and yield of yam. Soil moisture content, bulk density, total porosity and post-harvest soil chemical characteristics were significantly (p < 0.05) influenced by the tillage-manure treatments. The addition of poultry manure to the tillage techniques increased soil total porosity and soil moisture content and reduced soil bulk density. Poultry manure improved soil organic matter, total nitrogen, available phosphorus, exchangeable Ca and K, the leaf nutrient content of yam, and yam growth and tuber yield relative to tillage plots without poultry manure application. It is concluded that the possible deleterious effects of tillage on soil properties and the growth and yield of yam on an Alfisol in southwestern Nigeria can be reduced by combining tillage with poultry manure.
Keywords: poultry manure, tillage, soil chemical properties, yield
Procedia PDF Downloads 446
16516 Modelling Vehicle Fuel Consumption Utilising Artificial Neural Networks
Authors: Aydin Azizi, Aburrahman Tanira
Abstract:
The main source of energy used in this modern age is fossil fuels. A myriad of problems come with the use of fossil fuels, of which the issues with the greatest impact are their scarcity and the cost they impose on the planet. Fossil fuels are the only plausible option for many vital functions and processes; the most important of these is transportation. Thus, using this source of energy wisely and as efficiently as possible is a must. The aim of this work was to explore the use of mathematical modelling and artificial intelligence techniques to reduce fuel consumption in passenger cars by focusing on the speed at which cars are driven. An artificial neural network with an error of less than 0.05 was developed for practical application in predicting the rate of fuel consumption in vehicles.
Keywords: mathematical modeling, neural networks, fuel consumption, fossil fuel
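A minimal sketch of such a speed-to-consumption network on illustrative data shaped like the familiar U-curve; the architecture, input scaling, and data points are assumptions, not the paper's model or results.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Illustrative (speed [km/h] -> fuel consumption [L/100 km]) pairs;
    # not the data used in the paper.
    speed = np.array([20, 40, 60, 80, 100, 120, 140], dtype=float).reshape(-1, 1)
    fuel = np.array([9.5, 7.2, 6.0, 5.8, 6.4, 7.5, 9.0])

    model = MLPRegressor(hidden_layer_sizes=(8, 8), activation="tanh",
                         solver="lbfgs", max_iter=5000, random_state=0)
    model.fit(speed / 140.0, fuel)            # simple input scaling

    pred = model.predict(np.array([[90.0]]) / 140.0)
    print(f"predicted consumption at 90 km/h: {pred[0]:.2f} L/100 km")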
Procedia PDF Downloads 405
16515 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, attempting to create a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, which is quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate strong correlations between memorability scores and both the reconstruction error and the distinctiveness of images. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability scores and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably due to having features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
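The two image-level quantities the study correlates with memorability can be computed as in the sketch below, here on random stand-ins for the fine-tuned autoencoder's outputs and latent codes.

    import numpy as np

    def memorability_signals(originals, reconstructions, latents):
        # Per-image reconstruction error (MSE) and latent-space
        # distinctiveness (Euclidean distance to the nearest neighbour)
        recon_err = np.mean((originals - reconstructions) ** 2, axis=(1, 2, 3))
        d = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)           # ignore self-distance
        return recon_err, d.min(axis=1)

    # Toy stand-ins for a batch of 8 RGB images and their latent codes;
    # in the study these come from the VGG-based autoencoder.
    rng = np.random.default_rng(0)
    imgs = rng.random((8, 64, 64, 3))
    recons = imgs + 0.05 * rng.standard_normal(imgs.shape)
    codes = rng.standard_normal((8, 128))
    print(memorability_signals(imgs, recons, codes))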
Procedia PDF Downloads 90
16514 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring
Authors: Daniel Fundi Murithi
Abstract:
Data from economic, social, clinical, and industrial studies are often in some way incomplete or incorrect due to censoring. Such data may have adverse effects if used in the estimation problem. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and the Newton-Raphson (NR) algorithms. These algorithms are compared because they iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that, in most sets of simulation cases, the estimates obtained using the Expectation-Maximization algorithm had smaller biases, smaller variances, narrower confidence intervals, and smaller root mean squared error than those generated via the Newton-Raphson (NR) algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization (EM) algorithm performs better than the Newton-Raphson (NR) algorithm in all simulation cases under the progressive type-II censoring scheme.
Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring
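For reference, the likelihood that both algorithms maximize has the standard progressively type-II censored form: with m observed failure times x_{(1)} < ... < x_{(m)} and R_i surviving units withdrawn at the i-th failure,

    L(\mu, \lambda) = C \prod_{i=1}^{m} f\big(x_{(i)}; \mu, \lambda\big)\,\big[1 - F\big(x_{(i)}; \mu, \lambda\big)\big]^{R_i}

where f and F are the density and distribution function of the two-parameter Rayleigh model and C is a normalizing constant. This is the generic form of the censored likelihood; parameterizations of the Rayleigh density vary across the literature, and the paper's specific choice is not reproduced here.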
Procedia PDF Downloads 163
16513 Capturing the Stress States in Video Conferences by Photoplethysmographic Pulse Detection
Authors: Jarek Krajewski, David Daxberger
Abstract:
We propose a stress detection method based on an RGB camera using heart rate detection, also known as Photoplethysmography Imaging (PPGI). This technique focuses on the measurement of the small changes in skin colour caused by blood perfusion. A stationary lab setting with simulated video conferences is chosen, using constant light conditions and a sampling rate of 30 fps. The ground truth measurement of heart rate is conducted with a common PPG system. Pulse peak detection is based on a machine learning approach, applying brute-force feature extraction for the prediction of heart rate pulses. The statistical analysis showed good agreement (correlation r = .79, p < 0.05) between the reference heart rate system and the proposed method. Based on these findings, the proposed method could provide a reliable, low-cost, and contactless way of measuring HR parameters in daily-life environments.
Keywords: heart rate, PPGI, machine learning, brute force feature extraction
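A minimal sketch of the PPGI signal path on a synthetic trace: frame-averaged green-channel values are band-pass filtered and peak-detected to recover heart rate. The filter band (0.7-4 Hz, i.e. 42-240 bpm) and minimum peak spacing are conventional choices, not parameters from the paper, which instead predicts pulse peaks with a learned brute-force-feature model.

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def ppgi_heart_rate(green_means, fs=30.0):
        # Estimate heart rate from the frame-averaged green channel of an
        # RGB video -- the perfusion signal PPGI exploits.
        x = green_means - np.mean(green_means)
        b, a = butter(3, [0.7 / (fs / 2), 4.0 / (fs / 2)], btype="band")
        x = filtfilt(b, a, x)
        peaks, _ = find_peaks(x, distance=int(fs / 4.0))
        ibi = np.diff(peaks) / fs             # inter-beat intervals (s)
        return 60.0 / np.mean(ibi)

    # Synthetic 30 fps skin-colour trace with a 1.2 Hz (72 bpm) pulse
    rng = np.random.default_rng(0)
    t = np.arange(0, 30, 1 / 30.0)
    trace = 0.02 * np.sin(2 * np.pi * 1.2 * t) + 0.005 * rng.standard_normal(t.size)
    print(f"estimated HR: {ppgi_heart_rate(trace):.1f} bpm")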
Procedia PDF Downloads 123