Search results for: quantification accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4110

2700 Evolutions of Structural Properties of Native Phospho Casein (NPC) Powder during Storage

Authors: Sarah Nasser, Anne Moreau, Alain Hedoux, Romain Jeantet, Guillaume Delaplace

Abstract:

Background: Spray-dried powders containing caseins are commonly produced in the dairy industry. It is widely accepted that the structure of casein evolves during powder storage, inducing a loss of solubility. However, few studies accurately evaluate the destabilization mechanisms at the molecular and mesoscopic levels, in particular for Native Phospho Casein (NPC) powder. Consequently, at the current state of the art, it is very difficult to assess which secondary-structure changes or crosslinks initiate insolubility during storage. To address this issue, controlled ageing conditions were applied to an NPC powder (obtained by spray drying a concentrate with a high content of casein (90%), whey protein (8%) and lactose (a few %)). The evolution of structure and the loss of solubility, with the effects of temperature and time of storage, were systematically reported. Methods: FTIR spectroscopy, Raman spectroscopy and circular dichroism were used to monitor changes of secondary structure in the dry powder and in solution after rehydration. In addition, proteomic tools and electrophoresis were applied after varying storage conditions to evaluate aggregation and post-translational modifications such as lactosylation and phosphorylation. Finally, ToF-SIMS and SEM were used to follow, in parallel, the evolution of the surface structure and the skin formation due to storage. Results and Conclusion: These results highlight the important role of storage temperature in the stability of NPC. It is shown that lactosylation is not at the heart of aggregate formation, as suggested in other publications; rather, it is the accumulation of a multitude of post-translational modifications (chemical cross-links), added to disulphide bridges (physical cross-links), which contributes to the destabilisation of the structure and the aggregation of casein. A relative quantification of each kind of cross-link responsible for aggregation is proposed.
In addition, it has been shown that the migration of lipids and the formation of a surface skin during ageing also explain the evolution of casein structure and thus the alteration of the functional properties of NPC powder.

Keywords: casein, cross link, powder, storage

Procedia PDF Downloads 376
2699 Development of a Model Based on Wavelets and Matrices for the Treatment of Weakly Singular Partial Integro-Differential Equations

Authors: Somveer Singh, Vineet Kumar Singh

Abstract:

We present a new model, based on viscoelasticity, for non-Newtonian fluids. We use a matrix-formulated algorithm to approximate solutions of a class of weakly singular partial integro-differential equations with given initial and boundary conditions. Some numerical results are presented to simplify the application of the operational matrix formulation and reduce the computational cost. Convergence analysis, error estimation and numerical stability of the method are also investigated. Finally, some test examples are given to demonstrate the accuracy and efficiency of the proposed method.

Keywords: Legendre Wavelets, operational matrices, partial integro-differential equation, viscoelasticity

Procedia PDF Downloads 330
2698 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
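The notion of identifying key cost drivers from project records can be illustrated with a much simpler stand-in for Random Forest feature importances: ranking candidate drivers by their correlation with the overrun. The records and driver names below are invented for illustration only.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented records: (number of scope changes, material delivery delay in days, cost overrun %)
records = [(0, 2, 1.5), (1, 5, 4.0), (3, 9, 9.5), (2, 4, 6.0), (5, 12, 14.0), (1, 3, 3.5)]
scope = [r[0] for r in records]
delay = [r[1] for r in records]
overrun = [r[2] for r in records]

# Rank candidate drivers by strength of association with the overrun
drivers = sorted(
    [("scope_changes", pearson(scope, overrun)),
     ("material_delay", pearson(delay, overrun))],
    key=lambda kv: -abs(kv[1]),
)
```

In a real study, a fitted Random Forest would replace this ranking and additionally capture non-linear and interaction effects.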

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 29
2697 Closed-Loop Controlled-Current Nerve Locator

Authors: H. A. Alzomor, B. K. Ouda, A. M. Eldeib

Abstract:

Successful regional anesthesia depends upon precise location of the peripheral nerve or nerve plexus. Peripheral nerves are preferably located using nerve stimulation. In order to generate a nerve impulse by electrical means, a minimum threshold stimulus current, the "rheobase", must be applied to the nerve. The technique depends on stimulating muscular twitching at a close distance to the nerve without actually touching it. The success rate of this operation depends on the accuracy of the current-intensity pulses used for stimulation. In this paper, we discuss a circuit and algorithm for closed-loop control of the current, present a theoretical analysis and test results, and compare them with previous techniques.
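A minimal sketch of the closed-loop idea, assuming a simple resistive tissue model: a discrete PI loop adjusts the drive voltage so the measured current tracks a setpoint even when the tissue load changes. The gains, loads, and units below are hypothetical, not the authors' circuit.

```python
def pi_current_control(setpoint_ma, loads_kohm, kp=0.4, ki=0.2, steps_per_load=50):
    """Discrete PI loop: adjust drive voltage (V) so the measured current (mA)
    tracks the setpoint even as the tissue load (kOhm) changes."""
    v, integral = 0.0, 0.0
    history = []
    for r in loads_kohm:
        for _ in range(steps_per_load):
            i_meas = v / r                      # Ohm's law: mA = V / kOhm
            error = setpoint_ma - i_meas
            integral += error
            v = kp * error + ki * integral      # position-form PI output
            history.append(i_meas)
    return history

currents = pi_current_control(1.0, [1.0, 2.0])  # tissue load doubles mid-run
```

The current settles back to the 1.0 mA setpoint after the load step, which is the behaviour a constant-current nerve stimulator requires.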

Keywords: Closed-Loop Control (CLC), constant current, nerve locator, rheobase

Procedia PDF Downloads 250
2696 The Staphylococcus aureus Exotoxin Recognition Using Nanobiosensor Designed by an Antibody-Attached Nanosilica Method

Authors: Hamed Ahari, Behrouz Akbari Adreghani, Vadood Razavilar, Amirali Anvar, Sima Moradi, Hourieh Shalchi

Abstract:

Given the ever-increasing population and the industrialization of modern life, toxins produced in food products can no longer be detected using traditional techniques. This is because the isolation time for food products is not cost-effective, and in most cases the precision of practical techniques such as bacterial cultivation suffers from operator errors or errors in the reagent mixtures used. Hence, with the advent of nanotechnology, the design of selective, smart sensors is one of the great industrial advances in the quality control of food products: within a few minutes, and with very high precision, they can identify the amount and toxicity of bacteria. Methods and Materials: In this technique, a sensor based on the attachment of a bacterial antibody to nanoparticles was used. In this part of the research, silica nanoparticles of 10 nm (Notrino brand), in the form of a solid powder, were utilized as the absorption basis for recognition of the bacterial toxin. The suspension produced from the antibody-linked nanosilica was then placed near samples of distilled water contaminated with Staphylococcus aureus toxin at a dilution of 10-3, so that if any toxin existed in the sample, a connection between the toxin antigen and the antibody would be formed. Finally, the light absorption related to the binding of the antigen to the particle-attached antibody was measured using spectrophotometry. The 23S rRNA gene, which is conserved in all Staphylococcus spp., was also used as a control. The accuracy of the test was monitored using serial dilutions (10-6) of an overnight cell culture of Staphylococcus spp. bacteria (OD600: 0.02 = 10^7 cells). It showed that the sensitivity of PCR is 10 bacteria per ml within a few hours.
Results: The results indicate that the sensor detects down to a 10-4 dilution. Additionally, the sensitivity of the sensor was examined over 60 days: it gave confirmatory results up to day 56, after which its response started to decrease. Conclusions: The advantages of the practical nanobiosensor over conventional methods, such as culture and biotechnological methods (e.g., polymerase chain reaction), are accuracy, sensitivity, and specificity. Moreover, it reduces the detection time from hours to 30 minutes.

Keywords: exotoxin, nanobiosensor, recognition, Staphylococcus aureus

Procedia PDF Downloads 381
2695 Next-Generation Lunar and Martian Laser Retro-Reflectors

Authors: Simone Dell'Agnello

Abstract:

There are laser retroreflectors on the Moon but none on Mars. Here we describe the design, construction, qualification and imminent deployment of next-generation, optimized laser retroreflectors on the Moon and on Mars (where they will be the first ones). These instruments are positioned by time-of-flight measurements of short laser pulses, the so-called 'laser ranging' technique. Data analysis is carried out with PEP, the Planetary Ephemeris Program of the CfA (Center for Astrophysics). Since 1969, Lunar Laser Ranging (LLR) to the Apollo/Lunokhod cube corner retroreflector (CCR) arrays has supplied accurate tests of General Relativity (GR) and new gravitational physics: possible changes of the gravitational constant (Gdot/G), the weak and strong equivalence principles, gravitational self-energy (Parametrized Post-Newtonian parameter beta), geodetic precession, and the inverse-square force law; it can also constrain gravitomagnetism. Some of these measurements have also allowed testing of extensions of GR, including spacetime torsion and non-minimally coupled gravity. LLR has also provided significant information on the composition of the deep interior of the Moon; in fact, LLR first provided evidence of the existence of a fluid component of the deep lunar interior. In 1969, CCR arrays contributed a negligible fraction of the LLR error budget. Since laser station ranging accuracy has improved by more than a factor of 100, the current arrays, because of lunar librations, now dominate the error budget due to their multi-CCR geometry. We developed MoonLIGHT (Moon Laser Instrumentation for General relativity High-accuracy Tests), a next-generation, single, large CCR unaffected by librations, which supports an improvement of the space segment of the LLR accuracy by up to a factor of 100. INFN also developed INRRI (INstrument for landing-Roving laser Retro-reflector Investigations), a microreflector to be laser-ranged by orbiters.
Their performance is characterized at the SCF_Lab (Satellite/lunar laser ranging Characterization Facilities Lab, INFN-LNF, Frascati, Italy) for their deployment on the lunar surface or in cislunar space. They will be used to accurately position landers, rovers, hoppers and orbiters of Google Lunar X Prize and space agency missions, thanks to LLR observations from stations of the International Laser Ranging Service in the USA, France and Italy. INRRI was launched in 2016 with the ESA mission ExoMars (Exobiology on Mars) EDM (Entry, descent and landing Demonstration Module), deployed on the Schiaparelli lander, and is proposed for the ExoMars 2020 rover. Based on an agreement between NASA and ASI (Agenzia Spaziale Italiana), another microreflector, LaRRI (Laser Retro-Reflector for InSight), was delivered to JPL (Jet Propulsion Laboratory) and integrated on NASA's InSight Mars lander in August 2017 (launch scheduled for May 2018). Another microreflector, LaRA (Laser Retro-reflector Array), will be delivered to JPL for deployment on the NASA Mars 2020 rover. The first lunar landing opportunities will be from early 2018 (with TeamIndus) to late 2018 with commercial missions, followed by opportunities with space agency missions, including the proposed deployment of MoonLIGHT and INRRI on NASA's Resource Prospector and its evolutions. In conclusion, we will significantly extend the CCR Lunar Geophysical Network and populate the Mars Geophysical Network. These networks will enable very significantly improved tests of GR.
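The 'laser ranging' technique reduces, at its core, to a one-line time-of-flight calculation; the 2.56 s round trip used below is the approximate Earth-Moon value.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_tof(round_trip_s):
    """One-way distance from the round-trip time of flight of a laser pulse."""
    return C * round_trip_s / 2.0

moon_range_m = range_from_tof(2.56)  # ~2.56 s round trip gives ~384,000 km
```

Millimetre-level LLR accuracy corresponds to timing the round trip to a few picoseconds, which is why the retroreflector geometry (and libration-induced spread across a multi-CCR array) matters so much.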

Keywords: general relativity, laser retroreflectors, lunar laser ranging, Mars geodesy

Procedia PDF Downloads 264
2694 High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada

Authors: Bilel Chalghaf, Mathieu Varin

Abstract:

Forest characterization in Quebec, Canada, is usually assessed based on photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR), have the potential to overcome the limitations of aerial imagery. To date, few studies have used these data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 individual tall tree species (>17 m) at the tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For the individual tree crown segmentation, three canopy-height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five different model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77).
With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100% and Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OA of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large.
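A minimal sketch of the semi-hierarchical idea, first tree type, then species within that type, using a toy nearest-centroid classifier in place of the tuned SVM/k-NN models. The 2-D "spectral" features, species names, and centroids below are invented for illustration.

```python
def nearest_centroid(x, centroids):
    """Return the label whose centroid is closest to feature vector x."""
    return min(centroids,
               key=lambda label: sum((a - b) ** 2 for a, b in zip(x, centroids[label])))

# Level 1: tree type; Level 2: species within the predicted type
type_centroids = {"broadleaf": (0.8, 0.3), "conifer": (0.2, 0.7)}
species_centroids = {
    "broadleaf": {"maple": (0.85, 0.25), "oak": (0.75, 0.35)},
    "conifer": {"pine": (0.15, 0.75), "spruce": (0.25, 0.65)},
}

def classify(x):
    """Semi-hierarchical classification: tree type first, then species."""
    tree_type = nearest_centroid(x, type_centroids)
    return tree_type, nearest_centroid(x, species_centroids[tree_type])
```

The benefit reported above comes from the second-level models only having to separate species within one type, a much easier problem than the 11-way global classification.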

Keywords: tree species, object-based, classification, multispectral, machine learning, WorldView-3, LiDAR

Procedia PDF Downloads 127
2693 Anisotropic Approach for Discontinuity Preserving in Optical Flow Estimation

Authors: Pushpendra Kumar, Sanjeev Kumar, R. Balasubramanian

Abstract:

Estimation of optical flow from a sequence of images using variational methods is one of the most successful approaches. Discontinuity between different motions is one of the challenging problems in flow estimation. In this paper, we design a new anisotropic diffusion operator, which is able to provide smooth flow over a region and efficiently preserve discontinuities in the optical flow. This operator is designed on the basis of the intensity differences of the pixels and an isotropic operator using an exponential function; their combination is used to control the propagation of flow. Experimental results on different datasets verify the robustness and accuracy of the algorithm and also validate the effect of the anisotropic operator on discontinuity preservation.
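The edge-stopping behaviour of an exponential, intensity-difference-based weight can be illustrated in one dimension. The form below is a generic Perona-Malik-style diffusivity, not the authors' exact operator; the intensity row and scale parameter are invented.

```python
import math

def diffusivity(intensity, lam=10.0):
    """Edge-stopping weight per pixel pair: ~1 in smooth regions,
    ~0 across strong intensity edges, so flow does not diffuse across them."""
    weights = []
    for k in range(len(intensity) - 1):
        grad = intensity[k + 1] - intensity[k]
        weights.append(math.exp(-((grad / lam) ** 2)))
    return weights

row = [100, 101, 100, 180, 181, 180]  # strong edge between indices 2 and 3
w = diffusivity(row)
```

Weighting the flow propagation by such a term smooths within each region while leaving the motion boundary at the edge intact.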

Keywords: optical flow, variational methods, computer vision, anisotropic operator

Procedia PDF Downloads 870
2692 Analysis of Two-Lane Rural Roads: Comparison of Police Statistics with Results Obtained with the Help of IHSDM

Authors: S. Amanpour, F. Mohamadian, S. A. Tabatabai

Abstract:

From the number of accidents and fatalities in recent years, it can be concluded that the status of road accidents in Iran remains a crisis. Investigating the causes of such incidents is a necessity in all countries. Through this research, results on the number, type, and location of crashes are made available, making it possible to prioritize economical and rational solutions to fix the flaws in the road. In the short term, the desired results are stricter rules for black spots and a fresh look at changing them; in the long term, the desired changes are to the road itself, such as increasing the width of the road or adding an extra lane. In general, the analysis of the accidents is related to police statistics on the number of accidents in one year, which could prove the accuracy of the analysis performed.

Keywords: traffic, IHSDM, crash, modeling, Khuzestan

Procedia PDF Downloads 279
2691 Fractional Order Differentiator Using Chebyshev Polynomials

Authors: Koushlendra Kumar Singh, Manish Kumar Bajpai, Rajesh Kumar Pandey

Abstract:

A discrete-time fractional order differentiator has been modeled for estimating the fractional order derivatives of a contaminated signal. The proposed approach is based on Chebyshev polynomials. We use the Riemann-Liouville fractional order derivative definition for designing the fractional order SG differentiator. In the first step, we calculate the window weights corresponding to the required fractional order. The signal is then convolved with these calculated window weights to find the fractional order derivatives of the signal. Several signals are considered for evaluating the accuracy of the proposed method.
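As an illustration of the window-weight-then-convolve scheme, the sketch below uses Grünwald-Letnikov binomial weights, a standard discrete approximation consistent with the Riemann-Liouville definition for the signals shown; the authors' Chebyshev-based weights differ.

```python
def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_k = (-1)^k * C(alpha, k), computed recursively."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def frac_derivative(signal, alpha, h=1.0):
    """Estimate the order-alpha derivative by convolving the signal
    with the fractional-order window weights (step size h)."""
    w = gl_weights(alpha, len(signal))
    return [sum(w[k] * signal[i - k] for k in range(i + 1)) / h ** alpha
            for i in range(len(signal))]
```

For alpha = 1 the weights collapse to (1, -1, 0, ...), recovering the ordinary first difference, and alpha = 0 returns the signal unchanged, which is a quick sanity check on the window construction.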

Keywords: fractional order derivative, chebyshev polynomials, signals, S-G differentiator

Procedia PDF Downloads 645
2690 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data

Authors: Huinan Zhang, Wenjie Jiang

Abstract:

Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery data, and research on, and utilization of, the inner thermal core structure characteristics of tropical cyclones still pose challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. Firstly, the thermal core information in the pressure direction is comprehensively expressed through the maximal intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone. These images provide a coarse-grained wind speed estimation in the first stage. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network and fused with the coarse-grained features from the first stage to obtain the final two-stage wind speed estimation. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and ordinal regression loss is adopted in the second stage to replace traditional single-value regression. The selected tropical cyclones span from 2012 to 2021 in the North Atlantic (NA) region. The training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021.
Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclones into three major categories: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes based on these data.
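The first-stage focal loss mentioned above down-weights well-classified examples relative to plain cross-entropy, which helps with the long-tail class distribution; in its standard binary form:

```python
import math

def focal_loss(p_true, gamma=2.0):
    """Focal loss for the probability assigned to the true class:
    FL = -(1 - p)^gamma * log(p). With gamma = 0 it reduces to cross-entropy."""
    return -((1.0 - p_true) ** gamma) * math.log(p_true)

easy = focal_loss(0.9)  # confidently correct example: heavily down-weighted
hard = focal_loss(0.1)  # badly misclassified example: nearly full loss
```

Because easy (majority-class) examples contribute almost nothing, the gradient is dominated by the rare, hard cases such as major hurricanes.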

Keywords: artificial intelligence, deep learning, data mining, remote sensing

Procedia PDF Downloads 54
2689 Surface Elevation Dynamics Assessment Using Digital Elevation Models, Light Detection and Ranging, GPS and Geospatial Information Science Analysis: Ecosystem Modelling Approach

Authors: Ali K. M. Al-Nasrawi, Uday A. Al-Hamdany, Sarah M. Hamylton, Brian G. Jones, Yasir M. Alyazichi

Abstract:

Surface elevation dynamics have always responded to disturbance regimes. Creating Digital Elevation Models (DEMs) to detect surface dynamics has led to the development of several methods, devices and data clouds. DEMs can provide accurate and quick results with cost efficiency, in comparison to traditional geomatics survey techniques. Nowadays, remote sensing datasets have become a primary source for creating DEMs, including LiDAR point clouds with GIS analytic tools. However, these data need to be tested for error detection and correction. This paper evaluates various DEMs from different data sources over time for Apple Orchard Island, a coastal site in southeastern Australia, in order to detect surface dynamics. Subsequently, 30 chosen locations were examined in the field to test the error of the DEM surface detection using high-resolution global positioning systems (GPSs). Results show significant surface elevation changes on Apple Orchard Island. Accretion occurred on most of the island, while surface elevation loss due to erosion is limited to the northern and southern parts. Concurrently, the projected differential correction and validation method aimed to identify errors in the dataset. The resultant DEMs demonstrated a small error ratio (≤ 3%) against the gathered datasets when compared with the fieldwork survey using RTK-GPS. As modern modelling approaches need to become more effective and accurate, applying several tools to create different DEMs on a multi-temporal scale would allow straightforward predictions within time and cost frames, with more comprehensive coverage and greater accuracy. In the eco-geomorphic context, such insights into ecosystem dynamics at a coastal intertidal system would be valuable for assessing the accuracy of the predicted eco-geomorphic risk and for sustainable conservation management.
This framework for evaluating historical and current anthropogenic and environmental stressors on coastal surface elevation dynamics could be profitably applied worldwide.
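At its core, DEM-based change detection reduces to differencing co-registered elevation grids and masking changes below the vertical error floor; a minimal sketch (the toy grids and threshold below are invented):

```python
def dem_change(dem_t1, dem_t2, noise_floor=0.05):
    """Per-cell elevation change (m) between two co-registered DEM grids;
    differences below the vertical noise floor are treated as no change."""
    return [[(b - a) if abs(b - a) >= noise_floor else 0.0
             for a, b in zip(row1, row2)]
            for row1, row2 in zip(dem_t1, dem_t2)]

before = [[1.00, 2.00], [3.00, 4.00]]  # earlier DEM (elevations in m)
after = [[1.02, 2.30], [2.80, 4.00]]   # later DEM of the same cells
change = dem_change(before, after)     # positive = accretion, negative = erosion
```

Field validation (e.g., the RTK-GPS survey above) is what justifies the choice of noise floor: changes smaller than the combined DEM error should not be interpreted as accretion or erosion.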

Keywords: DEMs, eco-geomorphic-dynamic processes, geospatial information science, remote sensing, surface elevation changes

Procedia PDF Downloads 265
2688 Health Economics in the Cost-Benefit Analysis of Transport Schemes

Authors: Henry Kelly, Helena Shaw

Abstract:

This paper explores how innovative methods from health economics and, to a lesser extent, wellbeing analysis can be applied in the cost-benefit analysis (CBA) of transport infrastructure and policy interventions. The context focuses on the framework articulated by the UK Treasury (finance department) and the English Department for Transport. Both have well-established methods for undertaking CBA, but there is increased policy interest, particularly at a regional level, in exploring broader strategic goals beyond those traditionally associated with transport user benefits, productivity gains, and labour market access. Links to different CBA approaches internationally, such as those of New Zealand, France, and Wales, will be referenced. Assessing the impacts of policies through the quantification of health effects is a fruitful complementary line to explore. In a previous piece of work, 14 impact pathways were identified, mapping the relationship between transport and health. These are wide-ranging, from improved employment prospects, the stress of unreliable journey times, and air quality to isolation and loneliness. Importantly, we will consider these different measures of health from an intersectional point of view to ensure that biases that remain in the health industry are not translated across to this work. The objective is to explore how a CBA based on these pathways may, through quantifying forecast impacts in terms of Quality-Adjusted Life Years (QALYs), produce different findings than a standard approach. Of particular interest is how a health-based approach may have different distributional impacts on socio-economic groups and may favour distinct types of interventions. Consideration will be given to the degree to which this approach may double-count impacts, or whether it is possible to identify benefits additional to the established CBA approach.
The investigation will explore a range of schemes, from a high-speed rail link, highway improvements, rural mobility hubs, and coach services to cycle lanes. The conclusions should aid the progression of methods for the assessment of publicly funded infrastructure projects.
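Converting forecast pathway impacts into a single monetized figure via QALYs can be sketched as below; the pathway values and the willingness-to-pay threshold are purely illustrative, not official Treasury or Department for Transport figures.

```python
def qaly_benefit(pathway_qalys, value_per_qaly=70_000.0):
    """Aggregate forecast QALY changes across impact pathways and monetize them.
    The value-per-QALY threshold here is illustrative only."""
    total = sum(pathway_qalys.values())
    return total, total * value_per_qaly

# Invented pathway forecasts for a hypothetical scheme (QALYs over the appraisal period)
effects = {"active_travel": 120.0, "air_quality": 35.0, "journey_stress": -10.0}
total_qalys, monetized = qaly_benefit(effects)
```

Note that negative pathways (here, journey stress) net off against gains, and the double-counting question raised above amounts to asking whether any of these QALY pathways overlaps with benefits already monetized in the standard transport CBA.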

Keywords: cost-benefit analysis, health, QALYs, transport

Procedia PDF Downloads 75
2687 Optimisation of the Input Layer Structure for Feedforward NARX Neural Networks

Authors: Zongyan Li, Matt Best

Abstract:

This paper presents an optimization method for reducing the number of input channels and the complexity of a feed-forward NARX neural network (NN) without compromising the accuracy of the NN model. Using a correlation analysis method, the most significant regressors are selected to form the input layer of the NN structure. An application to vehicle dynamic model identification is also presented to demonstrate the optimization technique, and the optimal input layer structure and optimal number of neurons for the neural network are investigated.
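The correlation-based selection step can be sketched as keeping only candidate lagged signals whose correlation with the output exceeds a threshold; the series, names, and threshold below are invented for illustration.

```python
import math

def corr(x, y):
    """Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def select_regressors(candidates, output, threshold=0.5):
    """Form the NARX input layer from candidate lagged signals that are
    sufficiently correlated with the output to be modeled."""
    return [name for name, series in candidates.items()
            if abs(corr(series, output)) >= threshold]

y = [1.0, 2.0, 3.0, 4.0, 5.0]             # output to be modeled
candidates = {
    "u(k-1)": [1.1, 2.0, 2.9, 4.2, 5.1],  # informative lagged input
    "noise": [3.0, 1.0, 4.0, 1.0, 5.0],   # uninformative channel
}
selected = select_regressors(candidates, y)
```

Dropping the uninformative channels shrinks the input layer, and hence the network complexity, before any training takes place.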

Keywords: correlation analysis, F-ratio, levenberg-marquardt, MSE, NARX, neural network, optimisation

Procedia PDF Downloads 367
2686 Relativity in Toddlers' Understanding of the Physical World as Key to Misconceptions in the Science Classroom

Authors: Michael Hast

Abstract:

Within their first year, infants can differentiate between objects based on their weight. By at least 5 years of age, children hold consistent weight-related misconceptions about the physical world, such as that heavy things fall faster than lighter ones because of their weight. Such misconceptions are seen as a challenge for science education since they are often highly resistant to change through instruction. Understanding the time point of emergence of such ideas could, therefore, be crucial for early science pedagogy. The paper thus discusses two studies that jointly address the issue by examining young children’s search behaviour in hidden displacement tasks under consideration of relative object weight. In both studies, children were tested with a heavy or a light ball, and they either had information about one of the balls only or both. In Study 1, 88 toddlers aged 2 to 3½ years watched a ball being dropped into a curved tube and were then allowed to search for the ball in three locations – one straight beneath the tube entrance, one where the curved tube led to, and one that corresponded to neither of the previous outcomes. Success and failure at the task were not impacted by the weight of the balls alone in any particular way. However, from around 3 years onwards, relative lightness, gained through having tactile experience of both balls beforehand, enhanced search success. Conversely, relative heaviness increased search errors such that children increasingly searched in the location immediately beneath the tube entry – known as the gravity bias. In Study 2, 60 toddlers aged 2, 2½ and 3 years watched a ball roll down a ramp and behind a screen with four doors, with a barrier placed along the ramp after one of four doors. Toddlers were allowed to open the doors to find the ball. While search accuracy generally increased with age, relative weight did not play a role in 2-year-olds’ search behaviour. Relative lightness improved 2½-year-olds’ searches.
At 3 years, both relative lightness and relative heaviness had a significant impact, with the former improving search accuracy and the latter reducing it. Taken together, both studies suggest that between 2 and 3 years of age, relative object weight is increasingly taken into consideration in navigating naïve physical concepts. In particular, it appears to contribute to the early emergence of misconceptions relating to object weight. This insight from developmental psychology research may have consequences for early science education and related pedagogy towards early conceptual change.

Keywords: conceptual development, early science education, intuitive physics, misconceptions, object weight

Procedia PDF Downloads 187
2685 Calculation of Physiological Lung Motion in External Lung Irradiation

Authors: Yousif Mohamed Y. Abdallah, Khalid H. Eltom

Abstract:

This experimental study deals with the measurement of periodic physiological organ motion during external lung irradiation, in order to reduce the exposure of healthy tissue during radiation treatments. The results showed a left lung displacement of 4.52 ± 1.99 mm and a right lung displacement of 8.21 ± 3.77 mm, for which the radiotherapy physician should take suitable countermeasures in case of significant errors. The motion ranged between 2.13 mm and 12.2 mm (minimum and maximum). In conclusion, the calculation of tumour mobility can improve the accuracy of target area definition in patients undergoing stereotactic RT for stage I, II and III lung cancer (NSCLC). Definition of the target volume based on a high-resolution CT scan with a margin of 3-5 mm is appropriate.

Keywords: physiological motion, lung, external irradiation, radiation medicine

Procedia PDF Downloads 413
2684 On-Road Text Detection Platform for Driver Assistance Systems

Authors: Guezouli Larbi, Belkacem Soundes

Abstract:

The automation of the text detection process can assist the driver. It can be very useful in giving drivers more information about their environment by facilitating the reading of road signs such as directional signs, events, stores, etc. In this paper, a system consisting of two stages is proposed. In the first stage, we used pseudo-Zernike moments to pinpoint areas of the image that may contain text. The architecture of this part is based on three main steps: region of interest (ROI) detection, text localization, and non-text region filtering. Then, in the second stage, we present a convolutional neural network architecture (On-Road Text Detection Network, ORTDN) that constitutes the classification phase. The results show that the proposed framework achieved ≈ 35 fps and an mAP of ≈ 90%, i.e., a low computational time with competitive accuracy.

Keywords: text detection, CNN, PZM, deep learning

Procedia PDF Downloads 79
2683 Effect of Chemical Modification of Functional Groups on Copper(II) Biosorption by Brown Marine Macroalgae Ascophyllum nodosum

Authors: Luciana P. Mazur, Tatiana A. Pozdniakova, Rui A. R. Boaventura, Vitor J. P. Vilar

Abstract:

The principal mechanism of metal ion sequestration by brown algae involves the formation of complexes between the metal ion and functional groups present on the cell wall of the biological material. To understand the role of functional groups in copper(II) uptake by Ascophyllum nodosum, some functional groups were chemically modified. The esterification of carboxylic groups was carried out by suspending the biomass in a methanol/HCl solution under stirring for 48 h, and the blocking of the sulfonic groups was performed by repeating the same procedure for 4 cycles of 48 h. The methylation of amines was conducted by suspending the biomass in a formaldehyde/formic acid solution under shaking for 6 h, and the chemical modification of sulfhydryl groups on the biomass surface was achieved using dithiodipyridine for 1 h. Equilibrium sorption studies for Cu2+ using the raw and esterified algae were performed at pH 2.0 and 4.0. The experiments were performed using an initial copper concentration of 300 mg/L and an algae dose of 1.0 g/L. After reaching equilibrium, the metal in solution was quantified by atomic absorption spectrometry. The biological material was analyzed by Fourier transform infrared spectroscopy and potentiometric titration for functional group identification and quantification, respectively. The results using unmodified algae showed that the maximum copper uptake capacity at pH 4.0 and 2.0 was 1.17 and 0.52 mmol/g, respectively. At acidic pH values most carboxyl groups are protonated, and copper sorption suffered a significant reduction of 56%. Blocking the carboxylic, sulfonic, amine and sulfhydryl functional groups, copper uptake decreased by 24/26%, 69/81%, 1/23% and 40/27% at pH 2.0/4.0, respectively, when compared to the unmodified biomass. It was possible to conclude that the carboxylic and sulfonic groups are the main functional groups responsible for copper binding (>80%).
This result is supported by the fact that the adsorption capacity is directly related to the presence of carboxylic groups of the alginate polymer, and the second most abundant acidic functional group in brown algae is the sulfonic acid of fucoidan that contributes, to a lower extent, to heavy metal binding, particularly at low pH.
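Equilibrium uptake figures such as 1.17 mmol/g follow from a standard mass balance, q = (C0 − Ce)·V/m. The sketch below applies it with the abstract's initial concentration (300 mg/L) and dose (1.0 g/L), back-calculating the equilibrium concentration as an illustration; Ce is not reported in the abstract, so the value shown is inferred, not measured:

```python
CU_MOLAR_MASS = 63.55  # g/mol for copper

def uptake_mmol_per_g(c0_mg_L, ce_mg_L, dose_g_L):
    """Equilibrium uptake q = (C0 - Ce) / dose, converted from mg/g to mmol/g."""
    q_mg_g = (c0_mg_L - ce_mg_L) / dose_g_L
    return q_mg_g / CU_MOLAR_MASS

# Back-calculate the Ce that would yield the reported 1.17 mmol/g at pH 4.0,
# with C0 = 300 mg/L and an algae dose of 1.0 g/L (illustrative inversion).
q_target = 1.17  # mmol/g
ce = 300.0 - q_target * CU_MOLAR_MASS * 1.0
print(f"implied equilibrium concentration: {ce:.1f} mg/L")
print(f"check: q = {uptake_mmol_per_g(300.0, ce, 1.0):.2f} mmol/g")
```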

Keywords: biosorption, brown marine macroalgae, copper, ion-exchange

Procedia PDF Downloads 319
2682 Curve Fitting by Cubic Bezier Curves Using Migrating Birds Optimization Algorithm

Authors: Mitat Uysal

Abstract:

A new metaheuristic optimization algorithm called Migrating Birds Optimization is used for curve fitting by rational cubic Bezier curves. This requires solving a complicated multivariate optimization problem. In this study, the solution of this optimization problem is achieved by the Migrating Birds Optimization algorithm, a powerful nature-inspired metaheuristic well suited to optimization. The results of this study show that the proposed method performs very well and is able to fit the data points to cubic Bezier curves with a high degree of accuracy.
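The multivariate objective such a metaheuristic minimizes is the sum of squared distances between the data points and the Bezier curve at assigned parameter values. A minimal sketch of that objective for a (non-rational) cubic Bezier curve follows; the Migrating Birds Optimization search itself is omitted, so any minimizer could be plugged in where indicated:

```python
def bezier3(ctrl, t):
    """Evaluate a cubic Bezier curve with control points ctrl at parameter t."""
    b = [(1 - t)**3, 3*t*(1 - t)**2, 3*t**2*(1 - t), t**3]  # Bernstein basis
    return tuple(sum(b[i] * ctrl[i][k] for i in range(4)) for k in range(2))

def chord_length_params(pts):
    """Assign each data point a parameter in [0, 1] by cumulative chord length."""
    d = [0.0]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        d.append(d[-1] + ((x1 - x0)**2 + (y1 - y0)**2) ** 0.5)
    return [di / d[-1] for di in d]

def fitting_error(ctrl, pts, ts):
    """Objective to minimize: sum of squared residuals at the assigned parameters."""
    err = 0.0
    for (x, y), t in zip(pts, ts):
        bx, by = bezier3(ctrl, t)
        err += (x - bx)**2 + (y - by)**2
    return err

# Sanity check: data sampled from a known curve gives (near-)zero error at the
# true control points. A real solver (e.g. MBO) would search over ctrl instead.
true_ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]
ts = [i / 9 for i in range(10)]
pts = [bezier3(true_ctrl, t) for t in ts]
print(fitting_error(true_ctrl, pts, ts))
```

For measured data without known parameters, `chord_length_params` supplies a reasonable initial parameterization.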

Keywords: algorithms, Bezier curves, heuristic optimization, migrating birds optimization

Procedia PDF Downloads 331
2681 Study of Error Analysis and Sources of Uncertainty in the Measurement of Residual Stresses by the X-Ray Diffraction

Authors: E. T. Carvalho Filho, J. T. N. Medeiros, L. G. Martinez

Abstract:

Residual stresses are self-equilibrating stresses in a rigid body that act on the microstructure of the material without the application of an external load. They are elastic stresses and can be induced by mechanical, thermal and chemical processes, causing a deformation gradient in the crystal lattice that favors premature failure in mechanical components. The search for measurements with good reliability has been of great importance for the manufacturing industries. Several methods are able to quantify these stresses according to physical principles and the response of the mechanical behavior of the material. The X-ray diffraction technique is one of the most sensitive techniques for small variations of the crystalline lattice, since the X-ray beam interacts with the interplanar distance. Being a very sensitive technique, it is also susceptible to variations in measurements, requiring a study of the factors that influence the final result. Instrumental and operational factors, form deviations of the samples and the geometry of analysis are some variables that need to be considered and analyzed in order to obtain the true measurement. The aim of this work is to analyze the sources of error inherent to the residual stress measurement process by the X-ray diffraction technique, making an interlaboratory comparison to verify the reproducibility of the measurements. In this work, two specimens were machined, differing from each other by the surface finishing: grinding and polishing. Additionally, iron powder with particle size less than 45 µm was selected to serve as a reference (as recommended by the ASTM E915 standard) for the tests. To verify the deviations caused by the equipment, those specimens were positioned and, under the same analysis condition, seven measurements were carried out at 11 ψ tilts. To verify sample positioning errors, seven measurements were performed, repositioning the sample at each measurement. To check geometry errors, measurements were repeated for the Bragg-Brentano and parallel-beam geometries. In order to verify the reproducibility of the method, the measurements were performed in two different laboratories with different equipment. The results were statistically analyzed and the errors quantified.
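The ψ-tilt measurements feed the standard sin²ψ evaluation: the interplanar spacing d is regressed against sin²ψ, and the slope of that line yields the stress. A minimal sketch assuming a single in-plane stress component (equi-biaxial terms affect only the intercept, not the slope); the abstract does not detail its evaluation, so the elastic constants and synthetic data below are illustrative:

```python
import math

def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

def sin2psi_stress(psis_deg, d_spacings, d0, E, nu):
    """Stress from the slope of d vs sin^2(psi): sigma = m*E / (d0*(1+nu))."""
    x = [math.sin(math.radians(p)) ** 2 for p in psis_deg]
    m, _ = linfit(x, d_spacings)
    return m * E / (d0 * (1 + nu))

# Synthetic check: generate d(psi) for a known stress and recover it.
E, nu, d0 = 210e3, 0.29, 1.1702          # MPa, -, angstrom (typical alpha-Fe 211)
sigma_true = 150.0                        # MPa
psis = [0, 9, 18, 27, 33, 39, 45]         # example tilt angles (deg)
d = [d0 * (1 + (1 + nu) / E * sigma_true * math.sin(math.radians(p)) ** 2)
     for p in psis]
print(f"recovered stress: {sin2psi_stress(psis, d, d0, E, nu):.1f} MPa")  # 150.0
```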

Keywords: residual stress, x-ray diffraction, repeatability, reproducibility, error analysis

Procedia PDF Downloads 175
2680 Engineering Optimization of Flexible Energy Absorbers

Authors: Reza Hedayati, Meysam Jahanbakhshi

Abstract:

Elastic energy absorbers consisting of a ring-like plate and springs can be a good choice for increasing the impact duration during an accident. In the current project, an energy absorber system is optimized using four optimization methods: Kuhn-Tucker, Sequential Linear Programming (SLP), Concurrent Subspace Design (CSD), and Pshenichny-Lim-Belegundu-Arora (PLBA). Solution time, convergence, programming length and accuracy of the results were considered to find the best solution algorithm. Results showed the superiority of PLBA over the other algorithms.

Keywords: Concurrent Subspace Design (CSD), Kuhn-Tucker, Pshenichny-Lim-Belegundu-Arora (PLBA), Sequential Linear Programming (SLP)

Procedia PDF Downloads 392
2679 Detection of Chaos in General Parametric Model of Infectious Disease

Authors: Javad Khaligh, Aghileh Heydari, Ali Akbar Heydari

Abstract:

Mathematical epidemiological models for the spread of disease through a population are used to predict the prevalence of a disease or to study the impacts of treatment or prevention measures. Initial conditions for these models are measured from statistical data collected from a population. Since these initial conditions can never be exact, the presence of chaos in mathematical models has serious implications for the accuracy of the models as well as for how epidemiologists interpret their findings. This paper confirms the chaotic behavior of a model for dengue fever and an SI model by investigating sensitive dependence, bifurcation, and the 0-1 test under a variety of initial conditions.
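The 0-1 test mentioned above (Gottwald-Melbourne) maps a time series to a single statistic K: K ≈ 1 indicates chaos, K ≈ 0 a regular orbit. A minimal sketch of the correlation variant, demonstrated on the logistic map rather than the paper's epidemic model (the paper's model equations are not given in the abstract):

```python
import math, random

def zero_one_test(series, n_c=10, seed=0):
    """Gottwald-Melbourne 0-1 test (correlation method, modified MSD).
    Returns the median K over random frequencies c: ~1 chaotic, ~0 regular."""
    rng = random.Random(seed)
    N = len(series)
    ncut = N // 10
    mean_phi = sum(series) / N
    Ks = []
    for _ in range(n_c):
        c = rng.uniform(math.pi / 5, 4 * math.pi / 5)
        p, q = [0.0], [0.0]                       # translation variables
        for j, phi in enumerate(series, start=1):
            p.append(p[-1] + phi * math.cos(j * c))
            q.append(q[-1] + phi * math.sin(j * c))
        ns, M = [], []
        for n in range(1, ncut + 1):
            msd = sum((p[j + n] - p[j]) ** 2 + (q[j + n] - q[j]) ** 2
                      for j in range(N - ncut)) / (N - ncut)
            # subtract the bounded oscillatory term (modified MSD)
            msd -= mean_phi ** 2 * (1 - math.cos(n * c)) / (1 - math.cos(c))
            ns.append(n); M.append(msd)
        Ks.append(_corr(ns, M))
    Ks.sort()
    return Ks[len(Ks) // 2]

def _corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def logistic(r, n, x0=0.41):
    """Logistic map orbit with the transient discarded."""
    x, out = x0, []
    for i in range(n + 200):
        x = r * x * (1 - x)
        if i >= 200:
            out.append(x)
    return out

print("chaotic  (r=3.99):", round(zero_one_test(logistic(3.99, 1000)), 2))
print("periodic (r=3.20):", round(zero_one_test(logistic(3.2, 1000)), 2))
```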

Keywords: epidemiological models, SEIR disease model, bifurcation, chaotic behavior, 0-1 test

Procedia PDF Downloads 321
2678 Household Solid Waste Generation per Capita and Management Behaviour in Mthatha City, South Africa

Authors: Vuyayo Tsheleza, Simbarashe Ndhleve, Christopher Mpundu Musampa

Abstract:

Mismanagement of waste is an increasingly common malpractice in most developing countries, especially in fast-growing cities. Household solid waste in Mthatha has been reported to be one of the problems facing the city and is overwhelming local authorities, as it is beyond the capacity of the existing waste management system. This study estimates per capita waste generation, the quantity of different waste types generated by inhabitants of formal and informal settlements in Mthatha, as well as waste management practices in the aforementioned socio-economic strata. A total of 206 households was systematically selected for the study using stratified random sampling, categorized into formal and informal settlements. Data on household waste generation rate, composition, awareness, and household waste management behaviour and practices were gathered through mixed methods. The sampled households from both formal and informal settlements, with a total of 684 people, generated 1949 kg per week. This translates to 2.84 kg per capita per week. On average, the rate of solid waste generation per capita was 0.40 kg per day for a person living in an informal settlement and 0.56 kg per day for a person living in a formal settlement. Recorded in descending order, food waste accounted for the largest proportion of generated waste at approximately 23.7%, followed by disposable nappies at 15%, paper and cardboard at 13.34%, glass at 13.03%, metals at 11.99%, plastics at 11.58%, residue at 5.17%, textiles at 3.93%, and leather and rubber at 2.28% as the least generated waste type. Different waste management practices were reported in formal and informal settlements, with formal settlements proving to be more concerned about environmental management than their informal counterparts.
Understanding attitudes and perceptions on waste management, waste types and the per capita solid waste generation rate can help evolve appropriate waste management strategies based on the principles of reduce, re-use, recycle and environmentally sound disposal, and also assist in projecting future waste generation rates. These results can be utilized as input when designing growing cities' waste management plans.
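The per capita figures above follow directly from the reported totals; a quick check (1949 kg / 684 people comes to about 2.85 kg per week, which the abstract rounds down to 2.84), plus a verification that the composition percentages sum to roughly 100%:

```python
total_kg_per_week = 1949
people = 684

per_capita_week = total_kg_per_week / people
per_capita_day = per_capita_week / 7
print(f"{per_capita_week:.2f} kg per capita per week")
print(f"{per_capita_day:.2f} kg per capita per day")
# The daily figure lies between the reported informal (0.40 kg) and
# formal (0.56 kg) settlement rates, closer to the informal rate.

composition = {  # waste type -> share of total (%), as reported
    "food waste": 23.7, "disposable nappies": 15.0, "paper/cardboard": 13.34,
    "glass": 13.03, "metals": 11.99, "plastics": 11.58, "residue": 5.17,
    "textiles": 3.93, "leather/rubber": 2.28,
}
print(f"composition total: {sum(composition.values()):.2f}%")
```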

Keywords: awareness, characterisation, per capita, quantification

Procedia PDF Downloads 293
2677 Camera Model Identification for Mi Pad 4, Oppo A37f, Samsung M20, and Oppo f9

Authors: Ulrich Wake, Eniman Syamsuddin

Abstract:

The model for camera model identification is trained using the pretrained models ResNet34 and ResNet50. The dataset consists of 500 photos of each phone and is divided into 1280 photos for training, 320 photos for validation and 400 photos for testing. The model is trained using the One Cycle Policy method and tested using Test-Time Augmentation. Furthermore, the model is trained for 50 epochs using regularization such as dropout and early stopping. The result is 90% accuracy on the validation set and above 85% with Test-Time Augmentation using ResNet50. Every model is also trained by slightly updating the pretrained model's weights.

Keywords: One Cycle Policy, ResNet34, ResNet50, Test-Time Augmentation

Procedia PDF Downloads 200
2676 Rapid Soil Classification Using Computer Vision, Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, Lionel L. J. Ang, Algernon C. S. Hong, Danette S. E. Tan, Grace H. B. Foo, K. Q. Hong, L. M. Cheng, M. L. Leong

Abstract:

This paper presents a novel rapid soil classification technique that combines computer vision with the four-probe soil electrical resistivity method and the cone penetration test (CPT) to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from local construction projects are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, so that proper treatment and usage can be exercised. However, this process is time-consuming and labour-intensive. Thus, a rapid classification method is needed at the SGs. Computer vision, four-probe soil electrical resistivity and CPT were combined into an innovative, non-destructive and instantaneous classification method for this purpose. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). Complementing the computer vision technique, the apparent electrical resistivity of soil (ρ) is measured using a set of four probes arranged in Wenner's array. It was found from a previous study that the ANN model coupled with ρ can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the soil strength is measured using a modified mini cone penetrometer, and w is measured using a set of time-domain reflectometry (TDR) probes. Laboratory proof-of-concept was conducted through a series of seven tests with three types of soils: “Good Earth”, “Soft Clay” and an even mix of the two.
Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay”. It is also found that these parameters can be integrated with the computer vision technique on-site to complete the rapid soil classification in less than three minutes.
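The GLCM textural parameters at the heart of the computer vision step can be sketched in a few lines. The implementation below builds a symmetric, normalized co-occurrence matrix for a horizontal offset and derives three common Haralick-style descriptors; the study's exact parameter set is not listed in the abstract, so these three are illustrative:

```python
def glcm(img, levels):
    """Symmetric, normalized gray-level co-occurrence matrix, offset (0, 1)."""
    P = [[0.0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            P[a][b] += 1
            P[b][a] += 1  # symmetric counting
    total = sum(sum(r) for r in P)
    return [[v / total for v in r] for r in P]

def glcm_features(P):
    """Contrast, energy and homogeneity of a normalized GLCM."""
    contrast = sum(P[i][j] * (i - j) ** 2
                   for i in range(len(P)) for j in range(len(P)))
    energy = sum(v * v for r in P for v in r)
    homogeneity = sum(P[i][j] / (1 + (i - j) ** 2)
                      for i in range(len(P)) for j in range(len(P)))
    return contrast, energy, homogeneity

# A perfectly flat patch has zero contrast and maximal energy (= 1.0),
# whereas an alternating pattern yields nonzero contrast.
flat = [[1, 1, 1], [1, 1, 1]]
print(glcm_features(glcm(flat, levels=4)))
```

In the study's pipeline, features like these (over several offsets and angles) would form the input vector to the ANN classifier.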

Keywords: computer vision, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 207
2675 An Introduction to the Concept of Environmental Audit: Indian Context

Authors: Pradip Kumar Das

Abstract:

The phenomenal growth of population and industry exploits the environment in varied ways. Consequently, the greenhouse effect and other allied problems are threatening mankind the world over. Protection and upgradation of the environment have, therefore, become the prime necessity of all mankind for the sustainable development of the environment. People in humbler walks of life, including corporate citizens, have become aware of the impacts of environmental pollution. Governments of various nations have entered the picture with laws and regulations to correct and cure the effects of present and past violations of environmental practices and to obstruct future violations of good environmental discipline. In this perspective, environmental audit directs verification and validation to ensure that the various environmental laws are complied with and that adequate care has been taken towards environmental protection and preservation. The discipline of environmental audit has experienced impressive development throughout the world. It examines the positive and negative effects of the activities of an enterprise on the environment and provides an in-depth study of the company's processes and growth in realizing long-term strategic goals. Environmental audit helps corporations assess their achievements, correct deficiencies, reduce risks to health and improve safety. Environmental audit, being a strong management tool, should be administered by industry for its own self-assessment. Developed countries all over the globe have gone ahead in environmental quantification; but unfortunately, there is a lack of awareness about pollution and environmental hazards among the common people in India. In the light of this situation, the conceptual analysis of this study is concerned with the rationale of environmental audit for industry and society as a whole and highlights the emerging dimensions in auditing theory and practice. A modest attempt has been made to throw light on the recent development of environmental audit in developing nations like India and the problems associated with its implementation. The conceptual study also reflects that, despite different obstacles, environmental audit is becoming an increasing presence within the corporate sector in India. Lastly, conclusions along with suggestions are offered to improve the current scenario.

Keywords: environmental audit, environmental hazards, environmental laws, environmental protection, environmental preservation

Procedia PDF Downloads 268
2674 Identification of the Expression of Top Deregulated MiRNAs in Rheumatoid Arthritis and Osteoarthritis

Authors: Hala Raslan, Noha Eltaweel, Hanaa Rasmi, Solaf Kamel, May Magdy, Sherif Ismail, Khalda Amr

Abstract:

Introduction: Rheumatoid arthritis (RA) is an inflammatory, autoimmune disorder with progressive joint damage. Osteoarthritis (OA) is a degenerative disease of the articular cartilage that shows multiple clinical manifestations or symptoms resembling those of RA. Genetic predisposition is believed to be a principal etiological factor for RA and OA. In this study, we aimed to measure the expression of the top deregulated miRNAs that might underlie the pathogenesis of both diseases, according to our latest NGS analysis. Six of the deregulated miRNAs were selected as they had multiple target genes in the RA pathway, making them more likely to affect RA pathogenesis. Methods: Cases recruited in this study comprised 45 rheumatoid arthritis (RA) and 30 osteoarthritis (OA) patients, as well as 20 healthy controls. The selection of the miRNAs from our latest NGS study was done using miRWalk, according to the number of their target genes that are members of the KEGG RA pathway. Total RNA was isolated from the plasma of all recruited cases. cDNA was generated with the miRCURY RT Kit and then used as a template for real-time PCR with miRCURY Primer Assays and the miRCURY SYBR Green PCR Kit. Fold changes were calculated from CT values using the ΔΔCT method of relative quantification. Results were compared RA vs. controls and OA vs. controls. Target gene prediction and functional annotation of the deregulated miRNAs were done using MIENTURNET. Results: Six miRNAs were selected: miR-15b-3p, -128-3p, -194-3p, -328-3p, -542-3p and -3180-5p. In RA samples, three of the measured miRNAs were upregulated (miR-194, -542, and -3180; mean Rq = 2.6, 3.8 and 8.05; P-value = 0.07, 0.05 and 0.01, respectively) while the remaining three were downregulated (miR-15b, -128 and -328; mean Rq = 0.21, 0.39 and 0.6; P-value < 0.0001, < 0.0001 and 0.02, respectively), all with high statistical significance except miR-194.
In OA samples, two of the measured miRNAs were upregulated (miR-194 and -3180; mean Rq = 2.6 and 7.7; P-value = 0.1 and 0.03, respectively) while the remaining four were downregulated (miR-15b, -128, -328 and -542; mean Rq = 0.5, 0.03, 0.08 and 0.5; P-value = 0.0008, 0.003, 0.006 and 0.4, respectively), with statistical significance compared to controls except for miR-194 and miR-542. The functional enrichment of the selected top deregulated miRNAs revealed the highly enriched KEGG pathways and GO terms. Conclusion: Five of the studied miRNAs were greatly deregulated in RA and OA; they might be highly involved in disease pathogenesis and so might be future therapeutic targets. Further functional studies are crucial to assess their roles and actual target genes.
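The relative quantification step reduces to simple arithmetic on CT values: ΔCT = CT(target) − CT(reference) within each sample, ΔΔCT = ΔCT(patient) − ΔCT(control), and fold change Rq = 2^(−ΔΔCT). A sketch with hypothetical CT values (the study's raw CT data are not reported in the abstract):

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    """Relative quantification by the delta-delta-CT (Livak) method."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_ctrl
    return 2 ** (-dd_ct)

# Hypothetical CT values: the target miRNA amplifies 2 cycles earlier
# (relative to the reference gene) in patients than in controls,
# i.e. a 4-fold upregulation.
rq = fold_change(ct_target_sample=24.0, ct_ref_sample=20.0,
                 ct_target_ctrl=26.0, ct_ref_ctrl=20.0)
print(rq)  # 4.0
```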

Keywords: MiRNAs, expression, rheumatoid arthritis, osteoarthritis

Procedia PDF Downloads 74
2673 Modeling and Validation of Microspheres Generation in the Modified T-Junction Device

Authors: Lei Lei, Hongbo Zhang, Donald J. Bergstrom, Bing Zhang, K. Y. Song, W. J. Zhang

Abstract:

This paper presents a model of a modified T-junction device for microsphere generation. The numerical model is developed using a commercial software package, COMSOL Multiphysics. In order to test the accuracy of the numerical model, multiple variables, such as the flow rate of the cross-flow, fluid properties, and the structure and geometry of the microdevice, are varied. The results from the model are compared with the experimental results in terms of the diameter of the microspheres generated. The comparison shows good agreement. The model is therefore useful for further optimization of the device and for feedback control of microsphere generation.

Keywords: CFD modeling, validation, microsphere generation, modified T-junction

Procedia PDF Downloads 697
2672 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images

Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi

Abstract:

Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Nowadays, owing to the busyness of people, the use of fast food is increasing, and therefore diagnosis of this disease and its treatment are of particular importance. To determine the best treatment approach for each specific colon cancer patient, the oncologist should know the stage of the tumor. The most common method to determine the tumor stage is the TNM staging system. In this system, M indicates the presence of metastasis, N indicates the extent of spread to the lymph nodes, and T indicates the size of the tumor. It is clear that in order to determine all three of these parameters, an imaging method must be used, and the gold standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, due to the use of X-rays, the risk of cancer and the absorbed dose of the patient are high, while for the PET/CT method there is a lack of access to the device due to its high cost. Therefore, in this study, we aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1300 MR images were collected from the TCIA portal, and in the first step (pre-processing), histogram equalization to improve image quality and resizing to obtain a uniform image size were done. Two expert radiologists, who have worked for more than 21 years on colon cancer cases, segmented the images and extracted the tumor region. The next step is feature extraction from the segmented images and then classification of the data into three classes: T0N0, T3N1 and T3N2. In this article, the VGG-16 convolutional neural network has been used to perform both of the above-mentioned tasks, i.e., feature extraction and classification. This network has 13 convolution layers for feature extraction and three fully connected layers with the softmax activation function for classification.
In order to validate the proposed method, the validation was repeated 10 times; each time, the data were randomly divided into three parts, training (70% of the data), validation (10% of the data) and the rest for testing, then the accuracy, sensitivity and specificity of the model were calculated, and the average of the ten repetitions is reported as the result. The accuracy, specificity and sensitivity of the proposed method on the testing dataset were 89.09%, 95.8% and 96.4%. Compared to previous studies, the use of a safe imaging technique (MRI) and the non-use of predefined hand-crafted imaging features to determine the stage of colon cancer patients are some of the study's advantages.
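The per-repetition accuracy, sensitivity and specificity follow from confusion counts. A minimal sketch for a binary (one-vs-rest) evaluation, using hypothetical counts rather than the study's data; in the three-class setting above, these would be computed per class and averaged:

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity (recall) and specificity from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical one-vs-rest counts for a single class on one test split.
acc, sens, spec = metrics(tp=45, fp=3, fn=2, tn=80)
print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")

# Averaging over repeated random splits, as described above
# (illustrative per-repetition values):
repeats = [(0.89, 0.96, 0.95), (0.91, 0.97, 0.96), (0.88, 0.95, 0.96)]
mean = [sum(col) / len(repeats) for col in zip(*repeats)]
print("mean (acc, sens, spec):", [round(m, 3) for m in mean])
```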

Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis

Procedia PDF Downloads 52
2671 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network

Authors: Gulfam Haider, Sana Danish

Abstract:

Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizers profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, Rectified Linear Unit (ReLU). By incorporating ReLU, known for its favorable properties in prior research, the aim is to bolster the effectiveness of the optimizers under scrutiny. Specifically, we evaluate the adjustment of these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments are conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research dataset (CIFAR-10). The selected optimizers for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis encompasses a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. 
By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state-of-the-art in the field of image classification with convolutional neural networks.
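The optimizers compared above differ only in their parameter-update rules. A self-contained sketch of three of them (SGD, RMSprop, Adam) minimizing a toy one-dimensional quadratic, with textbook default hyperparameters rather than the exact settings of the study:

```python
import math

def sgd(grad, w, lr=0.1, state=None):
    """Plain gradient descent step."""
    return w - lr * grad(w), state

def rmsprop(grad, w, lr=0.01, rho=0.9, eps=1e-8, state=None):
    """Scale the step by a running mean of squared gradients."""
    v = state or 0.0
    g = grad(w)
    v = rho * v + (1 - rho) * g * g
    return w - lr * g / (math.sqrt(v) + eps), v

def adam(grad, w, lr=0.05, b1=0.9, b2=0.999, eps=1e-8, state=None):
    """Bias-corrected first- and second-moment estimates of the gradient."""
    m, v, t = state or (0.0, 0.0, 0)
    g = grad(w); t += 1
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    return w - lr * m_hat / (math.sqrt(v_hat) + eps), (m, v, t)

def minimize(step, grad, w0=0.0, iters=500):
    w, state = w0, None
    for _ in range(iters):
        w, state = step(grad, w, state=state)
    return w

grad = lambda w: 2 * (w - 3.0)  # gradient of f(w) = (w - 3)^2, minimum at w = 3
for name, step in [("SGD", sgd), ("RMSprop", rmsprop), ("Adam", adam)]:
    print(f"{name}: w* = {minimize(step, grad):.3f}")  # all approach 3.0
```

In a CNN each scalar update generalizes element-wise to the full weight tensors; the same structural differences drive the convergence-speed contrasts evaluated in the study.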

Keywords: deep neural network, optimizers, RMsprop, ReLU, stochastic gradient descent

Procedia PDF Downloads 118