Search results for: dimensional accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5684


704 Systematic Review and Meta-Analysis of Navigation in Oral and Maxillofacial Trauma and the Impact of Machine Learning and AI in Management

Authors: Shohreh Ghasemi

Abstract:

Introduction: Managing oral and maxillofacial trauma is a multifaceted challenge, as it can have life-threatening consequences and significant functional and aesthetic impacts. Navigation techniques have been introduced to improve surgical precision to meet this challenge. Machine learning algorithms have also been developed to support clinical decision-making in treating oral and maxillofacial trauma. Given these advances, this systematic review and meta-analysis aims to assess the efficacy of navigational techniques in treating oral and maxillofacial trauma and to explore the impact of machine learning on its management. Methods: A detailed and comprehensive analysis of studies published between January 2010 and September 2021 was conducted. A thorough search of the Web of Science, Embase, and PubMed databases was performed to identify studies evaluating the efficacy of navigational techniques and the impact of machine learning in managing oral and maxillofacial trauma. Studies that did not meet the established inclusion criteria were excluded. In addition, the overall quality of the included studies was evaluated using the Cochrane risk of bias tool and the Newcastle-Ottawa scale. Results: A total of 12 studies, including 869 patients with oral and maxillofacial trauma, met the inclusion criteria. Analysis of these studies revealed that navigation techniques effectively improve surgical accuracy and minimize the risk of complications. Additionally, machine learning algorithms have proven effective in predicting treatment outcomes and identifying patients at high risk of complications. Conclusion: The introduction of navigational technology has great potential to improve surgical precision in oral and maxillofacial trauma treatment. Furthermore, developing machine learning algorithms offers opportunities to improve clinical decision-making and patient outcomes. Still, further studies are necessary to corroborate these results and establish the optimal use of these technologies in managing oral and maxillofacial trauma.

Keywords: trauma, machine learning, navigation, maxillofacial, management

Procedia PDF Downloads 58
703 Assessing Overall Thermal Conductance Value of Low-Rise Residential Home Exterior Above-Grade Walls Using Infrared Thermography Methods

Authors: Matthew D. Baffa

Abstract:

Infrared thermography is a non-destructive test method used to estimate surface temperatures based on the amount of electromagnetic energy radiated by building envelope components. These surface temperatures are indicators of various qualitative building envelope deficiencies such as locations and extent of heat loss, thermal bridging, damaged or missing thermal insulation, air leakage, and moisture presence in roof, floor, and wall assemblies. Although infrared thermography is commonly used for qualitative deficiency detection in buildings, this study assesses its use as a quantitative method to estimate the overall thermal conductance value (U-value) of the exterior above-grade walls of a study home. The overall U-value of exterior above-grade walls in a home provides useful insight into the energy consumption and thermal comfort of a home. Three methodologies from the literature were employed to estimate the overall U-value by equating conductive heat loss through the exterior above-grade walls to the sum of convective and radiant heat losses of the walls. Outdoor infrared thermography field measurements of the exterior above-grade wall surface and reflective temperatures and emissivity values for various components of the exterior above-grade wall assemblies were carried out during winter months at the study home using a basic thermal imager device. The overall U-values estimated from each methodology from the literature using the recorded field measurements were compared to the nominal exterior above-grade wall overall U-value calculated from materials and dimensions detailed in architectural drawings of the study home. The nominal overall U-value was validated through calendarization and weather normalization of utility bills for the study home as well as various estimated heat loss quantities from a HOT2000 computer model of the study home and other methods. 
Under ideal environmental conditions, the estimated overall U-values deviated from the nominal overall U-value by between 2% and 33%. This study suggests infrared thermography can estimate the overall U-value of exterior above-grade walls in low-rise residential homes with a fair degree of accuracy.
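The energy balance described above, conductive loss through the wall equated to the sum of convective and radiant losses at the exterior surface, can be sketched as a short calculation. This is a minimal illustration, not the study's procedure: the function name, the convective coefficient, and all sample temperatures are assumptions of ours.

```python
STEFAN_BOLTZMANN = 5.67e-8  # W/(m^2 K^4)

def u_value_ir(t_in, t_out, t_surface, t_reflected, emissivity, h_conv):
    """Estimate a wall's overall U-value (W/m^2K) by equating conductive
    loss through the wall to convective plus radiative loss from the
    exterior surface. All temperatures are in kelvin."""
    q_conv = h_conv * (t_surface - t_out)
    q_rad = emissivity * STEFAN_BOLTZMANN * (t_surface ** 4 - t_reflected ** 4)
    return (q_conv + q_rad) / (t_in - t_out)

# illustrative winter reading: interior 21 C, outdoor air -5 C,
# exterior wall surface -3.2 C, reflected temperature taken as air temperature
u = u_value_ir(294.15, 268.15, 269.95, 268.15, 0.90, 25.0)
```

A practitioner would substitute field-measured surface, reflected, and air temperatures; the estimate is sensitive to the assumed convective coefficient, which is one reason estimates of this kind spread over a wide range.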

Keywords: emissivity, heat loss, infrared thermography, thermal conductance

Procedia PDF Downloads 313
702 Broad Survey of Fine Root Traits to Investigate the Root Economic Spectrum Hypothesis and Plant-Fire Dynamics Worldwide

Authors: Jacob Lewis Watts, Adam F. A. Pellegrini

Abstract:

Prairies, grasslands, and forests cover an expansive portion of the world’s surface and contribute significantly to Earth’s carbon cycle. The largest driver of carbon dynamics in some of these ecosystems is fire. As the global climate changes, most fire-dominated ecosystems will experience increased fire frequency and intensity, leading to increased carbon flux into the atmosphere and soil nutrient depletion. The plant communities associated with different fire regimes are important for reassimilation of carbon lost during fire and for soil recovery. More frequent fires promote conservative plant functional traits aboveground; however, belowground fine root traits are poorly explored and arguably more important drivers of ecosystem function, as the root is the primary interface between the soil and the plant. The root economic spectrum (RES) hypothesis describes single-dimensional covariation between important fine-root traits along a range of plant strategies from acquisitive to conservative – parallel to the well-established leaf economic spectrum (LES). However, because of the paucity of root trait data, the complex nature of the rhizosphere, and the phylogenetic conservatism of root traits, it is unknown whether the RES hypothesis accurately describes plant nutrient and water acquisition strategies. This project utilizes plants grown in common garden conditions in the Cambridge University Botanic Garden and a meta-analysis of long-term fire manipulation experiments to examine the belowground physiological traits of fire-adapted and non-fire-adapted herbaceous species to 1) test the RES hypothesis and 2) describe the effect of fire regimes on fine root functional traits – which in turn affect carbon and nutrient cycling. A suite of morphological, chemical, and biological root traits (e.g., root diameter, specific root length, percent N, percent mycorrhizal colonization) of 50 herbaceous species were measured and tested for phylogenetic conservatism and RES dimensionality.
Fire-adapted and non-fire-adapted plant traits were compared using phylogenetic PCA techniques. Preliminary evidence suggests that phylogenetic conservatism may weaken the single-dimensionality of the RES, implying that there may not be a single way that plants optimize nutrient and water acquisition and storage in the complex rhizosphere. Additionally, fire-adapted species are expected to be more conservative than non-fire-adapted species, which may be indicative of slower carbon cycling with increasing fire frequency and intensity.

Keywords: climate change, fire regimes, root economic spectrum, fine roots

Procedia PDF Downloads 124
701 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images

Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir

Abstract:

The landing phase of a UAV is very critical, as there are many uncertainties in this phase which can easily result in a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, one of the most important processes during the landing phase, is studied. Accurate measurement sensors can be very expensive (e.g., LIDAR) or have a limited operational range (e.g., ultrasonic sensors). Additionally, absolute positioning systems like GPS or IMU cannot provide distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable features in a series of images taken during the UAV landing. Two different approaches based on the Extended Kalman Filter (EKF) are proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process model and the calculated optical flow as the measurement; the second approach uses the feature’s projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of the projected point’s variation as the process model to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed was used to compare the performance of the proposed algorithms. The case studies show that the image quality introduces considerable noise, which reduces the performance of the first approach.
On the other hand, using the projected feature position is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
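The second approach can be sketched as a tiny two-state EKF. This is only an illustration of the idea, not the authors' implementation: it assumes a constant descent rate, a pinhole camera observing a ground feature of known physical size (so the measured pixel width is inversely proportional to distance), and made-up noise parameters.

```python
def ekf_landing(measurements, dt=0.1, f_w=400.0):
    """Two-state EKF: x = [distance to ground d (m), vertical rate v (m/s)].
    Measurement model: pixel width z = f_w / d, where f_w lumps together
    the focal length in pixels and the feature's physical size."""
    x = [8.0, 0.0]                      # deliberately poor initial guess
    P = [[4.0, 0.0], [0.0, 1.0]]        # initial covariance
    Q = [[1e-4, 0.0], [0.0, 1e-3]]      # process noise
    R = 1.0                             # pixel-noise variance
    for z in measurements:
        # predict with constant-velocity kinematics: d <- d + v*dt
        x = [x[0] + x[1] * dt, x[1]]
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + Q[0][0]
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + Q[1][1]
        P = [[p00, p01], [p10, p11]]
        # update: linearize h(d) = f_w / d around the prediction
        h = f_w / x[0]
        H0 = -f_w / (x[0] ** 2)         # dh/dd (dh/dv = 0)
        S = H0 * P[0][0] * H0 + R
        K = [P[0][0] * H0 / S, P[1][0] * H0 / S]
        y = z - h
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0] * H0) * P[0][0], (1 - K[0] * H0) * P[0][1]],
             [P[1][0] - K[1] * H0 * P[0][0], P[1][1] - K[1] * H0 * P[0][1]]]
    return x

# synthetic descent: start at 10 m, constant -0.5 m/s, noise-free pixel widths
zs, d = [], 10.0
for _ in range(100):
    d += -0.5 * 0.1
    zs.append(400.0 / d)
d_est, v_est = ekf_landing(zs)
```

Even starting from a wrong altitude guess, the filter converges because the pixel measurement strongly constrains distance, and the velocity is inferred from the cross-covariance built up by the prediction step.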

Keywords: altitude estimation, drone, image processing, trajectory planning

Procedia PDF Downloads 113
700 Prediction Model of Body Mass Index of Young Adult Students of Public Health Faculty of University of Indonesia

Authors: Yuwaratu Syafira, Wahyu K. Y. Putra, Kusharisupeni Djokosujono

Abstract:

Background/Objective: Body Mass Index (BMI) serves various purposes, including measuring the prevalence of obesity in a population and formulating a patient’s diet at a hospital, and can be calculated with the equation BMI = body weight (kg)/body height (m)². However, the BMI of an individual with difficulties in carrying their weight or standing up straight cannot necessarily be measured this way. The aim of this study was to form a prediction model for the BMI of young adult students of the Public Health Faculty of the University of Indonesia. Subject/Method: This study used a cross-sectional design, with a total sample of 132 respondents, consisting of 58 males and 74 females aged 21-30. The dependent variable of this study was BMI, and the independent variables consisted of sex and anthropometric measurements, which included ulna length, arm length, tibia length, knee height, mid-upper arm circumference, and calf circumference. Anthropometric information was measured and recorded in a single sitting. Simple and multiple linear regression analyses were used to create the prediction equation for BMI. Results: The male respondents had an average BMI of 24.63 kg/m² and the female respondents an average of 22.52 kg/m². A total of 17 variables were analysed for their correlation with BMI. Bivariate analysis showed that the variable with the strongest correlation with BMI was Mid-Upper Arm Circumference/√Ulna Length (MUAC/√UL) (r = 0.926 for males and r = 0.886 for females). Furthermore, MUAC alone also has a very strong correlation with BMI (r = 0.913 for males and r = 0.877 for females). Prediction models formed from either MUAC/√UL or MUAC alone both produce highly accurate predictions of BMI. However, measuring MUAC/√UL is considered inconvenient, which may cause difficulties when applied in the field.
Conclusion: The prediction models considered most ideal to estimate BMI are: male BMI (kg/m²) = 1.109(MUAC (cm)) – 9.202 and female BMI (kg/m²) = 0.236 + 0.825(MUAC (cm)), based on their high accuracy and the convenience of measuring MUAC in the field.
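The two regression equations in the conclusion can be applied directly; a minimal sketch (the function names are ours, the coefficients are from the abstract):

```python
def bmi_male(muac_cm):
    """Predicted BMI (kg/m^2) for males from mid-upper arm circumference in cm."""
    return 1.109 * muac_cm - 9.202

def bmi_female(muac_cm):
    """Predicted BMI (kg/m^2) for females from mid-upper arm circumference in cm."""
    return 0.236 + 0.825 * muac_cm

# a male respondent with a MUAC of 30.5 cm
print(round(bmi_male(30.5), 2))   # 24.62, near the reported male sample mean of 24.63
```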

Keywords: body mass index, mid-upper arm circumference, prediction model, ulna length

Procedia PDF Downloads 215
699 Delineating Floodplain along the Nasia River in Northern Ghana Using HAND Contour

Authors: Benjamin K. Ghansah, Richard K. Appoh, Iliya Nababa, Eric K. Forkuo

Abstract:

The Nasia River is an important source of water for domestic and agricultural purposes for the inhabitants of its catchment. Major farming activities take place within the floodplain of the river and its network of tributaries. The actual inundation extent of the river system is, however, unknown. Reasons for this lack of information include financial constraints and inadequate human resources, as flood modelling is becoming increasingly complex. Knowledge of the inundation extent will help in the assessment of the risk posed by the annual flooding of the river and in the planning of flood recession agricultural activities. This study used a simple terrain-based algorithm, Height Above Nearest Drainage (HAND), to delineate the floodplain of the Nasia River and its tributaries. The HAND model is a drainage-normalized digital elevation model, which has its height reference based on the local drainage systems rather than the average mean sea level (AMSL). The underlying principle guiding the development of the HAND model is that hillslope flow paths behave differently when the reference gradient is to the local drainage network rather than the seaward gradient. The new terrain model of the catchment was created using NASA’s SRTM Digital Elevation Model (DEM) at 30 m resolution as the only data input. Contours (HAND contours) were then generated from the normalized DEM. Based on a field flood inundation survey, historical information on flooding of the area, and satellite images, a HAND contour of 2 m was found to correlate best with the flood inundation extent of the river and its tributaries. A percentage accuracy of 75% was obtained when the surface area enclosed by the 2 m contour was compared with the surface area of the floodplain computed from a satellite image captured during the peak flooding season in September 2016. It was estimated that the flooding of the Nasia River and its tributaries created a floodplain area of 1011 km².
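The HAND thresholding step can be illustrated on a toy grid. Note this is a deliberate simplification of the published method: true HAND references each cell to the drainage cell reached along its steepest-descent flow path, whereas the sketch below substitutes the Euclidean-nearest drainage cell for brevity.

```python
def hand_flood_mask(dem, drainage, threshold=2.0):
    """Flag cells whose height above the nearest drainage cell is within
    the threshold. 'Nearest' here is Euclidean, a simplification: the full
    HAND model follows flow paths to the local drainage network."""
    rows, cols = len(dem), len(dem[0])
    drains = [(r, c) for r in range(rows) for c in range(cols) if drainage[r][c]]
    mask = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            nr, nc = min(drains, key=lambda rc: (rc[0] - r) ** 2 + (rc[1] - c) ** 2)
            mask[r][c] = (dem[r][c] - dem[nr][nc]) <= threshold
    return mask

# toy valley: a channel (column 2) flanked by rising banks, heights in metres
dem = [[5.0, 3.0, 1.0, 3.0, 5.0],
       [5.5, 3.5, 1.2, 3.5, 5.5]]
drainage = [[col == 2 for col in range(5)] for _ in range(2)]
mask = hand_flood_mask(dem, drainage, threshold=2.0)
```

With a 2 m threshold, as in the study, only the cells within 2 m of the channel's elevation are flagged as floodplain.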

Keywords: digital elevation model, floodplain, HAND contour, inundation extent, Nasia River

Procedia PDF Downloads 457
698 Artificial Neural Networks and Hidden Markov Model in Landslides Prediction

Authors: C. S. Subhashini, H. L. Premaratne

Abstract:

Landslides are the most recurrent and prominent disaster in Sri Lanka. Sri Lanka has been subjected to a number of extreme landslide disasters that resulted in significant loss of life, material damage, and distress. It is necessary to explore solutions for preparedness and mitigation to reduce the recurrent losses associated with landslides. Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs) are now widely used in many computer applications spanning multiple domains. This research examines the effectiveness of Artificial Neural Networks and Hidden Markov Models in landslide prediction and the possibility of applying this technology to predict landslides in a prominent geographical area of Sri Lanka. A thorough survey was conducted with the participation of resource persons from several national universities in Sri Lanka to identify and rank the factors influencing landslides. A landslide database was created using existing topographic, soil, drainage, and land cover maps and historical data. The landslide-related factors, which include external factors (rainfall and number of previous occurrences) and internal factors (soil material, geology, land use, curvature, soil texture, slope, aspect, soil drainage, and soil effective thickness), are extracted from the landslide database. These factors are used to estimate the likelihood of landslide occurrence using an ANN and an HMM. Each model acquires the relationship between the landslide factors and the hazard index during the training session. The models, with landslide-related factors as inputs, are trained to predict three classes, namely ‘landslide occurs’, ‘landslide does not occur’, and ‘landslide likely to occur’. Once trained, the models are able to predict the most likely class for the prevailing data.
Finally, the two models were compared with regard to prediction accuracy, false acceptance rate, and false rejection rate. This research indicates that the Artificial Neural Network can serve as a strong decision support system, predicting landslides more efficiently and effectively than the Hidden Markov Model.

Keywords: landslides, influencing factors, neural network model, hidden markov model

Procedia PDF Downloads 385
697 Enhanced Field Emission from Plasma Treated Graphene and 2D Layered Hybrids

Authors: R. Khare, R. V. Gelamo, M. A. More, D. J. Late, Chandra Sekhar Rout

Abstract:

Graphene emerges as a promising material for various applications ranging from complementary integrated circuits to optically transparent electrodes for displays and sensors. The excellent conductivity and atomically sharp edges of its unique two-dimensional structure make graphene a promising field emitter. Graphene analogues of other 2D layered materials have emerged in materials science and nanotechnology due to the enriched physics and novel enhanced properties they present. There are several advantages to using 2D nanomaterials in field emission based devices, including a thickness of only a few atomic layers, high aspect ratio (the ratio of lateral size to sheet thickness), excellent electrical properties, extraordinary mechanical strength, and ease of synthesis. Furthermore, the presence of edges can enhance the tunneling probability for the electrons in layered nanomaterials, similar to that seen in nanotubes. Here we report the electron emission properties of multilayer graphene and the effect of plasma (CO₂, O₂, Ar, and N₂) treatment. The plasma-treated multilayer graphene shows enhanced field emission behavior with a low turn-on field of 0.18 V/μm and a high emission current density of 1.89 mA/cm² at an applied field of 0.35 V/μm. Further, we report field emission studies of layered WS₂/RGO and SnS₂/RGO composites. The turn-on field required to draw a field emission current density of 1 μA/cm² is found to be 3.5, 2.3, and 2 V/μm for WS₂, RGO, and the WS₂/RGO composite, respectively. The enhanced field emission behavior observed for the WS₂/RGO nanocomposite is attributed to a high field enhancement factor of 2978, which is associated with the surface protrusions of the single-to-few-layer-thick sheets of the nanocomposite. The highest current density of ~800 µA/cm² is drawn at an applied field of 4.1 V/μm from a few layers of the WS₂/RGO nanocomposite.
Furthermore, first-principles density functional calculations suggest that the enhanced field emission may also be due to an overlap of the electronic structures of WS₂ and RGO, where graphene-like states lie in the region of the WS₂ fundamental gap. Similarly, the turn-on field required to draw an emission current density of 1 µA/cm² is significantly lower (almost half the value) for the SnS₂/RGO nanocomposite (2.65 V/µm) compared to pristine SnS₂ nanosheets (4.8 V/µm). The field enhancement factor β (~3200 for SnS₂ and ~3700 for the SnS₂/RGO composite) was calculated from Fowler-Nordheim (FN) plots and indicates emission from the nanometric geometry of the emitter. The field emission current versus time plot shows overall good emission stability for the SnS₂/RGO emitter. The DFT calculations reveal that the enhanced field emission properties of SnS₂/RGO composites result from a substantial lowering of the work function of SnS₂ when supported by graphene, in response to p-type doping of the graphene substrate. Graphene and 2D analogue materials thus emerge as potential candidates for future field emission applications.
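The enhancement factors quoted above come from the slope of a Fowler-Nordheim plot, ln(J/E²) against 1/E, whose slope is −Bφ^(3/2)/β. A sketch of that extraction on synthetic data (the work function and currents below are illustrative values, not the paper's measurements):

```python
import math

B_FN = 6.83e3  # Fowler-Nordheim constant, V * eV^(-3/2) / um (for E in V/um)

def beta_from_fn(fields, currents, work_function_ev):
    """Field enhancement factor from the slope of the FN plot
    ln(J/E^2) versus 1/E, fitted by ordinary least squares."""
    xs = [1.0 / e for e in fields]
    ys = [math.log(j / e ** 2) for j, e in zip(currents, fields)]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -B_FN * work_function_ev ** 1.5 / slope

# synthetic J-E data generated with a known enhancement factor
phi, beta_true = 5.0, 3000.0            # work function (eV); beta (dimensionless)
fields = [2.0, 2.5, 3.0, 3.5, 4.0]      # applied fields, V/um
slope = -B_FN * phi ** 1.5 / beta_true
currents = [e ** 2 * math.exp(-10.0 + slope / e) for e in fields]
beta_est = beta_from_fn(fields, currents, phi)
```

Because the synthetic data are exactly FN-linear, the fit recovers the β used to generate them; real J-E data would scatter about the line.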

Keywords: graphene, layered material, field emission, plasma, doping

Procedia PDF Downloads 361
696 Understanding Evidence Dispersal Caused by the Effects of Using Unmanned Aerial Vehicles in Active Indoor Crime Scenes

Authors: Elizabeth Parrott, Harry Pointon, Frederic Bezombes, Heather Panter

Abstract:

Unmanned aerial vehicles (UAVs) are having a profound effect on policing, forensic, and fire service procedures worldwide. These intelligent devices have already proven useful in photographing and recording large-scale outdoor and indoor sites using orthomosaic and three-dimensional (3D) modelling techniques, for the purpose of capturing and recording sites during and post-incident. UAVs are becoming an established tool as they extend the reach of the photographer and offer new perspectives without the expense and restrictions of deploying full-scale aircraft. 3D reconstruction quality is directly linked to the resolution of captured images; therefore, close-proximity flights are required for more detailed models. As technology advances, deployment of UAVs in confined spaces is becoming more common. With this in mind, this study investigates the effects of UAV operation within active crime scenes with regard to the dispersal of particulate evidence. To date, little consideration has been given to the potential effects of using UAVs within active crime scenes aside from a legislative point of view. Although the technology can potentially reduce the likelihood of contamination by replacing some of the roles of investigating practitioners, there is a risk of evidence dispersal caused by the strong airflow beneath the UAV, from the downwash of the propellers. The initial results of this study are therefore presented to determine the flight height of least effect and the commercial propeller type that generates the smallest amount of disturbance in the dataset tested. In this study, a range of commercially available 4-inch propellers were chosen as a starting point, due to their common availability and because their small size makes them well suited to operation within confined spaces.
To perform the testing, a rig was configured to support a single motor and propeller, powered by a standalone mains power supply and controlled via a microcontroller programmed to mimic a complete throttle cycle, ensuring repeatability. Removing the variances of battery packs and complex UAV structures allowed for a more robust setup, so the only changing factors were the propeller and the operating height. The results were calculated via computer vision analysis of the recorded dispersal of the sample particles placed below the arm-mounted propeller. The aim of this initial study is to give practitioners an insight into the technology to use when operating within confined spaces, as well as to highlight some of the issues caused by UAVs within active crime scenes.

Keywords: dispersal, evidence, propeller, UAV

Procedia PDF Downloads 163
695 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model

Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova

Abstract:

The contamination of food by microbial agents is a common problem in the industry, especially in the elaboration of animal source products. Incorrect manipulation of the machinery or of the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce or at least slow down the growth of pathogens, especially deteriorating, infectious, or toxigenic bacteria. These methods are usually carried out under low temperatures and short processing times (abiotic agents), along with the application of antibacterial substances such as bacteriocins (biotic agents), in a controlled and efficient way that fulfills the purpose of bacterial control without damaging the final product. The objective of the present study is therefore to design a secondary mathematical model that allows the prediction of the impact of both the biotic and abiotic factors associated with animal source food processing. To accomplish this objective, the authors propose a three-dimensional differential equation model whose components are: bacterial growth; release, production, and artificial incorporation of bacteriocins; and changes in the pH level of the medium. These three dimensions are constantly influenced by the temperature of the medium. Secondly, this model is adapted to an idealized situation of cross-contamination during animal source food processing, with the study agents being both the animal product and the contact surface. Thirdly, stochastic simulations and a parametric sensitivity analysis are compared with reference data. The main result obtained from the analysis and simulations of the mathematical model is that, although bacterial growth can be stopped at low temperatures, even lower ones are needed to eradicate it.
However, this can be not only expensive but also counterproductive in terms of the quality of the raw materials, while, on the other hand, higher temperatures accelerate bacterial growth. In other respects, the use of bacteriocins is an effective alternative in the short and medium term. Moreover, a low pH level is an indicator of bacterial growth, since many deteriorating bacteria are lactic acid bacteria. Lastly, processing times are a secondary agent of concern when the rest of the aforementioned agents are under control. Our main conclusion is that acclimating a mathematical model to the context of the industrial process can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing method times. In addition, the proposed mathematical model provides a logistic framework of broad application, which can be replicated for non-meat food products, other pathogens, or even contamination by cross-contact with allergenic foods.
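A toy version of such a three-dimensional model can be integrated with forward Euler. Every rate constant below is an illustrative assumption of ours, not a fitted parameter from the study; the point is only the qualitative behavior described above: temperature-modulated logistic growth, a bacteriocin kill term with first-order decay, and acidification coupled to growth.

```python
def simulate(hours=48.0, dt=0.01, temp_c=10.0, bacteriocin_dose=0.0):
    """Forward-Euler integration of a toy 3-state model: bacterial density N
    (CFU-like units), bacteriocin concentration C, and pH of the medium.
    All rate constants are hypothetical, chosen only to show the trends."""
    n, c, ph = 1e3, bacteriocin_dose, 6.5
    mu_max, t_min, k_cap = 0.8, 2.0, 1e9    # growth rate /h, min temp C, capacity
    kill, decay, acid = 0.5, 0.05, 3e-11    # kill /(conc*h), decay /h, pH per cell
    for _ in range(int(hours / dt)):
        mu = max(0.0, mu_max * (temp_c - t_min) / 20.0)  # temperature factor
        growth = mu * n * (1 - n / k_cap)                # logistic growth term
        dn = growth - kill * c * n                       # growth minus kill
        dc = -decay * c                                  # bacteriocin decays
        dph = -acid * growth                             # acidification with growth
        n = max(n + dn * dt, 0.0)
        c += dc * dt
        ph += dph * dt
    return n, c, ph
```

Lower temperature or an initial bacteriocin dose both suppress the final bacterial density, and the pH drifts downward as the population grows, mirroring the qualitative conclusions of the abstract.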

Keywords: bacteriocins, cross-contamination, mathematical model, temperature

Procedia PDF Downloads 145
694 Seasonal Variability of M₂ Internal Tides Energetics in the Western Bay of Bengal

Authors: A. D. Rao, Sachiko Mohanty

Abstract:

Internal Waves (IWs) are generated by the flow of the barotropic tide over rapidly varying and steep topographic features such as continental shelf slopes, subsurface ridges, and seamounts. IWs of tidal frequency are generally known as internal tides. These waves have a significant influence on the vertical density structure and hence cause mixing in the region. Such waves are also important in submarine acoustics, underwater navigation, offshore structures, ocean mixing, and biogeochemical processes over the shelf-slope region. The seasonal variability of internal tides in the Bay of Bengal, with special emphasis on their energetics, is examined using the three-dimensional MITgcm model. The numerical simulations are performed for different periods covering August-September 2013, November-December 2013, and March-April 2014, representing the monsoon, post-monsoon, and pre-monsoon seasons, respectively, during which high-temporal-resolution in-situ data sets are available. The model is first validated through spectral estimates of density and the baroclinic velocities. From these estimates, it is inferred that internal tides of semi-diurnal frequency are dominant in both observations and model simulations for November-December and March-April. In August, however, the spectral maximum is found near the inertial frequency at all available depths. The observed vertical structure of the baroclinic velocities and their magnitudes are well captured by the model. EOF analysis is performed to decompose the zonal and meridional baroclinic tidal currents into different vertical modes. The analysis suggests that about 70-80% of the total variance comes from the Mode-1 semi-diurnal internal tide in both the observations and the model simulations. The first three modes are sufficient to describe most of the variability of the semi-diurnal internal tides, as they represent 90-95% of the total variance for all seasons.
The phase speed, group speed, and wavelength are found to be maximum in the post-monsoon season compared to the other two seasons. The model simulation suggests that the internal tide is generated all along the shelf-slope regions and propagates away from the generation sites in all months. The simulated energy dissipation rate indicates that its maximum occurs at the generation sites, and hence local mixing due to the internal tide is greatest there. The spatial distribution of available potential energy is found to be maximum in November (20 kg/m²) in the northern BoB and minimum in August (14 kg/m²). Detailed energy budget calculations are made for all seasons and the results are analysed.

Keywords: available potential energy, baroclinic energy flux, internal tides, Bay of Bengal

Procedia PDF Downloads 170
693 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine

Authors: Hira Lal Gope, Hidekazu Fukai

Abstract:

The aim of this study is to develop a system which can identify and sort peaberries automatically at low cost for coffee producers in developing countries. In this paper, the focus is on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. The peaberry is not a defective bean, but neither is it a normal bean: it forms as a single, relatively round seed in a coffee cherry instead of the usual flat-sided pair of beans, and it has a different value and flavor. To improve the taste of the coffee, it is necessary to separate the peaberries from the normal beans before roasting the green coffee beans; otherwise, the flavors will be mixed, and the taste will suffer. During roasting, all the beans should be uniform in shape, size, and weight; otherwise, the larger beans take more time to roast through. The peaberry has a different size and shape even though it has the same weight as a normal bean, and it roasts more slowly than the other beans, so sorting by size or weight alone does not provide a good option for selecting peaberries. Defective beans, e.g., sour, broken, black, and faded beans, are easy to check and pick out manually by hand. On the other hand, picking out peaberries is very difficult even for trained specialists, because the shape and color of the peaberry are similar to those of normal beans. In this study, we use image processing and machine learning techniques to discriminate between normal and peaberry beans as part of the sorting system. As a first step, we applied Deep Convolutional Neural Networks (CNN) and Support Vector Machines (SVM) as machine learning techniques to discriminate between peaberry and normal beans. As a result, better performance was obtained with CNN than with SVM for the discrimination of peaberries. The artificial neural network trained with a high-performance CPU and GPU in this work will simply be installed into an inexpensive, computationally limited Raspberry Pi system. We assume that this system will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.

Keywords: convolutional neural networks, coffee bean, peaberry, sorting, support vector machine

Procedia PDF Downloads 145
692 Optical and Near-UV Spectroscopic Properties of Low-Redshift Jetted Quasars in the Main Sequence Context

Authors: Shimeles Terefe Mengistue, Ascensión Del Olmo, Paola Marziani, Mirjana Pović, María Angeles Martínez-Carballo, Jaime Perea, Isabel M. Árquez

Abstract:

Quasars have historically been classified into two distinct classes, radio-loud (RL) and radio-quiet (RQ), according to the presence or absence of relativistic radio jets. The lack of spectra with a high S/N ratio long gave the impression that all quasars (QSOs) are spectroscopically similar. Although different attempts have been made to unify these two classes, there is a long-standing open debate over the possibility of a real physical dichotomy between RL and RQ quasars. In this work, we present new high-S/N spectra of 11 extremely powerful jetted quasars with radio-to-optical flux density ratio > 1000 that concomitantly cover the low-ionization emission of MgIIλ2800 and Hβ as well as the FeII blends in the redshift range 0.35 < z < 1, observed at Calar Alto Observatory (Spain). This work aims to quantify broad emission line differences between RL and RQ quasars by using the four-dimensional eigenvector 1 (4DE1) parameter space and its main sequence (MS) and to check the effect of powerful radio ejection on the low-ionization broad emission lines. Emission lines are analysed with two complementary approaches: a multicomponent non-linear fit that accounts for the individual components of the broad emission lines, and an analysis of the full line profile through parameters such as total width, centroid velocity at different fractional intensities, asymmetry, and kurtosis indices. We find that the broad emission lines show large redward asymmetry in both Hβ and MgIIλ2800. The location of our RL sources in the UV plane looks similar to the optical one, with weak FeII UV emission and broad MgIIλ2800. We supplement the 11 sources with large samples from previous work to draw more general inferences.
The results show that, compared to RQ quasars, our extreme RL quasars show a larger median Hβ full width at half maximum (FWHM), weaker FeII emission, larger 𝑀BH, lower 𝐿bol/𝐿Edd, and a restricted space occupation in the optical and UV MS planes. The differences become more elusive when the comparison is restricted to the region of the MS occupied by RL quasars, albeit an unbiased comparison matching 𝑀BH and 𝐿bol/𝐿Edd suggests that the most powerful RL quasars show the highest redward asymmetries in Hβ.
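The full-profile parameters mentioned above (FWHM, centroid offsets, asymmetry) can be extracted numerically from a sampled line profile. A minimal sketch, using one common convention for the asymmetry index (offset of the half-maximum midpoint from the peak, normalised by the FWHM) — conventions vary between studies, so this is illustrative rather than the authors' exact definition:

```python
import numpy as np

def line_profile_params(wave, flux):
    """FWHM and a simple asymmetry index of an emission-line profile
    sampled on an increasing wavelength grid (assumes one clean peak)."""
    peak = int(np.argmax(flux))
    half = flux[peak] / 2.0
    # half-maximum crossings located by linear interpolation on each wing
    blue = np.interp(half, flux[:peak + 1], wave[:peak + 1])
    red = np.interp(half, flux[peak:][::-1], wave[peak:][::-1])
    fwhm = red - blue
    asym = ((red + blue) / 2.0 - wave[peak]) / fwhm  # >0 means redward shift
    return fwhm, asym
```

For a symmetric Gaussian the index is zero; a red wing (as reported for Hβ in the powerful RL sources) pushes it positive.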

Keywords: galaxies: active, line: profiles, quasars: emission lines, quasars: supermassive black holes

Procedia PDF Downloads 60
691 Simulation of Scaled Model of Tall Multistory Structure: Raft Foundation for Experimental and Numerical Dynamic Studies

Authors: Omar Qaftan

Abstract:

Earthquakes can cause tremendous loss of human life and severe damage to many civil engineering structures, especially tall buildings. The response of a multistory structure subjected to earthquake loading is complex, and it needs to be studied by both physical and numerical modelling. In many circumstances, scale models on a shaking table can be a more economical option than equivalent full-scale tests. A shaking table apparatus is a powerful tool that offers the possibility of understanding the actual behaviour of structural systems under earthquake loading. A set of scaling relations is required to predict the behaviour of the full-scale structure, and selecting the scale factors is one of the most important steps in simulating the prototype with a scaled model. In this paper, the principles of the scale-modelling procedure are explained in detail, and the simulation of a scaled multi-storey concrete structure for dynamic studies is investigated. A complete dynamic simulation analysis is carried out experimentally and numerically with a scale factor of 1/50. The frequency-domain response and lateral displacements of both the numerical and experimental scaled models are determined. The procedure accounts for the actual dynamic behaviour of both the full-size prototype structure and the scaled model, and it is adapted to determine the effects of a tall multi-storey structure on a raft foundation. Four generated accelerograms complying with EC8 were used as inputs for the time-history motions. The experimental output, expressed in terms of displacements and accelerations, is compared with that obtained from a conventional fixed-base numerical model. The four time histories were applied to both the experimental and numerical models, and the experimental model showed acceptable output accuracy compared with the numerical model.
Therefore, this modelling methodology is valid and suitable for a range of shaking-table experiments.
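For gravity-dominated response with an acceleration scale of 1 (the usual shaking-table assumption), the 1/50 length scale fixes the remaining similitude factors. The sketch below lists one such consistent set of prototype-to-model ratios; it is a generic textbook set under the stated assumption, and material-dependent factors (stress, mass density) are deliberately omitted since they depend on the model material chosen:

```python
import math

def similitude_factors(length_scale: float) -> dict:
    """Prototype/model scale factors for a geometrically scaled dynamic
    model with acceleration scale = 1 (gravity-consistent similitude).
    length_scale = prototype length / model length, e.g. 50 for 1/50."""
    lam = length_scale
    return {
        "length": lam,
        "displacement": lam,
        "acceleration": 1.0,
        # a = L / t^2 and a-scale = 1  =>  t-scale = sqrt(L-scale)
        "time": math.sqrt(lam),
        "velocity": math.sqrt(lam),
        "frequency": 1.0 / math.sqrt(lam),  # model vibrates faster
    }
```

For a 1/50 model this gives a time scale of about 7.07, i.e. the model's natural frequencies are roughly 7 times higher than the prototype's.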

Keywords: structure, raft, soil, interaction

Procedia PDF Downloads 136
690 Comparison of 18F-FDG and 11C-Methionine PET-CT for Assessment of Response to Neoadjuvant Chemotherapy in Locally Advanced Breast Carcinoma

Authors: Sonia Mahajan Dinesh, Anant Dinesh, Madhavi Tripathi, Vinod Kumar Ramteke, Rajnish Sharma, Anupam Mondal

Abstract:

Background: Neo-adjuvant chemotherapy plays an important role in the treatment of breast cancer by decreasing the tumour load, and it offers an opportunity to evaluate the response of the primary tumour to chemotherapy. Standard anatomical imaging modalities are unable to accurately reflect the response to chemotherapy until several cycles of drug treatment have been completed. Metabolic imaging using tracers such as 18F-fluorodeoxyglucose (FDG), a marker of glucose metabolism, or amino acid tracers such as L-methyl-11C methionine (MET) has a potential role in the measurement of treatment response. In this study, our objective was to compare these two PET tracers for the assessment of response to neoadjuvant chemotherapy in locally advanced breast carcinoma. Methods: In our prospective study, 20 female patients with histologically proven locally advanced breast carcinoma underwent PET-CT imaging using FDG and MET before and after three cycles of neoadjuvant chemotherapy (CAF regimen). Thereafter, all patients underwent MRM, and the resected specimen was sent for histopathological analysis. Tumour response to neoadjuvant chemotherapy was evaluated by PET-CT imaging using PERCIST criteria and correlated with the histological results. The calculated responses were compared for statistical significance using a paired t-test. Results: Mean SUVmax for the primary lesion on FDG and MET PET was 15.88±11.12 and 5.01±2.14, respectively (p<0.001), and for axillary lymph nodes 7.61±7.31 and 2.75±2.27, respectively (p=0.001). A statistically significant response in the primary tumour and axilla was noted on both FDG and MET PET after three cycles of NAC. Complete response of the primary tumour was seen in only 1 patient on FDG and in 7 patients on MET PET (p=0.001), whereas no patient showed complete histological resolution of the tumour. Response to therapy in the axillary nodes was similar on both PET scans (p=0.45) and correlated well with the histological findings.
Conclusions: For the primary breast tumour, FDG PET has higher sensitivity and accuracy than MET PET, and for the axilla both have comparable sensitivity and specificity. FDG PET shows higher target-to-background ratios, so response is better predicted for the primary breast tumour and axilla. Also, FDG PET is widely available and has the advantage of whole-body evaluation in one study.
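The statistical comparison reported above is a paired t-test on SUVmax measured in the same patients before and after chemotherapy. A minimal sketch of the statistic (the numbers in the example are invented for illustration, not the study's data):

```python
import numpy as np

def paired_t(pre, post):
    """Paired t statistic and degrees of freedom for per-patient
    pre- vs post-therapy measurements (e.g. SUVmax)."""
    d = np.asarray(pre, dtype=float) - np.asarray(post, dtype=float)
    n = d.size
    # t = mean difference / standard error of the differences
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return float(t), n - 1
```

The resulting t value is compared against the t distribution with n − 1 degrees of freedom to obtain the p-values quoted in the abstract.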

Keywords: 11C-methionine, 18F-FDG, breast carcinoma, neoadjuvant chemotherapy

Procedia PDF Downloads 510
689 Development of Pothole Management Method Using Automated Equipment with Multi-Beam Sensor

Authors: Sungho Kim, Jaechoul Shin, Yujin Baek, Nakseok Kim, Kyungnam Kim, Shinhaeng Jo

Abstract:

Climate change and the increase in heavy traffic have been accelerating the damage that causes problems such as potholes on asphalt pavement. Potholes cause traffic accidents, vehicle damage, road casualties, and traffic congestion. A quick and efficient maintenance method is needed, because potholes originate from stripping and accelerate pavement distress. In this study, we propose rapid and systematic pothole management based on automated repair equipment that includes a pothole volume measurement system. Three kinds of cold-mix asphalt mixture were investigated to select the repair material; the materials were evaluated for compliance with quality standards and applicability to the automated equipment. The pothole volume measurement system is composed of multiple sensors, combining a laser sensor and an ultrasonic sensor, installed at the front and side of the automated repair equipment. An algorithm was proposed to calculate the amount of repair material according to the measured pothole volume, and a system for releasing the correct amount of material was developed. Field test results showed that the loss of repair material could be reduced from approximately 20% to 6% per pothole. Rapid automated pothole repair equipment will contribute to improved quality and to efficient, economical maintenance, not only by saving materials and resources but also by dispensing the appropriate amount of material. Through field application, it is possible to improve the accuracy of pothole volume measurement, to refine the calculation of material amount, and to manage pothole data for the road network, thereby enabling more efficient pavement maintenance management. Acknowledgment: The authors would like to thank the MOLIT (Ministry of Land, Infrastructure, and Transport). This work was carried out through a project funded by the MOLIT.
The project name is 'development of 20mm grade for road surface detecting roadway condition and rapid detection automation system for removal of pothole'.
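The material-amount algorithm reduces to integrating the sensor depth map into a volume and converting it to a mass with a material density and a small loss allowance. The density and the 6% loss factor below are illustrative assumptions, not the study's calibrated values:

```python
import numpy as np

def repair_material_kg(depth_map_m, cell_area_m2,
                       density_kg_m3=2350.0, loss_factor=1.06):
    """Estimate cold-mix mass from a multi-beam depth map of a pothole.

    depth_map_m : 2-D array of measured depths (m) below pavement level,
                  one value per sensor cell; negative readings are clipped.
    cell_area_m2: footprint of one measurement cell (m^2).
    density and the 6 % loss allowance are illustrative assumptions."""
    volume_m3 = float(np.clip(depth_map_m, 0.0, None).sum()) * cell_area_m2
    return volume_m3 * density_kg_m3 * loss_factor
```

With a 10 × 10 grid of 10 cm cells at a uniform 5 cm depth, the estimated volume is 0.05 m³, which the function converts to roughly 125 kg of mix.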

Keywords: automated equipment, management, multi-beam sensor, pothole

Procedia PDF Downloads 224
688 Practical Software for Optimum Bore Hole Cleaning Using Drilling Hydraulics Techniques

Authors: Abdulaziz F. Ettir, Ghait Bashir, Tarek S. Duzan

Abstract:

Proper well planning is vital to any successful drilling program; it is the basis for preventing or overcoming drilling problems and minimizing operating costs. The hydraulic system plays an active role during drilling operations: a well-designed system accelerates the drilling effort and lowers the overall well cost, whereas an improperly designed hydraulic system can slow the drill rate, fail to clean the hole of cuttings, and cause kicks. In most cases, common sense and commercially available computer programs are the only elements required to design the hydraulic system. Drilling optimization is the logical process of analyzing the effects and interactions of drilling variables through applied drilling and hydraulic equations and mathematical modeling to achieve maximum drilling efficiency at minimum drilling cost. In this paper, practical software is adopted to define drilling optimization models with four optimum keys, namely Opti-flow, Opti-clean, Opti-slip and Opti-nozzle, that can help achieve high drilling efficiency at lower cost. The data used in this research come from vertical and horizontal wells recently drilled in Waha Oil Company fields. The input data are: formation type, geopressures, hole geometry, bottom-hole assembly, and mud rheology. Upon data analysis, the results from all wells show that the proposed program provides higher accuracy than the company's current approach in terms of hole-cleaning efficiency and cost breakdown, taking the actual well data as the reference base. Finally, it is recommended to use the established optimization software at the drilling design stage to select drilling parameters that provide high drilling efficiency, good borehole cleaning, and hydraulic parameters that help minimize hole problems and control drilling operation costs.
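As a small illustration of the kind of hydraulic relation such software builds on, the sketch below evaluates annular velocity with the common field-unit formula v = 24.5·Q/(Dh² − Dp²) and inverts it for the flow rate needed to reach a hole-cleaning target. This is a generic textbook relation, not code from the Waha Oil Company software:

```python
def annular_velocity_ftmin(flow_gpm: float, hole_in: float, pipe_in: float) -> float:
    """Annular velocity in ft/min from flow rate in gal/min and hole/pipe
    diameters in inches: v = 24.5 * Q / (Dh^2 - Dp^2)."""
    return 24.5 * flow_gpm / (hole_in ** 2 - pipe_in ** 2)

def min_flow_for_cleaning(target_ftmin: float, hole_in: float, pipe_in: float) -> float:
    """Flow rate (gal/min) needed to reach a target annular velocity."""
    return target_ftmin * (hole_in ** 2 - pipe_in ** 2) / 24.5
```

For example, 600 gal/min in an 8.5 in hole around 5 in pipe gives roughly 311 ft/min of annular velocity; a full hole-cleaning check would additionally compare this against a cuttings slip-velocity model.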

Keywords: optimum keys, opti-flow, opti-clean, opti-slip, opti-nozzle

Procedia PDF Downloads 320
687 Poly(Acrylamide-Co-Itaconic Acid) Nanocomposite Hydrogels and Its Use in the Removal of Lead in Aqueous Solution

Authors: Majid Farsadrouh Rashti, Alireza Mohammadinejad, Amir Shafiee Kisomi

Abstract:

Lead (Pb²⁺), a cation, is a prime constituent of the majority of industrial effluents, such as those from mining, smelting and coal combustion, Pb-based paints and Pb-containing pipes in water supply systems, paper and pulp refineries, printing, paints and pigments, explosives manufacturing, storage batteries, and alloy and steel industries. The maximum permissible limit of lead in water used for drinking and domestic purposes is 0.01 mg/L, as advised by the Bureau of Indian Standards (BIS). This is the accepted 'safe' level of lead(II) ions in water, beyond which the water becomes unfit for human use and consumption and can cause health problems such as kidney failure, neuronal disorders, and reproductive infertility. Superabsorbent hydrogels are loosely crosslinked hydrophilic polymers that, in contact with aqueous solution, can easily absorb water and swell to several times their initial volume without dissolving in the aqueous medium. Superabsorbents are a kind of hydrogel capable of swelling and absorbing a large amount of water in their three-dimensional networks. While the shape of ordinary hydrogels does not change extensively during swelling, the shape of superabsorbents changes broadly because of their tremendous swelling capacity. Because of their pronounced response to changing environmental conditions, including temperature, pH, and solvent composition, superabsorbents have attracted interest for numerous industrial applications, for instance those exploiting their water-retention properties. Natural-based superabsorbent hydrogels have attracted much attention in medical, pharmaceutical, baby diaper, agricultural, and horticultural applications because of their non-toxicity, biocompatibility, and biodegradability.
Novel superabsorbent hydrogel nanocomposites were prepared by graft copolymerization of acrylamide and itaconic acid in the presence of nanoclay (laponite), using methylene bisacrylamide (MBA) and potassium persulfate, the former as a crosslinking agent and the latter as an initiator. The structure of the superabsorbent hydrogel nanocomposites was characterized by FTIR spectroscopy, SEM, and TGA, and the adsorption of metal ions on poly(AAm-co-IA) was studied. The equilibrium swelling values of the copolymer were determined by the gravimetric method. During the adsorption of metal ions on the polymer, the residual metal-ion concentration in the solution and the solution pH were measured. The effects of the clay content of the hydrogel on its metal-ion uptake behavior were studied. The nanocomposite hydrogels may be considered good candidates for environmental applications to retain water and remove heavy metals.
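The gravimetric swelling measurement and the metal-uptake bookkeeping described above reduce to two simple formulas. The sketch below is a minimal illustration, not the authors' code, and the example numbers are hypothetical:

```python
def swelling_ratio(dry_g: float, swollen_g: float) -> float:
    """Equilibrium swelling from gravimetric data:
    S = (Ws - Wd) / Wd, in grams of absorbed water per gram of dry gel."""
    return (swollen_g - dry_g) / dry_g

def removal_percent(c0_mg_l: float, ce_mg_l: float) -> float:
    """Pb(II) removal efficiency from initial and residual (equilibrium)
    concentrations: R% = 100 * (C0 - Ce) / C0."""
    return 100.0 * (c0_mg_l - ce_mg_l) / c0_mg_l
```

A 0.5 g dry sample that swells to 60.5 g corresponds to a swelling ratio of 120 g/g, and a drop from 50 mg/L to 5 mg/L of residual Pb²⁺ corresponds to 90% removal.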

Keywords: adsorption, hydrogel, nanocomposite, superabsorbent

Procedia PDF Downloads 189
686 Structural Health Monitoring of Buildings–Recorded Data and Wave Method

Authors: Tzong-Ying Hao, Mohammad T. Rahmani

Abstract:

This article presents a structural health monitoring (SHM) method based on changes in wave travel times (the wave method) within a layered 1-D shear-beam model of a structure. The wave method measures the velocity of the shear wave propagating in a building from the impulse response functions (IRF) obtained from data recorded at different locations inside the building. If structural damage occurs, the velocity of wave propagation through the structure changes. The wave method analysis is performed on the responses of the Torre Central building, a 9-story shear-wall structure located in Santiago, Chile. Because events of different intensity (ambient vibrations, weak and strong earthquake motions) have been recorded at this building, it can serve as a full-scale benchmark to validate the structural health monitoring method. The analysis of inter-story drifts and the Fourier spectra for the EW and NS motions during the 2010 Chile earthquake is presented. The results for the NS motions suggest coupling of the translational and torsional responses. The system frequencies (estimated from the relative displacement response of the 8th floor with respect to the basement) were initially detected to decrease by approximately 24% in the EW motion; near the end of shaking, an increase of about 17% was detected. These analyses and results serve as baseline indicators of the occurrence of structural damage. The detected changes in wave velocities of the shear-beam model are consistent with the observed damage. However, the 1-D shear-beam model is not sufficient to simulate the coupling of translational and torsional responses in the NS motion. The wave method is shown to be suitable for actual implementation in structural health monitoring systems, provided the resolution and accuracy of the model are carefully assessed for its effectiveness in post-earthquake damage detection in buildings.
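The core of the wave method reduces to two relations: the wave travel time between two recording levels gives the shear-wave velocity, and, assuming G = ρv² with unchanged mass density, a velocity drop maps to a stiffness loss. A minimal sketch with illustrative numbers, not Torre Central values:

```python
def shear_wave_velocity(height_m: float, travel_time_s: float) -> float:
    """Shear-wave velocity between two recording levels, v = H / dt,
    where dt is the travel time read off the IRF pulse arrivals."""
    return height_m / travel_time_s

def stiffness_change(v_before: float, v_after: float) -> float:
    """Relative change in shear stiffness implied by a velocity change,
    assuming G = rho * v^2 with unchanged density (negative = loss)."""
    return v_after ** 2 / v_before ** 2 - 1.0
```

For example, a 0.09 s travel time over 27 m gives v = 300 m/s; a drop to 270 m/s (10% in velocity) implies roughly a 19% loss of shear stiffness.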

Keywords: Chile earthquake, damage detection, earthquake response, impulse response function, shear beam model, shear wave velocity, structural health monitoring, Torre Central building, wave method

Procedia PDF Downloads 369
685 A Corpus Study of English Verbs in Chinese EFL Learners’ Academic Writing Abstracts

Authors: Shuaili Ji

Abstract:

The correct use of verbs is an important element of high-quality research articles; thus, for Chinese EFL learners, it is important to master the characteristics of verbs and to use them precisely. However, some studies have shown that there are differences between learners' and native speakers' use of verbs and that learners have difficulty with English verbs. This corpus-based quantitative research can enhance learners' knowledge of English verbs and improve the quality of research article abstracts, and of academic writing as a whole. The aim of this study is to find the differences between learners' and native speakers' use of verbs and to study the factors that contribute to those differences. To this end, the research question is as follows: what are the differences between the verbs most frequently used by learners and those used by native speakers? The question is answered through a corpus-based, data-driven analysis of the verbs used by learners in their abstracts in terms of collocation, colligation, and semantic prosody. The results show that: (1) EFL learners markedly overused 'be, can, find, make' and underused 'investigate, examine, may'; among modal verbs, learners markedly overused 'can' while underusing 'may'. (2) Learners markedly overused 'we find + object clause' while underusing 'nouns (results, findings, data) + suggest/indicate/reveal + object clause' when expressing research results. (3) Learners tended to transfer the collocation, colligation, and semantic prosody of shǐ and zuò to 'make'. (4) Learners markedly overused 'BE + V-ed' and used BE as a main verb; they also markedly overused the base forms of BE such as be, is, are, while underusing its inflections (was, were). These results reveal learners' lack of accuracy and idiomaticity in verb usage. Under the influence of conceptual transfer from Chinese, the verbs in learners' abstracts showed obvious mother-tongue transfer.
In addition, learners have not fully mastered the use of verbs, avoiding complex colligations to prevent errors. Based on these findings, the present study has implications for English teaching, particularly for English academic abstract writing in China. Further research could examine the use of verbs in whole dissertations to find out whether the characteristics of the verbs in abstracts hold for the dissertation as a whole.
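Over- and underuse of the kind reported in (1)-(4) is conventionally tested with Dunning's log-likelihood statistic, comparing a verb's frequency in the learner corpus against a reference (native-speaker) corpus. A minimal sketch; the corpus sizes and counts in the example are invented for illustration:

```python
import math

def log_likelihood(freq_a: int, total_a: int, freq_b: int, total_b: int) -> float:
    """Dunning log-likelihood keyness of a word in corpus A vs corpus B.
    freq_*: observed counts of the word; total_*: corpus sizes in tokens.
    Larger values mean a stronger frequency difference (3.84 ~ p < 0.05);
    overuse vs underuse is read off the relative frequencies."""
    expected_a = total_a * (freq_a + freq_b) / (total_a + total_b)
    expected_b = total_b * (freq_a + freq_b) / (total_a + total_b)
    ll = 0.0
    if freq_a:
        ll += freq_a * math.log(freq_a / expected_a)
    if freq_b:
        ll += freq_b * math.log(freq_b / expected_b)
    return 2.0 * ll
```

When a verb occurs at the same relative frequency in both corpora the statistic is zero; a verb like 'make', heavily overused by learners, would score well above the 3.84 significance threshold.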

Keywords: academic writing abstracts, Chinese EFL learners, corpus-based, data-driven, verbs

Procedia PDF Downloads 335
684 Enhancing Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method

Authors: Vishal Das, Tianyi Mao, Zhicheng Geng, Carmen Flores, Diego Pelloso, Fang Wang

Abstract:

Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry and an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location in which it operates. Each product has its own sell-in and sell-out time series data, which are forecasted on a weekly and monthly scale for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with the Amazon Machine Learning Solutions Lab, has developed an in-house solution that uses machine learning models for forecasting. Similar products are combined so that there is one model for each product category. In this way, the models learn from a larger set of data, and there are fewer models to maintain. The solution is scalable to all product categories and is designed to be flexible enough to include any new product, or eliminate any existing product, in a product category as requirements change. We show how the machine learning development environment on Amazon Web Services (AWS) can be used to explore a set of forecasting models and create business intelligence dashboards that work with the existing demand planning tools at Nestlé. We explored recent deep neural networks (DNN), which show promising results for a variety of time series forecasting problems. Specifically, we used DeepAR, an autoregressive model that can group similar time series together and provide robust predictions. To further enhance the accuracy of the predictions and incorporate domain-specific knowledge, we designed an ensemble approach combining DeepAR and an XGBoost regression model. As part of the ensemble approach, we interlinked the sell-out and sell-in information to ensure that a future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%.
The machine learning (ML) pipeline implemented in the cloud is currently being extended for other product categories and is getting adopted by other geomarkets.
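The two ensemble ingredients can be sketched schematically, assuming the per-model predictions are already available as arrays: a weighted blend of the two forecasts (the 0.6 weight is illustrative and would in practice be tuned on a validation window) and a WAPE-style accuracy metric of the kind commonly used in demand planning. This is a stand-in for the idea, not the Nestlé/AWS pipeline code:

```python
import numpy as np

def ensemble_forecast(pred_deepar, pred_xgb, w: float = 0.6):
    """Convex combination of two model forecasts for the same horizon."""
    return w * np.asarray(pred_deepar, float) + (1.0 - w) * np.asarray(pred_xgb, float)

def wape(actual, forecast) -> float:
    """Weighted absolute percentage error: sum|y - yhat| / sum|y|.
    Lower is better; robust to intermittent low-volume weeks."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    return float(np.abs(actual - forecast).sum() / np.abs(actual).sum())
```

Choosing w by minimising WAPE on held-out weeks is one simple way the "more than 50%" improvement over benchmark statistical models could be measured and tuned.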

Keywords: sell-in and sell-out forecasting, demand planning, DeepAR, retail, ensemble machine learning, time-series

Procedia PDF Downloads 276
683 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri

Authors: Shishay Kidanu, Abdullah Alhaj

Abstract:

Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development, with potential threats to lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility is crucial for effective mitigation through strategic changes in land use planning and practices. This study utilizes geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole-influencing factors, ranging from slope characteristics to proximity to geological structures, were analyzed. The frequency ratio method establishes relationships between the attribute classes of these factors and sinkhole events, deriving class weights that indicate their relative importance. Weighted integration of these factors is accomplished using the Analytic Hierarchy Process (AHP) and the Weighted Linear Combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Employing the Jenks natural breaks classification method, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the area under the curve (AUC) and sinkhole density index (SDI) methods, demonstrates a robust correlation with the sinkhole inventory data. The prediction rate curve yields an AUC value of 74%, indicating 74% validation accuracy. The SDI result further supports the success of the sinkhole susceptibility model. This model offers reliable predictions of the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies.
Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.
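The WLC step described above is, at its core, a weighted sum of normalised factor rasters using AHP-derived weights. A minimal NumPy sketch; the toy rasters and weights are illustrative, not the study's nine factors:

```python
import numpy as np

def wlc_susceptibility(factors, weights):
    """Weighted linear combination: SSI = sum_i w_i * x_i, applied
    cell-by-cell to a list of normalised factor rasters (2-D arrays).
    AHP-derived weights are expected to sum to 1."""
    weights = np.asarray(weights, float)
    if abs(weights.sum() - 1.0) > 1e-9:
        raise ValueError("AHP weights must sum to 1")
    stack = np.stack([np.asarray(f, float) for f in factors])
    # contract the weight vector against the factor axis of the stack
    return np.tensordot(weights, stack, axes=1)
```

The resulting SSI raster would then be sliced into the five zones (very low to very high) with natural-breaks thresholds.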

Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri

Procedia PDF Downloads 74
682 Assessment of Environmental Quality of an Urban Setting

Authors: Namrata Khatri

Abstract:

The rapid growth of cities is transforming the urban environment and posing significant challenges for environmental quality. This study examines the urban environment of Belagavi in Karnataka, India, using geostatistical methods to assess the spatial pattern and land use distribution of the city and to evaluate the quality of the urban environment. The study is driven by the necessity to assess the environmental impact of urbanisation. Satellite data were utilised to derive information on land use and land cover. The investigation revealed that land use had changed significantly over time, with a drop in vegetation cover and an increase in built-up areas. High-resolution satellite data were also utilised to map the city's open areas and gardens. GIS-based analysis was used to assess public green space accessibility and to identify regions with inadequate waste management practices. The findings revealed that garbage collection and disposal techniques in specific areas of the city need to be improved. Moreover, the study evaluated the city's thermal environment using Landsat 8 land surface temperature (LST) data. The investigation found that built-up regions had higher LST values than green areas, pointing to the city's urban heat island (UHI) effect. The study's conclusions have far-reaching ramifications for urban planners and policymakers in Belagavi and other similar cities. The findings may be utilised to create sustainable urban planning strategies that address the environmental effects of urbanisation while also improving the quality of life for city dwellers. Satellite data and high-resolution satellite images were gathered for the study, and remote sensing and GIS tools were utilised to process and analyse the data. Ground-truthing surveys were also carried out to confirm the accuracy of the remote sensing and GIS-based data.
Overall, this study provides a complete assessment of Belgaum's environmental quality and emphasizes the potential of remote sensing and geographic information systems (GIS) approaches in environmental assessment and management.
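The built-up vs green LST comparison reduces to a masked-mean difference, a simple surface-UHI indicator. The class codes and temperature values below are illustrative, not Landsat 8 results for Belagavi:

```python
import numpy as np

def mean_lst_by_class(lst, landcover, cls) -> float:
    """Mean land-surface temperature over pixels of one land-cover class."""
    return float(np.asarray(lst, float)[np.asarray(landcover) == cls].mean())

def uhi_intensity(lst, landcover, built_cls=1, green_cls=2) -> float:
    """Simple surface-UHI indicator: mean LST of built-up pixels minus
    mean LST of green pixels (class codes are illustrative)."""
    return (mean_lst_by_class(lst, landcover, built_cls)
            - mean_lst_by_class(lst, landcover, green_cls))
```

A positive difference, as found in the study, indicates that built-up areas run hotter than vegetated ones.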

Keywords: environmental quality, UEQ, remote sensing, GIS

Procedia PDF Downloads 81
681 Memory Retrieval and Implicit Prosody during Reading: Anaphora Resolution by L1 and L2 Speakers of English

Authors: Duong Thuy Nguyen, Giulia Bencini

Abstract:

The present study examined structural and prosodic factors in the computation of antecedent-reflexive relationships and sentence comprehension in native English speakers (L1) and Vietnamese-English bilinguals (L2). Participants read sentences presented on a computer screen in one of three presentation formats aimed at manipulating prosodic parsing: word-by-word (RSVP), phrase-segment (self-paced), or whole-sentence (self-paced), then completed a grammaticality rating and a comprehension task (following Pratt & Fernandez, 2016). The design crossed three factors: syntactic structure (simple; complex), grammaticality (target-match; target-mismatch), and presentation format. An example item is provided in (1): (1) The actress that (Mary/John) interviewed at the awards ceremony (about two years ago/organized outside the theater) described (herself/himself) as an extreme workaholic. Results showed that overall, both L1 and L2 speakers made use of a good-enough processing strategy at the expense of more detailed syntactic analyses. L1 and L2 speakers' comprehension and grammaticality judgements were negatively affected by the most prosodically disruptive condition (word-by-word). However, the two groups differed in their performance in the other two reading conditions. For L1 speakers, the whole-sentence and phrase-segment formats were both facilitative in the grammaticality rating and comprehension tasks; for L2 speakers, compared with the whole-sentence condition, the phrase-segment paradigm did not significantly improve accuracy or comprehension. These findings are consistent with those of Pratt & Fernandez (2016), who found a similar pattern of results in the processing of subject-verb agreement relations using the same experimental paradigm and prosodic manipulation with L1 English and L2 English-Spanish speakers.
The results provide further support for a Good-Enough cue model of sentence processing that integrates cue-based retrieval and implicit prosodic parsing (Pratt & Fernandez, 2016) and highlights similarities and differences between L1 and L2 sentence processing and comprehension.

Keywords: anaphora resolution, bilingualism, implicit prosody, sentence processing

Procedia PDF Downloads 152
680 A Damage-Plasticity Concrete Model for Damage Modeling of Reinforced Concrete Structures

Authors: Thanh N. Do

Abstract:

This paper addresses the modeling of two critical behaviors of concrete material in reinforced concrete components: (1) the increase in strength and ductility due to confining stresses from surrounding transverse steel reinforcements, and (2) the progressive deterioration in strength and stiffness due to high strain and/or cyclic loading. To improve the state-of-the-art, the author presents a new 3D constitutive model of concrete material based on plasticity and continuum damage mechanics theory to simulate both the confinement effect and the strength deterioration in reinforced concrete components. The model defines a yield function of the stress invariants and a compressive damage threshold based on the level of confining stresses to automatically capture the increase in strength and ductility when subjected to high compressive stresses. The model introduces two damage variables to describe the strength and stiffness deterioration under tensile and compressive stress states. The damage formulation characterizes well the degrading behavior of concrete material, including the nonsymmetric strength softening in tension and compression, as well as the progressive strength and stiffness degradation under primary and follower load cycles. The proposed damage model is implemented in a general purpose finite element analysis program allowing an extensive set of numerical simulations to assess its ability to capture the confinement effect and the degradation of the load-carrying capacity and stiffness of structural elements. It is validated against a collection of experimental data of the hysteretic behavior of reinforced concrete columns and shear walls under different load histories. These correlation studies demonstrate the ability of the model to describe vastly different hysteretic behaviors with a relatively consistent set of parameters. The model shows excellent consistency in response determination with very good accuracy. 
Its numerical robustness and computational efficiency are also very good and will be further assessed with large-scale simulations of structural systems.
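The damage idea in the model can be illustrated in one dimension: stress follows σ = (1 − d)·E·ε, with the scalar damage d growing once a threshold strain is exceeded. The linear evolution law and all parameter values below are illustrative simplifications, not the calibrated 3D constitutive model of the paper:

```python
def damaged_stress(strain: float, E: float = 30e9,
                   eps0: float = 1e-4, eps_u: float = 3e-3) -> float:
    """1-D scalar-damage stress: sigma = (1 - d) * E * eps.
    Damage d grows linearly from 0 at the threshold strain eps0 to 1 at
    the ultimate strain eps_u (illustrative law and parameters)."""
    eps = abs(strain)
    if eps <= eps0:
        d = 0.0  # undamaged, linear-elastic branch
    else:
        d = min(1.0, (eps - eps0) / (eps_u - eps0))
    return (1.0 - d) * E * eps
```

Below the threshold the response is linear-elastic; beyond it the secant stiffness (1 − d)·E degrades, producing the strength softening and stiffness loss the full 3D model captures separately in tension and compression.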

Keywords: concrete, damage-plasticity, shear wall, confinement

Procedia PDF Downloads 170
679 Analyzing Competition in Public Construction Projects

Authors: Khaled Hesham Hyari, Amjad Almani

Abstract:

Construction projects in the public sector are commonly awarded through competitive bidding. Over the last decade, the construction project environment in the Middle East went through many changes, driven by factors including the economic crisis, delays in monthly payments, international competition, and a reduced number of projects. These factors had a great impact on the bidding behavior of contractors and their pricing strategies. This paper examines the competition characteristics in public construction projects through an analysis of contractors' bidding results in public construction projects in Jordan over a period of six years (2006-2011). The analyzed projects include all categories, such as infrastructure, buildings, transportation, and engineering services (design and supervision contracts). Data for the 462 projects were obtained from the General Tender’s Directorate in Jordan. The analysis includes studying the bid spread in all projects, as it is an indication of the level of competition in the analyzed bids, together with the factors that affect it, such as the number of bidders, project value, project category, and year. It also studies the signal-to-noise ratio in all projects, as it is an indication of the accuracy of the cost estimating performed by competing bidders and of the bidders' evaluation of project risks, and examines the relationship between the signal-to-noise ratio and parameters such as project category, number of bidders, and changes over the years. Moreover, the analysis includes determining bidders' aggressiveness in bidding, as it is an indication of the level of competition in such projects.
This was performed by determining the pack price, which can be considered the true value of the project, and comparing it with the lowest bid submitted for each project to determine the level of aggressiveness in the submitted bids. The analysis performed in this study should prove useful to owners in understanding the bidding behavior of contractors and in pointing out areas that need improvement when preparing bidding documents. It should also be useful to contractors in understanding the competitive bidding environment and should help them improve their bidding strategies to maximize their success rate in obtaining contracts.
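As a rough illustration of the two statistics analyzed, the spread between the two lowest bids and a signal-to-noise ratio across all bids for a project might be computed as follows. This is a hypothetical sketch with illustrative definitions; the paper's exact formulas may differ:

```python
from statistics import mean, stdev

def bid_spread(bids):
    """Spread between the two lowest bids, as a fraction of the low bid.

    A small spread suggests tight competition for the contract.
    """
    low, second = sorted(bids)[:2]
    return (second - low) / low

def signal_to_noise(bids):
    """Mean of the bids divided by their standard deviation.

    A high ratio suggests bidders agree on the project cost, i.e. less
    estimating 'noise' and a more uniform assessment of project risk.
    """
    return mean(bids) / stdev(bids)

# Illustrative bids (in JOD) for one hypothetical tender.
bids = [1.00e6, 1.05e6, 1.12e6, 1.30e6]
```

With these example bids, the spread of 5% between the two lowest offers and the dispersion of the full set can be tracked per project category, number of bidders, and year, which is the kind of cross-tabulation the abstract describes.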

Keywords: construction projects, competitive bidding, public construction, competition

678 Selective Effect of Occipital Alpha Transcranial Alternating Current Stimulation in Perception and Working Memory

Authors: Andreina Giustiniani, Massimiliano Oliveri

Abstract:

Rhythmic activity in different frequency bands could subserve distinct functional roles during visual perception and visual mental imagery. In particular, alpha band activity is thought to play a role in the active inhibition of both task-irrelevant regions and the processing of non-relevant information. In the present blind, placebo-controlled study, we applied alpha transcranial alternating current stimulation (tACS) over the occipital cortex during both a basic visual perception task and a visual working memory task. To understand whether the role of alpha is more related to a general inhibition of distractors or to an inhibition of task-irrelevant regions, we added a non-visual distraction to both tasks. Sixteen adult volunteers performed both a simple perception and a working memory task during 10 Hz tACS. The electrodes were placed over the left and right occipital cortex, and the current intensity was 1 mA peak-to-baseline. Sham stimulation was chosen as the control condition; to elicit a skin sensation similar to that of real stimulation, electrical stimulation was applied for short periods (30 s) at the beginning of the session and then turned off. The tasks were split into two sets: one set included distractors and the other did not. Motor interference was added by changing the answer key after subjects completed the first set of trials. The results show that alpha tACS improves working memory only when no motor distractors are added, suggesting a role of alpha tACS in inhibiting task-irrelevant regions rather than in a general inhibition of distractors. Additionally, we found that alpha tACS does not affect accuracy and hit rates during the visual perception task.
These results suggest that alpha activity in the occipital cortex plays a different role in perception and in working memory: it could optimize performance in tasks in which attention is internally directed, as in this working memory paradigm, but only when there is no motor distraction. Moreover, alpha tACS improves working memory performance by means of inhibition of task-irrelevant regions, while it does not affect perception.

Keywords: alpha activity, interference, perception, working memory

677 A Preliminary Kinematic Comparison of Vive and Vicon Systems for the Accurate Tracking of Lumbar Motion

Authors: Yaghoubi N., Moore Z., Van Der Veen S. M., Pidcoe P. E., Thomas J. S., Dexheimer B.

Abstract:

Optoelectronic 3D motion capture systems, such as the Vicon kinematic system, are widely utilized in biomedical research to track joint motion. These systems are considered powerful and accurate measurement tools, with <2 mm average error. However, they are costly and may be difficult to implement and utilize in a clinical setting. 3D virtual reality (VR) is gaining popularity as an affordable and accessible tool to investigate motor control and perception in a controlled, immersive environment. The HTC Vive VR system includes puck-style trackers that seamlessly integrate into its VR environments. These affordable, wireless, lightweight trackers may be more feasible for clinical kinematic data collection. However, the accuracy of HTC Vive Trackers (3.0), when compared to optoelectronic 3D motion capture systems, remains unclear. In this preliminary study, we compared the HTC Vive Tracker system to a Vicon kinematic system in a simulated lumbar flexion task. A 6-DOF robot arm (SCORBOT ER VII, Eshed Robotec/RoboGroup, Rosh Ha’Ayin, Israel) completed various reaching movements to mimic increasing levels of hip flexion (15°, 30°, 45°). Light-reflective markers, along with one HTC Vive Tracker (3.0), were placed on the rigid segment separating the elbow and shoulder of the robot. We compared position measures simultaneously collected from both systems. Our preliminary analysis shows no significant differences between the Vicon motion capture system and the HTC Vive Tracker in the Z axis, regardless of hip flexion angle. In the X axis, we found no significant differences between the two systems at 15 degrees of hip flexion but minimal differences at 30 and 45 degrees, ranging from 0.047 cm ± 0.02 SE (p = .03) at 30 degrees of hip flexion to 0.194 cm ± 0.024 SE (p < .0001) at 45 degrees of hip flexion. In the Y axis, we found a minimal difference at 15 degrees of hip flexion only (0.743 cm ± 0.275 SE; p = .007).
This preliminary analysis shows that the HTC Vive Tracker may be an appropriate, affordable option for gross motor motion capture when the Vicon system is not available, such as in clinical settings. Further research is needed to compare these two motion capture systems in different body poses and for different body segments.
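A comparison of simultaneously sampled positions from two tracking systems reduces, per axis, to something like the following sketch. This is a hypothetical helper, not the authors' analysis pipeline, which used significance testing rather than raw mean errors:

```python
def per_axis_error(ref, test):
    """Mean absolute per-axis difference between two motion-capture systems.

    ref, test: lists of (x, y, z) positions sampled at the same instants,
    already expressed in a common coordinate frame and common units (cm).
    Returns a (dx, dy, dz) tuple of mean absolute differences.
    """
    n = len(ref)
    return tuple(
        sum(abs(r[axis] - t[axis]) for r, t in zip(ref, test)) / n
        for axis in range(3)
    )

# Two synthetic samples where the test system is offset 0.1 cm in X only.
err = per_axis_error(
    [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)],
    [(0.1, 0.0, 0.0), (1.1, 1.0, 1.0)],
)
```

In practice, the hard part of such a comparison is the spatial and temporal alignment of the two coordinate frames; the error computation itself is simple once samples are paired.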

Keywords: lumbar, Vive tracker, Vicon system, 3D motion, ROM

676 Prediction of Ionic Liquid Densities Using a Corresponding State Correlation

Authors: Khashayar Nasrifar

Abstract:

Ionic liquids (ILs) exhibit particular properties, exemplified by extremely low vapor pressure and high thermal stability. The properties of ILs can be tailored by proper selection of cations and anions. As such, ILs are appealing as potential solvents to substitute traditional solvents with high vapor pressure. One of the IL properties required in chemical and process design is density. In developing corresponding-states liquid density correlations, the scaling hypothesis is often used. The hypothesis expresses the temperature dependence of saturated liquid densities near the vapor-liquid critical point as a function of reduced temperature. Extending this temperature dependence, several successful correlations were developed to accurately correlate the densities of normal liquids from the triple point to the critical point. By applying mixing rules, the liquid density correlations are extended to liquid mixtures as well. ILs are not molecular liquids, nor are they classified among normal liquids. Also, ILs are often used far from equilibrium conditions. Nevertheless, in calculating the properties of ILs, corresponding-states correlations would be useful if no experimental data were available. With well-known generalized saturated liquid density correlations, however, the accuracy in predicting IL densities is limited: an average error of 4-5% should be expected. In this work, a data bank was compiled, and a simplified, concise corresponding-states saturated liquid density correlation is proposed by phenomenologically modifying the reduced temperature using the temperature dependence of the interaction parameter of the Soave-Redlich-Kwong equation of state. This modification improves the temperature dependence of the developed correlation. Parametrization was next performed to optimize the three global parameters of the correlation. The correlation was then applied to the ILs in the data bank with satisfactory predictions.
The correlation was applied to IL densities at 0.1 MPa and was tested with an average uncertainty of around 2%, with no adjustable parameters; only the critical temperature, critical volume, and acentric factor are required. Methods to extend the predictions to higher pressures (up to 200 MPa) were also devised. Compared to other methods, this correlation was found to be more accurate. This work also presents the chronological order of development of such correlations for ILs, along with their pros and cons.
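For context, a classic generalized corresponding-states correlation of the kind the abstract says performs poorly for ILs is the Rackett equation in its Yamada-Gunn form, which likewise requires only the critical temperature, critical volume, and acentric factor. This is a textbook correlation shown for illustration, not the correlation proposed in the paper:

```python
def saturated_liquid_volume(T, Tc, Vc, omega):
    """Rackett-type corresponding-states estimate (Yamada-Gunn form):

        Vs = Vc * (0.29056 - 0.08775 * omega) ** ((1 - Tr) ** (2/7))

    T, Tc in K; Vc in cm^3/mol; omega is the acentric factor.
    Returns the saturated liquid molar volume in cm^3/mol.
    """
    Tr = T / Tc
    Zra = 0.29056 - 0.08775 * omega  # generalized Rackett parameter
    return Vc * Zra ** ((1.0 - Tr) ** (2.0 / 7.0))

# Example with water (Tc = 647.1 K, Vc = 55.9 cm^3/mol, omega = 0.344),
# a normal liquid whose critical constants are well established.
v_water = saturated_liquid_volume(298.15, 647.1, 55.9, 0.344)
```

For water at 298.15 K this reproduces the molar volume of roughly 18 cm³/mol; for ILs, by contrast, the abstract reports that such generalized forms drift to 4-5% average error, which motivates the modified reduced-temperature dependence proposed in the work.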

Keywords: correlation, corresponding state principle, ionic liquid, density

675 Effects of Partial Sleep Deprivation on Prefrontal Cognitive Functions in Adolescents

Authors: Nurcihan Kiris

Abstract:

Restricted sleep is common in young adults and adolescents. Few objective studies have examined the effects of sleep deprivation on cognitive performance, and their results are not yet clear. In particular, the effect of sleep deprivation on cognitive functions associated with the frontal lobe, such as attention, executive functions, and working memory, is not well known. The aim of this study is to investigate experimentally the effect of partial sleep deprivation in adolescents on frontal lobe cognitive tasks, including working memory, strategic thinking, simple attention, continuous attention, executive functions, and cognitive flexibility. Subjects were recruited from volunteer students of Cukurova University. Eighteen adolescents underwent four consecutive nights of monitored sleep restriction (6-6.5 hr/night) and four nights of sleep extension (10-10.5 hr/night), in counterbalanced order and separated by a washout period. Following each sleep period, cognitive performance was assessed at a fixed morning time using a computerized neuropsychological battery of frontal lobe function tasks, a timed test providing both accuracy and reaction time outcome measures. Only spatial working memory performance was found to be statistically lower in the restricted sleep condition than in the extended sleep condition. There was no significant difference in the performance of cognitive tasks evaluating simple attention, continuous attention, executive functions, and cognitive flexibility. It is thought that the spatial working memory and strategic thinking skills of adolescents may be especially susceptible to sleep deprivation. Adolescents are therefore predicted to be optimally successful under ideal sleep conditions, especially in circumstances requiring the short-term storage of visual information, the processing of stored information, and strategic thinking.
The findings of this study may also point to possible negative functional effects of partial sleep deprivation on the processing of academic, social, and emotional inputs in adolescents. Acknowledgment: This research was supported by the Cukurova University Scientific Research Projects Unit.

Keywords: attention, cognitive functions, sleep deprivation, working memory
