Search results for: empathic accuracy

546 Delineating Floodplain along the Nasia River in Northern Ghana Using HAND Contour

Authors: Benjamin K. Ghansah, Richard K. Appoh, Iliya Nababa, Eric K. Forkuo

Abstract:

The Nasia River is an important source of water for domestic and agricultural purposes for the inhabitants of its catchment. Major farming activities take place within the floodplain of the river and its network of tributaries. The actual inundation extent of the river system is, however, unknown. Reasons for this lack of information include financial constraints and inadequate human resources, as flood modelling is becoming increasingly complex. Knowledge of the inundation extent will help in assessing the risk posed by the annual flooding of the river and in planning flood recession agricultural activities. This study used a simple terrain-based algorithm, Height Above Nearest Drainage (HAND), to delineate the floodplain of the Nasia River and its tributaries. The HAND model is a drainage-normalized digital elevation model whose height reference is the local drainage system rather than average mean sea level (AMSL). The underlying principle of the HAND model is that hillslope flow paths behave differently when the reference gradient is the local drainage network rather than the seaward gradient. The new terrain model of the catchment was created using NASA's 30 m SRTM Digital Elevation Model (DEM) as the only data input. Contours (HAND contours) were then generated from the normalized DEM. Based on a field flood inundation survey, historical information on flooding of the area, and satellite images, a HAND contour of 2 m was found to correlate best with the flood inundation extent of the river and its tributaries. A percentage accuracy of 75% was obtained when the surface area enclosed by the 2 m contour was compared with the surface area of the floodplain computed from a satellite image captured during the peak flooding season in September 2016. It was estimated that the flooding of the Nasia River and its tributaries created a floodplain area of 1011 km².
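
The core of the HAND computation is compact enough to sketch directly. The following is a minimal illustration, assuming a D8 flow-direction grid and a stream (drainage) mask have already been derived from the SRTM DEM; the function and the toy loop guard are illustrative, not the authors' implementation:

```python
# Minimal HAND sketch: for each cell, walk the D8 flow path to the nearest
# drainage cell and subtract that cell's elevation.
import numpy as np

# D8 codes -> (row, col) offsets of the downstream neighbour
D8 = {1: (0, 1), 2: (1, 1), 4: (1, 0), 8: (1, -1),
      16: (0, -1), 32: (-1, -1), 64: (-1, 0), 128: (-1, 1)}

def hand(dem, fdir, streams):
    """Height Above Nearest Drainage for every cell of `dem`."""
    nrow, ncol = dem.shape
    out = np.full(dem.shape, np.nan)
    for i in range(nrow):
        for j in range(ncol):
            r, c = i, j
            for _ in range(nrow * ncol):           # guard against loops
                if streams[r, c]:
                    out[i, j] = dem[i, j] - dem[r, c]
                    break
                if fdir[r, c] not in D8:           # pit or undefined: leave NaN
                    break
                dr, dc = D8[fdir[r, c]]
                r, c = r + dr, c + dc
                if not (0 <= r < nrow and 0 <= c < ncol):
                    break
    return out

# Floodplain delineation with the 2 m HAND contour found in the study:
# floodplain = hand(dem, fdir, streams) <= 2.0
```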

Keywords: digital elevation model, floodplain, HAND contour, inundation extent, Nasia River

Procedia PDF Downloads 443
545 Artificial Neural Networks and Hidden Markov Model in Landslides Prediction

Authors: C. S. Subhashini, H. L. Premaratne

Abstract:

Landslides are the most recurrent and prominent disaster in Sri Lanka, which has been subjected to a number of extreme landslide events that resulted in significant loss of life, material damage, and distress. A solution for preparedness and mitigation is needed to reduce the recurrent losses associated with landslides. Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs) are now widely used in many computer applications spanning multiple domains. This research examines the effectiveness of Artificial Neural Networks and Hidden Markov Models in landslide prediction and the possibility of applying these technologies to predict landslides in a prominent geographical area of Sri Lanka. A thorough survey was conducted with the participation of resource persons from several national universities in Sri Lanka to identify and rank the factors influencing landslides. A landslide database was created using existing topographic, soil, drainage, and land cover maps and historical data. The landslide-related factors, which include external factors (rainfall and number of previous occurrences) and internal factors (soil material, geology, land use, curvature, soil texture, slope, aspect, soil drainage, and soil effective thickness), are extracted from the landslide database. These factors are used to estimate the likelihood of landslides using an ANN and an HMM. Each model acquires the relationship between the landslide factors and the hazard index during the training session. The models, with the landslide-related factors as inputs, are trained to predict three classes: 'landslide occurs', 'landslide does not occur', and 'landslide likely to occur'. Once trained, the models are able to predict the most likely class for the prevailing data. Finally, the two models were compared with regard to prediction accuracy, false acceptance rate, and false rejection rate. This research indicates that the Artificial Neural Network predicts landslides more efficiently and effectively than the Hidden Markov Model and could be used as a strong decision support system.
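
As an illustration of the ANN side of the comparison, a minimal three-class classifier of the kind described can be sketched with scikit-learn; the factor matrix and labels below are random placeholders for the landslide database, and the network size is an assumption:

```python
# Sketch of a three-class landslide classifier; not the authors' network.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Columns would be rainfall, previous occurrences, slope, aspect, geology, ...
X = np.random.rand(300, 12)                  # placeholder factor matrix
y = np.random.choice(["occurs", "does not occur", "likely"], 300)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
clf.fit(X, y)
print(clf.predict(X[:5]))                    # most likely class per sample
```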

Keywords: landslides, influencing factors, neural network model, hidden markov model

Procedia PDF Downloads 375
544 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine

Authors: Hira Lal Gope, Hidekazu Fukai

Abstract:

The aim of this study is to develop a system that can identify and sort peaberries automatically at low cost for coffee producers in developing countries. In this paper, the focus is on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. A peaberry is not a defective bean, but neither is it a normal bean: it develops as a single, relatively round seed in a coffee cherry instead of the usual flat-sided pair of beans, and it has a distinct value and flavor. To improve the taste of the coffee, it is necessary to separate the peaberries from the normal beans before the green coffee beans are roasted; otherwise, the flavors mix and the overall taste suffers. During roasting, all the beans should be uniform in shape, size, and weight; otherwise, larger beans take longer to roast through. A peaberry has a different size and shape even though it has the same weight as a normal bean, and it roasts more slowly than normal beans; therefore, neither weight- nor size-based sorting provides a good option for selecting peaberries. Defective beans, e.g., sour, broken, black, and faded beans, are easy to check and pick out manually by hand. On the other hand, picking out peaberries is very difficult even for trained specialists because the shape and color of a peaberry are similar to those of normal beans. In this study, we use image processing and machine learning techniques to discriminate normal and peaberry beans as part of the sorting system. As a first step, we applied deep Convolutional Neural Networks (CNN) and a Support Vector Machine (SVM) to discriminate peaberries from normal beans. Better performance was obtained with the CNN than with the SVM. The artificial neural network trained on high-performance CPUs and GPUs in this work will then be installed on an inexpensive Raspberry Pi system with limited computational power, since we assume the system will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.
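
A small CNN of the kind the abstract describes can be sketched in Keras; the input size, layer widths, and training call are assumptions rather than the authors' architecture:

```python
# Sketch of a binary peaberry/normal CNN classifier; illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # RGB bean image crops
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # peaberry vs normal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # labelled bean crops
```

After training, such a model would typically be converted to a lighter runtime (e.g., TensorFlow Lite) for the Raspberry Pi deployment the abstract describes.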

Keywords: convolutional neural networks, coffee bean, peaberry, sorting, support vector machine

Procedia PDF Downloads 138
543 Simulation of Scaled Model of Tall Multistory Structure: Raft Foundation for Experimental and Numerical Dynamic Studies

Authors: Omar Qaftan

Abstract:

Earthquakes can cause tremendous loss of human life and severe damage to many types of civil engineering structures, especially tall buildings. Predicting the response of a multistory structure subjected to earthquake loading is a complex task, and it requires both physical and numerical modelling. In many circumstances, scale models on a shaking table are a more economical option than comparable full-scale tests. A shaking table apparatus is a powerful tool that offers the possibility of understanding the actual behaviour of structural systems under earthquake loading, but a set of scaling relations is required to predict the behaviour of the full-scale structure. Selecting the scale factors is the most important step in the simulation of the prototype by the scaled model. In this paper, the principles of the scale-modelling procedure are explained in detail, and the simulation of a scaled multi-storey concrete structure for dynamic studies is investigated. A complete dynamic simulation analysis is carried out experimentally and numerically with a scale factor of 1/50. The frequency-domain characteristics and lateral displacements of both the numerical and experimental scaled models are determined. The procedure accounts for the actual dynamic behaviour of both the full-size prototype structure and the scaled model, and it is adapted to determine the effects of the tall multi-storey structure on a raft foundation. Four generated accelerograms complying with EC8 were used as inputs for the time-history motions. The experimental results, expressed in terms of displacements and accelerations, are compared with those obtained from a conventional fixed-base numerical model. The four time histories were applied to both the experimental and numerical models, and the experimental outputs showed acceptable accuracy compared with the numerical model outputs. Therefore, this modelling methodology is valid and qualified for different shaking table experiments.
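
The scale-factor arithmetic can be made concrete. Under the common assumption for shaking table tests that accelerations are reproduced at full scale (gravity cannot be scaled), the derived factors for a 1/50 model follow directly; the exact factor set adopted in the paper may differ:

```python
# Worked similitude example for a 1/50 model with accelerations at full scale.
L = 1 / 50                 # length scale factor (model / prototype)
a = 1.0                    # acceleration scale
t = (L / a) ** 0.5         # time scale, from a = L / t^2
f = 1 / t                  # frequency scale
v = L / t                  # velocity scale
print(f"time x{t:.4f}, frequency x{f:.2f}, velocity x{v:.4f}")
# => model time histories are compressed by sqrt(50) ~ 7.07, so a model
#    frequency is about 7 times the corresponding prototype frequency.
```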

Keywords: structure, raft, soil, interaction

Procedia PDF Downloads 127
542 Comparison of 18F-FDG and 11C-Methionine PET-CT for Assessment of Response to Neoadjuvant Chemotherapy in Locally Advanced Breast Carcinoma

Authors: Sonia Mahajan Dinesh, Anant Dinesh, Madhavi Tripathi, Vinod Kumar Ramteke, Rajnish Sharma, Anupam Mondal

Abstract:

Background: Neo-adjuvant chemotherapy plays an important role in the treatment of breast cancer by decreasing the tumour load, and it offers an opportunity to evaluate the response of the primary tumour to chemotherapy. Standard anatomical imaging modalities are unable to accurately reflect the response to chemotherapy until several cycles of drug treatment have been completed. Metabolic imaging using tracers like 18F-fluorodeoxyglucose (FDG) as a marker of glucose metabolism, or amino acid tracers like L-methyl-11C methionine (MET), has a potential role in the measurement of treatment response. In this study, our objective was to compare these two PET tracers for the assessment of response to neoadjuvant chemotherapy in locally advanced breast carcinoma. Methods: In our prospective study, 20 female patients with histologically proven locally advanced breast carcinoma underwent PET-CT imaging using FDG and MET before and after three cycles of neoadjuvant chemotherapy (CAF regimen). Thereafter, all patients underwent modified radical mastectomy (MRM), and the resected specimen was sent for histopathological analysis. Tumour response to the neoadjuvant chemotherapy was evaluated by PET-CT imaging using PERCIST criteria and correlated with the histological results. The responses calculated were compared for statistical significance using the paired t-test. Results: Mean SUVmax for the primary lesion on FDG PET and MET PET was 15.88±11.12 and 5.01±2.14 respectively (p<0.001), and for the axillary lymph nodes 7.61±7.31 and 2.75±2.27 respectively (p=0.001). A statistically significant response in the primary tumour and axilla was noted on both FDG and MET PET after three cycles of NAC. Complete response in the primary tumour was seen in only 1 patient on FDG and 7 patients on MET PET (p=0.001), whereas there was no histological complete resolution of the tumour in any patient. Response to therapy in the axillary nodes was similar on both PET scans (p=0.45) and correlated well with the histological findings. Conclusions: For the primary breast tumour, FDG PET has higher sensitivity and accuracy than MET PET, and for the axilla both have comparable sensitivity and specificity. FDG PET shows higher target-to-background ratios, so response is better predicted for the primary breast tumour and axilla. Also, FDG-PET is widely available and has the advantage of whole-body evaluation in one study.

Keywords: 11C-methionine, 18F-FDG, breast carcinoma, neoadjuvant chemotherapy

Procedia PDF Downloads 500
541 Development of Pothole Management Method Using Automated Equipment with Multi-Beam Sensor

Authors: Sungho Kim, Jaechoul Shin, Yujin Baek, Nakseok Kim, Kyungnam Kim, Shinhaeng Jo

Abstract:

Climate change and the increase in heavy traffic have been accelerating the damage that causes problems such as potholes on asphalt pavement. Potholes cause traffic accidents, vehicle damage, road casualties, and traffic congestion. A quick and efficient maintenance method is needed because potholes are caused by stripping and accelerate pavement distress. In this study, we propose a rapid and systematic pothole management method based on automated pothole-repair equipment that includes a system for measuring pothole volume. Three kinds of cold-mix asphalt mixture were investigated to select the repair material. The materials were evaluated for compliance with quality standards and for applicability to the automated equipment. The pothole volume measurement system consists of a multi-sensor combining a laser sensor and an ultrasonic sensor, installed at the front and side of the automated repair equipment. An algorithm was proposed to calculate the amount of repair material from the measured pothole volume, and a system for releasing the correct amount of material was developed. Field test results showed that the loss of repair material could be reduced from approximately 20% to 6% per pothole. Rapid automated pothole-repair equipment will contribute to better quality and to efficient, economical maintenance, not only by reducing materials and resources but also by calculating the appropriate amount of material. Through field application, it is possible to improve the accuracy of pothole volume measurement, to refine the calculation of material amounts, and to manage pothole data for roads, thereby enabling more efficient pavement maintenance management. Acknowledgment: The authors would like to thank the MOLIT (Ministry of Land, Infrastructure, and Transport). This work was carried out through a project funded by the MOLIT: 'development of 20mm grade for road surface detecting roadway condition and rapid detection automation system for removal of pothole'.
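
The volume-to-material step the abstract describes can be sketched as a numerical integration of the scanned depth map; the mix density and loss factor below are assumed, illustrative values:

```python
# Sketch: integrate the multi-beam depth map over the scan grid, then
# convert pothole volume to the mass of cold-mix asphalt to release.
import numpy as np

def repair_material_kg(depth_m, cell_area_m2, density_kg_m3=2350.0,
                       loss_factor=0.06):
    """depth_m: 2D array of pothole depths (m) from the laser/ultrasonic
    scan; returns the mass of repair mix, allowing for placement loss."""
    volume = np.nansum(depth_m) * cell_area_m2      # simple Riemann sum
    return volume * density_kg_m3 * (1.0 + loss_factor)

depths = np.full((20, 30), 0.05)                     # toy 5 cm deep patch
print(repair_material_kg(depths, cell_area_m2=0.01)) # ~747 kg of mix
```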

Keywords: automated equipment, management, multi-beam sensor, pothole

Procedia PDF Downloads 217
540 Practical Software for Optimum Bore Hole Cleaning Using Drilling Hydraulics Techniques

Authors: Abdulaziz F. Ettir, Ghait Bashir, Tarek S. Duzan

Abstract:

Proper well planning is vital to any successful drilling program: it prevents or overcomes drilling problems and minimizes operating costs. The hydraulic system plays an active role during drilling operations; a well-designed system accelerates the drilling effort and lowers the overall well cost, whereas an improperly designed hydraulic system can slow the drill rate, fail to clean the hole of cuttings, and cause kicks. In most cases, common sense and commercially available computer programs are the only elements required to design the hydraulic system. Drilling optimization is the logical process of analyzing the effects and interactions of drilling variables through applied drilling and hydraulic equations and mathematical modeling, to achieve maximum drilling efficiency at minimum drilling cost. In this paper, practical software is adopted to define drilling optimization models built around four optimum keys, namely Opti-flow, Opti-clean, Opti-slip, and Opti-nozzle, which can help achieve high drilling efficiency at lower cost. The data used in this research come from vertical and horizontal wells recently drilled in Waha Oil Company fields. The input data are: formation type, geopressures, hole geometry, bottom-hole assembly, and mud rheology. The results from all wells show that the proposed program provides higher accuracy than the company's current approach in terms of hole cleaning efficiency and cost breakdown, taking the actual data as the reference base for all wells. Finally, it is recommended to use the established optimization software at the drilling design stage to obtain correct drilling parameters that provide high drilling efficiency, good borehole cleaning, and all other hydraulic parameters that help minimize hole problems and control drilling operation costs.
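
As a sketch of the kind of check an 'Opti-clean' key performs, hole cleaning can be screened by comparing the annular velocity with the cuttings slip velocity. The formula below is a generic textbook check with illustrative parameter values, not the software described in the paper:

```python
# Generic hole-cleaning screen: annular velocity must exceed slip velocity.
import math

def annular_velocity(flow_rate_lpm, hole_d_m, pipe_od_m):
    """Mean upward mud velocity in the annulus (m/s)."""
    area = math.pi / 4 * (hole_d_m**2 - pipe_od_m**2)
    return (flow_rate_lpm / 1000 / 60) / area

v_ann = annular_velocity(2500, 0.3112, 0.1397)   # 12 1/4" hole, 5 1/2" pipe
v_slip = 0.6                                     # assumed cuttings slip velocity, m/s
transport_ratio = (v_ann - v_slip) / v_ann       # > 0 means cuttings move up
print(f"annular velocity {v_ann:.2f} m/s, transport ratio {transport_ratio:.2f}")
```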

Keywords: optimum keys, opti-flow, opti-clean, opti-slip, opti-nozzle

Procedia PDF Downloads 313
539 Structural Health Monitoring of Buildings–Recorded Data and Wave Method

Authors: Tzong-Ying Hao, Mohammad T. Rahmani

Abstract:

This article presents a structural health monitoring (SHM) method based on changes in wave travel times (the wave method) within a layered one-dimensional shear beam model of a structure. The wave method measures the velocity of the shear wave propagating in a building from impulse response functions (IRF) obtained from data recorded at different locations inside the building. If structural damage occurs, the velocity of wave propagation through the structure changes. The wave method analysis is performed on the responses of the Torre Central building, a 9-story shear wall structure located in Santiago, Chile. Because events of different intensity (ambient vibrations, weak and strong earthquake motions) have been recorded at this building, it can serve as a full-scale benchmark to validate the structural health monitoring method utilized. The analysis of inter-story drifts and of the Fourier spectra for the EW and NS motions during the 2010 Chile earthquake is presented. The results for the NS motions suggest coupling of the translational and torsional responses. The system frequencies (estimated from the relative displacement response of the 8th floor with respect to the basement) initially decreased by approximately 24% in the EW motion; near the end of shaking, an increase of about 17% was detected. These analyses and results serve as baseline indicators of the occurrence of structural damage. The detected changes in the wave velocities of the shear beam model are consistent with the observed damage. However, the 1-D shear beam model is not sufficient to simulate the coupling of translational and torsional responses in the NS motion. The wave method is suitable for actual implementation in structural health monitoring systems, provided that the resolution and accuracy of the model are carefully assessed for effectiveness in post-earthquake damage detection in buildings.
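
The core step of the wave method can be sketched as a regularized deconvolution of the roof record by the base record, with the travel time read off the causal peak of the impulse response; the regularization scheme and usage line below are illustrative, not the authors' exact processing:

```python
# Sketch: wave travel time from impulse response functions.
import numpy as np

def travel_time(base, roof, dt, eps=0.05):
    """Regularized frequency-domain deconvolution roof/base; returns the
    lag (s) of the causal IRF peak, i.e. the wave travel time up the building."""
    n = len(base)
    B, R = np.fft.rfft(base), np.fft.rfft(roof)
    H = R * np.conj(B) / (np.abs(B)**2 + eps * np.mean(np.abs(B)**2))
    irf = np.fft.irfft(H, n)
    lag = np.argmax(irf[: n // 2])              # causal part only
    return lag * dt

# v = H_building / travel_time(base_acc, roof_acc, dt)
# A drop in v between events indicates stiffness loss, i.e. damage.
```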

Keywords: Chile earthquake, damage detection, earthquake response, impulse response function, shear beam model, shear wave velocity, structural health monitoring, torre central building, wave method

Procedia PDF Downloads 359
538 A Corpus Study of English Verbs in Chinese EFL Learners’ Academic Writing Abstracts

Authors: Shuaili Ji

Abstract:

The correct use of verbs is an important element of high-quality research articles, so it is important for Chinese EFL learners to master the characteristics of verbs and to use them precisely. However, studies have shown that there are differences between learners' and native speakers' use of verbs and that learners have difficulty using English verbs. This corpus-based quantitative research can enhance learners' knowledge of English verbs and improve the quality of research article abstracts, and even of academic writing as a whole. The aim of this study is to identify the differences between learners' and native speakers' use of verbs and to study the factors that contribute to those differences. To this end, the research question is as follows: What are the differences between the verbs most frequently used by learners and those used by native speakers? The question is answered through a study that uses a corpus-based, data-driven approach to analyze the verbs used by learners in their abstracts in terms of collocation, colligation, and semantic prosody. The results show that: (1) EFL learners clearly overused 'be, can, find, make' and underused 'investigate, examine, may'; among modal verbs, learners overused 'can' while underusing 'may'. (2) Learners overused 'we find + object clause' while underusing 'noun (results, findings, data) + suggest/indicate/reveal + object clause' when expressing research results. (3) Learners tended to transfer the collocation, colligation, and semantic prosody of shǐ and zuò to 'make'. (4) Learners clearly overused 'BE + V-ed' and used BE as a main verb; they also overused the base forms of BE such as be, is, are, while underusing its inflections (was, were). These results reveal learners' lack of accuracy and idiomaticity in verb usage. Due to conceptual transfer from Chinese, the verbs in learners' abstracts show clear mother-tongue transfer. In addition, learners have not fully mastered the use of verbs and avoid complex colligations to prevent errors. Based on these findings, the present study has implications for English teaching, particularly for English academic abstract writing in China. Further research could examine the use of verbs in whole dissertations to find out whether the characteristics of the verbs in abstracts hold for the dissertation as a whole.
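
Over/underuse claims of this kind rest on a keyness statistic comparing a verb's frequency in the learner corpus with a native reference corpus. A minimal sketch using Dunning's log-likelihood follows; the counts are hypothetical, and the choice of statistic is an assumption since the abstract does not name one:

```python
# Keyness sketch: Dunning's log-likelihood for one verb across two corpora.
import math

def log_likelihood(a, b, n1, n2):
    """a, b: verb counts in corpora of sizes n1, n2 (tokens)."""
    e1 = n1 * (a + b) / (n1 + n2)               # expected counts under H0
    e2 = n2 * (a + b) / (n1 + n2)
    ll = 0.0
    for obs, exp in ((a, e1), (b, e2)):
        if obs > 0:
            ll += obs * math.log(obs / exp)
    return 2 * ll

# 'can' in 100k learner tokens vs 200k native tokens (toy counts):
print(log_likelihood(420, 380, 100_000, 200_000))  # > 3.84 => p < .05
```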

Keywords: academic writing abstracts, Chinese EFL learners, corpus-based, data-driven, verbs

Procedia PDF Downloads 322
537 Enhancing Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method

Authors: Vishal Das, Tianyi Mao, Zhicheng Geng, Carmen Flores, Diego Pelloso, Fang Wang

Abstract:

Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry and an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location in which it operates. Each product has its own sell-in and sell-out time series data, which are forecasted on weekly and monthly scales for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with the Amazon Machine Learning Solutions Lab, has developed an in-house solution that uses machine learning models for forecasting. Similar products are combined so that there is one model per product category. In this way, the models learn from a larger set of data, and there are fewer models to maintain. The solution is scalable to all product categories and is designed to be flexible enough to include any new product, or remove any existing product, in a product category as required. We show how the machine learning development environment on Amazon Web Services (AWS) can be used to explore a set of forecasting models and create business intelligence dashboards that work with the existing demand planning tools at Nestlé. We explored recent deep neural networks (DNN), which show promising results for a variety of time series forecasting problems. Specifically, we used a DeepAR autoregressive model that can group similar time series together and provide robust predictions. To further enhance the accuracy of the predictions and include domain-specific knowledge, we designed an ensemble approach combining DeepAR with an XGBoost regression model. As part of the ensemble approach, we interlinked the sell-out and sell-in information to ensure that a future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%. The machine learning (ML) pipeline implemented in the cloud is currently being extended to other product categories and is being adopted by other geomarkets.
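
One simple way to realize a DeepAR-plus-XGBoost ensemble is to let XGBoost learn a correction on top of the DeepAR output, with the linked sell-out signal among its features. The sketch below is an assumption about the design, with placeholder data; it is not Nestlé's pipeline:

```python
# Stacked-ensemble sketch: XGBoost refines an upstream DeepAR forecast.
import numpy as np
import xgboost as xgb

deepar_pred = np.random.rand(500)                  # per-week DeepAR output
features = np.column_stack([deepar_pred,
                            np.random.rand(500, 5)])  # incl. sell-out signal
actuals = deepar_pred + 0.1 * np.random.randn(500)    # placeholder targets

model = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.05)
model.fit(features, actuals)
ensemble_forecast = model.predict(features)
```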

Keywords: sell-in and sell-out forecasting, demand planning, DeepAR, retail, ensemble machine learning, time-series

Procedia PDF Downloads 245
536 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri

Authors: Shishay Kidanu, Abdullah Alhaj

Abstract:

Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development, with potential threats to lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility is crucial for effective mitigation through strategic changes in land use planning and practices. This study utilizes geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole-influencing factors, ranging from slope characteristics to proximity to geological structures, were analyzed. The Frequency Ratio method establishes relationships between the attribute classes of these factors and sinkhole events, deriving class weights that indicate their relative importance. Weighted integration of these factors is accomplished using the Analytic Hierarchy Process (AHP) and the Weighted Linear Combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Employing the Jenks natural breaks classification method, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the Area Under Curve (AUC) and Sinkhole Density Index (SDI) methods, demonstrates a robust correlation with the sinkhole inventory data. The prediction rate curve yields an AUC value of 74%, indicating 74% validation accuracy. The SDI result further supports the success of the sinkhole susceptibility model. The model offers reliable predictions of the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies. Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.
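
The Frequency Ratio step is straightforward arithmetic: the share of sinkholes falling in a factor class divided by the share of area occupied by that class. A sketch with hypothetical values:

```python
# Frequency Ratio sketch for one influencing factor (slope classes).
import pandas as pd

classes = pd.DataFrame({
    "slope_class": ["0-2", "2-5", "5-10", ">10"],
    "sinkholes":   [120, 60, 15, 5],       # events per class (toy values)
    "area_km2":    [300, 250, 200, 250],
})
classes["FR"] = ((classes["sinkholes"] / classes["sinkholes"].sum())
                 / (classes["area_km2"] / classes["area_km2"].sum()))
print(classes)   # FR > 1 marks classes disproportionately prone to sinkholes
# The susceptibility index is then a weighted sum of per-factor FR rasters,
# with factor weights from AHP, combined by WLC in the GIS.
```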

Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri

Procedia PDF Downloads 67
535 Assessment of Environmental Quality of an Urban Setting

Authors: Namrata Khatri

Abstract:

The rapid growth of cities is transforming the urban environment and posing significant challenges for environmental quality. This study examines the urban environment of Belagavi in Karnataka, India, using geostatistical methods to assess the spatial pattern and land use distribution of the city and to evaluate the quality of the urban environment. The study is driven by the necessity of assessing the environmental impact of urbanisation. Satellite data were utilised to derive information on land use and land cover. The investigation revealed that land use had changed significantly over time, with a drop in vegetation cover and an increase in built-up areas. High-resolution satellite data were also utilised to map the city's open areas and gardens. GIS-based analysis was used to assess public green space accessibility and to identify regions with inadequate waste management practices; the findings revealed that garbage collection and disposal techniques in certain areas of the city need to be improved. Moreover, the study evaluated the city's thermal environment using Landsat 8 land surface temperature (LST) data. The investigation found that built-up regions had higher LST values than green areas, pointing to the city's urban heat island (UHI) effect. The study's conclusions have far-reaching ramifications for urban planners and policymakers in Belagavi and other similar cities. The findings may be utilised to create sustainable urban planning strategies that address the environmental effects of urbanisation while also improving the quality of life of city dwellers. Satellite data and high-resolution satellite images were gathered for the study, and remote sensing and GIS tools were utilised to process and analyse the data. Ground-truthing surveys were also carried out to confirm the accuracy of the remote sensing and GIS-based data. Overall, this study provides a complete assessment of Belagavi's environmental quality and emphasises the potential of remote sensing and geographic information system (GIS) approaches in environmental assessment and management.
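
The thermal-environment comparison reduces to contrasting mean LST between land-cover classes. A minimal sketch with stand-in rasters (the actual analysis would use co-registered Landsat 8 LST and classified land-cover grids):

```python
# Urban heat island sketch: mean LST difference between built-up and green.
import numpy as np

lst = np.random.normal(38, 3, (100, 100))         # toy LST raster, deg C
landcover = np.random.choice([0, 1], (100, 100))  # 0 = green, 1 = built-up

uhi_intensity = lst[landcover == 1].mean() - lst[landcover == 0].mean()
print(f"built-up minus green mean LST: {uhi_intensity:.2f} deg C")
# A positive difference is the surface urban heat island signal.
```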

Keywords: environmental quality, UEQ, remote sensing, GIS

Procedia PDF Downloads 71
534 Memory Retrieval and Implicit Prosody during Reading: Anaphora Resolution by L1 and L2 Speakers of English

Authors: Duong Thuy Nguyen, Giulia Bencini

Abstract:

The present study examined the effects of structural and prosodic factors on the computation of antecedent-reflexive relationships and on sentence comprehension in native English speakers (L1) and Vietnamese-English bilinguals (L2). Participants read sentences presented on a computer screen in one of three presentation formats aimed at manipulating prosodic parsing: word-by-word (RSVP), phrase-segment (self-paced), or whole-sentence (self-paced), and then completed a grammaticality rating and a comprehension task (following Pratt & Fernandez, 2016). The design crossed three factors: syntactic structure (simple; complex), grammaticality (target-match; target-mismatch), and presentation format. An example item is provided in (1): (1) The actress that (Mary/John) interviewed at the awards ceremony (about two years ago/organized outside the theater) described (herself/himself) as an extreme workaholic. Results showed that overall, both L1 and L2 speakers made use of a good-enough processing strategy at the expense of more detailed syntactic analyses. L1 and L2 speakers' comprehension and grammaticality judgements were negatively affected by the most prosodically disruptive condition (word-by-word). However, the two groups differed in the other two reading conditions: for L1 speakers, the whole-sentence and phrase-segment formats were both facilitative in the grammaticality rating and comprehension tasks; for L2 speakers, compared with the whole-sentence condition, the phrase-segment paradigm did not significantly improve accuracy or comprehension. These findings are consistent with those of Pratt & Fernandez (2016), who found a similar pattern of results in the processing of subject-verb agreement relations using the same experimental paradigm and prosodic manipulation with L1 English and L2 English-Spanish speakers. The results provide further support for a good-enough cue model of sentence processing that integrates cue-based retrieval and implicit prosodic parsing (Pratt & Fernandez, 2016) and highlight similarities and differences between L1 and L2 sentence processing and comprehension.

Keywords: anaphora resolution, bilingualism, implicit prosody, sentence processing

Procedia PDF Downloads 144
533 A Damage-Plasticity Concrete Model for Damage Modeling of Reinforced Concrete Structures

Authors: Thanh N. Do

Abstract:

This paper addresses the modeling of two critical behaviors of concrete material in reinforced concrete components: (1) the increase in strength and ductility due to confining stresses from surrounding transverse steel reinforcements, and (2) the progressive deterioration in strength and stiffness due to high strain and/or cyclic loading. To improve the state-of-the-art, the author presents a new 3D constitutive model of concrete material based on plasticity and continuum damage mechanics theory to simulate both the confinement effect and the strength deterioration in reinforced concrete components. The model defines a yield function of the stress invariants and a compressive damage threshold based on the level of confining stresses to automatically capture the increase in strength and ductility when subjected to high compressive stresses. The model introduces two damage variables to describe the strength and stiffness deterioration under tensile and compressive stress states. The damage formulation characterizes well the degrading behavior of concrete material, including the nonsymmetric strength softening in tension and compression, as well as the progressive strength and stiffness degradation under primary and follower load cycles. The proposed damage model is implemented in a general purpose finite element analysis program allowing an extensive set of numerical simulations to assess its ability to capture the confinement effect and the degradation of the load-carrying capacity and stiffness of structural elements. It is validated against a collection of experimental data of the hysteretic behavior of reinforced concrete columns and shear walls under different load histories. These correlation studies demonstrate the ability of the model to describe vastly different hysteretic behaviors with a relatively consistent set of parameters. The model shows excellent consistency in response determination with very good accuracy. Its numerical robustness and computational efficiency are also very good and will be further assessed with large-scale simulations of structural systems.
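
To make the damage-plasticity coupling concrete, a deliberately minimal one-dimensional analogue is sketched below: an elastic-plastic predictor in effective stress, degraded by a scalar damage variable driven by a strain measure. The paper's 3D model (stress-invariant yield surface, confinement-dependent damage threshold, separate tension and compression damage variables) is far richer than this sketch, and all parameter values here are illustrative:

```python
# 1-D damage-plasticity sketch: nominal stress = (1 - d) * effective stress.
import numpy as np

E, sy, H = 30e3, 30.0, 1e3        # MPa: modulus, yield stress, hardening
eps0, beta = 2e-3, 300.0          # damage threshold strain and growth rate

def update(eps, eps_p, kappa):
    """One strain-driven step (monotonic loading assumed); returns
    (nominal stress, plastic strain, hardening variable)."""
    sig_tr = E * (eps - eps_p)                    # elastic predictor
    f = abs(sig_tr) - (sy + H * kappa)            # yield check
    if f > 0:                                     # plastic corrector
        dgamma = f / (E + H)
        eps_p += dgamma * np.sign(sig_tr)
        kappa += dgamma
    kappa_d = max(abs(eps), eps0)                 # damage driver
    d = 1.0 - np.exp(-beta * (kappa_d - eps0))    # scalar damage in [0, 1)
    return (1.0 - d) * E * (eps - eps_p), eps_p, kappa

stress, ep, k = [], 0.0, 0.0
for e in np.linspace(0, 0.01, 200):               # monotonic strain ramp
    s, ep, k = update(e, ep, k)
    stress.append(s)                              # softening appears as d grows
```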

Keywords: concrete, damage-plasticity, shear wall, confinement

Procedia PDF Downloads 162
532 Analyzing Competition in Public Construction Projects

Authors: Khaled Hesham Hyari, Amjad Almani

Abstract:

Construction projects in the public sector are commonly awarded through competitive bidding. In the last decade, the construction project environment in the Middle East went through many changes caused by different factors, including the economic crisis, delays in monthly payments, international competition, and a reduced number of projects. These factors had a great impact on the bidding behavior of contractors and on their pricing strategies. This paper examines the characteristics of competition in public construction projects through an analysis of contractors' bidding results in public construction projects over a period of six years (2006-2011) in Jordan. The analyzed projects include all categories, such as infrastructure, buildings, transportation, and engineering services (design and supervision contracts). Data for the projects were obtained from the General Tenders Directorate in Jordan and cover 462 projects. The analysis includes studying the bid spread in all projects, as an indication of the level of competition in the analyzed bids, together with the factors that affect bid spread, such as the number of bidders, the value of the project, the project category, and the year. It also studies the signal-to-noise ratio in all projects as an indication of the accuracy of the cost estimates prepared by competing bidders and of the bidders' evaluation of project risks, including the relationship between the signal-to-noise ratio and parameters such as project category, number of bidders, and changes over the years. Moreover, the analysis determines bidders' aggressiveness in bidding as an indication of the level of competition in such projects. This was performed by determining the pack price, which can be considered the true value of the project, and comparing it with the lowest bid submitted for each project. The analysis should prove useful to owners in understanding the bidding behavior of contractors and in pointing out areas of bidding documents that need improvement, and to contractors in understanding the competitive bidding environment and improving their bidding strategies to maximize their success rate in obtaining contracts.
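
The two competition measures lend themselves to a direct sketch. Here the bid spread is taken as the gap between the two lowest bids, and the signal-to-noise ratio as the mean of the bids over their standard deviation, one common definition; the paper's exact definitions are not stated, and the bid values are hypothetical:

```python
# Bid spread and signal-to-noise ratio for one project's bid set.
import numpy as np

def bid_measures(bids):
    b = np.sort(np.asarray(bids, dtype=float))
    spread = (b[1] - b[0]) / b[0]            # low-bid spread, as a fraction
    snr = b.mean() / b.std(ddof=1)           # tight bids -> high SNR
    return spread, snr

spread, snr = bid_measures([1.02e6, 1.06e6, 1.10e6, 1.21e6, 1.35e6])
print(f"spread {spread:.1%}, signal-to-noise {snr:.1f}")
```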

Keywords: construction projects, competitive bidding, public construction, competition

Procedia PDF Downloads 324
531 Selective Effect of Occipital Alpha Transcranial Alternating Current Stimulation in Perception and Working Memory

Authors: Andreina Giustiniani, Massimiliano Oliveri

Abstract:

Rhythmic activity in different frequency bands could subserve distinct functional roles during visual perception and visual mental imagery. In particular, alpha band activity is thought to play a role in the active inhibition of both task-irrelevant regions and the processing of non-relevant information. In the present blind, placebo-controlled study, we applied alpha transcranial alternating current stimulation (tACS) over the occipital cortex during both a basic visual perception task and a visual working memory task. To understand whether the role of alpha is more related to a general inhibition of distractors or to an inhibition of task-irrelevant regions, we added a non-visual distraction to both tasks. Sixteen adult volunteers performed both a simple perception task and a working memory task during 10 Hz tACS. The electrodes were placed over the left and right occipital cortex, and the current intensity was 1 mA peak-to-baseline. Sham stimulation was chosen as the control condition; in order to elicit a skin sensation similar to the real stimulation, electrical stimulation was applied for short periods (30 s) at the beginning of the session and then turned off. The tasks were split into two sets: one set included distractors and the other did not. Motor interference was added by changing the answer key after subjects completed the first set of trials. The results show that alpha tACS improves working memory only when no motor distractors are added, suggesting a role of alpha tACS in inhibiting non-relevant regions rather than in a general inhibition of distractors. Additionally, we found that alpha tACS does not affect accuracy or hit rates during the visual perception task. These results suggest that alpha activity in the occipital cortex plays different roles in perception and working memory: it could optimize performance in tasks in which attention is internally directed, as in this working memory paradigm, but only in the absence of motor distraction. Moreover, alpha tACS improves working memory performance by inhibiting task-irrelevant regions, while it does not affect perception.

Keywords: alpha activity, interference, perception, working memory

Procedia PDF Downloads 241
530 A Preliminary Kinematic Comparison of Vive and Vicon Systems for the Accurate Tracking of Lumbar Motion

Authors: Yaghoubi N., Moore Z., Van Der Veen S. M., Pidcoe P. E., Thomas J. S., Dexheimer B.

Abstract:

Optoelectronic 3D motion capture systems, such as the Vicon kinematic system, are widely utilized in biomedical research to track joint motion. These systems are considered powerful and accurate measurement tools with <2 mm average error. However, these systems are costly and may be difficult to implement and utilize in a clinical setting. 3D virtual reality (VR) is gaining popularity as an affordable and accessible tool to investigate motor control and perception in a controlled, immersive environment. The HTC Vive VR system includes puck-style trackers that seamlessly integrate into its VR environments. These affordable, wireless, lightweight trackers may be more feasible for clinical kinematic data collection. However, the accuracy of HTC Vive Trackers (3.0), when compared to optoelectronic 3D motion capture systems, remains unclear. In this preliminary study, we compared the HTC Vive Tracker system to a Vicon kinematic system in a simulated lumbar flexion task. A 6-DOF robot arm (SCORBOT ER VII, Eshed Robotec/RoboGroup, Rosh Ha’Ayin, Israel) completed various reaching movements to mimic increasing levels of hip flexion (15°, 30°, 45°). Light reflective markers, along with one HTC Vive Tracker (3.0), were placed on the rigid segment separating the elbow and shoulder of the robot. We compared position measures simultaneously collected from both systems. Our preliminary analysis shows no significant differences between the Vicon motion capture system and the HTC Vive tracker in the Z axis, regardless of hip flexion. In the X axis, we found no significant differences between the two systems at 15 degrees of hip flexion but minimal differences at 30 and 45 degrees, ranging from .047 cm ± .02 SE (p = .03) at 30 degrees hip flexion to .194 cm ± .024 SE (p < .0001) at 45 degrees of hip flexion. In the Y axis, we found a minimal difference for 15 degrees of hip flexion only (.743 cm ± .275 SE; p = .007). This preliminary analysis shows that the HTC Vive Tracker may be an appropriate, affordable option for gross motor motion capture when the Vicon system is not available, such as in clinical settings. Further research is needed to compare these two motion capture systems in different body poses and for different body segments.
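
The per-axis comparison reported above amounts to aligning the two simultaneously recorded traces and testing their difference; the sketch below uses synthetic stand-ins for the robot-arm trials:

```python
# Sketch: compare simultaneously recorded Vive and Vicon position traces.
import numpy as np
from scipy import stats

vicon = np.cumsum(np.random.randn(500)) * 0.01      # cm, toy trajectory
vive = vicon + np.random.normal(0.05, 0.02, 500)    # small systematic offset

diff = vive - vicon
rmse = np.sqrt(np.mean(diff**2))
t, p = stats.ttest_1samp(diff, 0.0)                 # mean offset vs zero
print(f"RMSE {rmse:.3f} cm, mean offset {diff.mean():.3f} cm (p={p:.3g})")
```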

Keywords: lumbar, Vive tracker, Vicon system, 3D motion, ROM

Procedia PDF Downloads 89
529 Prediction of Ionic Liquid Densities Using a Corresponding State Correlation

Authors: Khashayar Nasrifar

Abstract:

Ionic liquids (ILs) exhibit distinctive properties, exemplified by extremely low vapor pressure and high thermal stability. The properties of ILs can be tailored by proper selection of cations and anions. As such, ILs are appealing as potential solvents to substitute for traditional solvents with high vapor pressure. One of the IL properties required in chemical and process design is density. In developing corresponding-states liquid density correlations, the scaling hypothesis is often used. The hypothesis expresses the temperature dependence of saturated liquid densities near the vapor-liquid critical point as a function of reduced temperature. Extending this temperature dependence, several successful correlations were developed to accurately correlate the densities of normal liquids from the triple point to the critical point. With appropriate mixing rules, the liquid density correlations extend to liquid mixtures as well. ILs are not molecular liquids, nor are they classified among normal liquids; they are also often used under conditions far from equilibrium. Nevertheless, in calculating the properties of ILs, corresponding-states correlations would be useful when no experimental data are available. With the well-known generalized saturated liquid density correlations, the accuracy in predicting the density of ILs is not good: an average error of 4-5% should be expected. In this work, a data bank was compiled, and a simplified, concise corresponding-states saturated liquid density correlation is proposed by phenomenologically modifying the reduced temperature using the temperature dependence of the interaction parameter of the Soave-Redlich-Kwong equation of state. This modification improves the temperature dependence of the developed correlation. Parametrization was then performed to optimize the three global parameters of the correlation. The correlation was applied to the ILs in the data bank with satisfactory predictions: at 0.1 MPa, it reproduced IL densities with an average uncertainty of around 2%, with no adjustable parameters; only the critical temperature, critical volume, and acentric factor are required. Methods to extend the predictions to higher pressures (up to 200 MPa) were also devised. Compared with other methods, this correlation was found to be more accurate. The work also presents the chronological order of development of such correlations for ILs, with their pros and cons.
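
In the spirit of the abstract, a Rackett-type saturated-volume form whose reduced temperature is modified through the SRK alpha-function temperature dependence can be sketched as follows. The functional form, the 0.25 constant, and the example critical constants are assumptions for illustration only; the paper's actual correlation and its three fitted global parameters are not reproduced here:

```python
# Illustrative corresponding-states density sketch with an SRK-modified Tr.
import math

def srk_m(omega):
    return 0.480 + 1.574 * omega - 0.176 * omega**2

def density(T, Tc, Vc, omega, M):
    Tr = T / Tc
    alpha = (1 + srk_m(omega) * (1 - math.sqrt(Tr)))**2   # SRK alpha(T)
    Tr_mod = Tr / alpha                                   # modified Tr
    V = Vc * 0.25 ** ((1 - Tr_mod) ** (2 / 7))            # Rackett-type form
    return M / V                                          # kg/m3 in SI units

# A [C4mim][PF6]-like IL with assumed critical constants (SI units):
print(density(298.15, 719.4, 7.79e-4, 0.755, 0.28418))    # ~1.36e3 kg/m3
```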

Keywords: correlation, corresponding state principle, ionic liquid, density

Procedia PDF Downloads 119
528 Effects of Partial Sleep Deprivation on Prefrontal Cognitive Functions in Adolescents

Authors: Nurcihan Kiris

Abstract:

Restricted sleep is common in young adults and adolescents, yet the few objective studies of the effect of sleep deprivation on cognitive performance have not produced clear results. In particular, the effect of sleep deprivation on cognitive functions associated with the frontal lobe, such as attention, executive functions, and working memory, is not well known. The aim of this study is to investigate experimentally the effect of partial sleep deprivation in adolescents on frontal lobe cognitive tasks involving working memory, strategic thinking, simple attention, continuous attention, executive functions, and cognitive flexibility. Subjects were recruited from volunteer students of Cukurova University. Eighteen adolescents underwent four consecutive nights of monitored sleep restriction (6–6.5 hr/night) and four nights of sleep extension (10–10.5 hr/night), in counterbalanced order and separated by a washout period. Following each sleep period, cognitive performance was assessed at a fixed morning time using a computerized neuropsychological battery based on frontal lobe function tasks, a timed test providing both accuracy and reaction time outcome measures. Of the cognitive tasks, only spatial working memory performance was found to be statistically lower in the restricted sleep condition than in the extended sleep condition. There was no significant difference in the performance of the tasks evaluating simple attention, continuous attention, executive functions, and cognitive flexibility. It appears that the spatial working memory and strategic thinking skills of adolescents may be particularly susceptible to sleep deprivation. Conversely, adolescents are predicted to perform optimally under ideal sleep conditions, especially in circumstances requiring the short-term storage of visual information, processing of the stored information, and strategic thinking. The findings of this study may also point to possible negative functional effects of partial sleep deprivation on the processing of academic, social, and emotional inputs in adolescents. Acknowledgment: This research was supported by the Cukurova University Scientific Research Projects Unit.

Keywords: attention, cognitive functions, sleep deprivation, working memory

Procedia PDF Downloads 136
527 Application of Multilayer Perceptron and Markov Chain Analysis Based Hybrid-Approach for Predicting and Monitoring the Pattern of LULC Using Random Forest Classification in Jhelum District, Punjab, Pakistan

Authors: Basit Aftab, Zhichao Wang, Feng Zhongke

Abstract:

Land Use and Land Cover Change (LULCC) is a critical environmental issue that has significant effects on biodiversity, ecosystem services, and climate change. This study examines the spatiotemporal dynamics of land use and land cover (LULC) across a three-decade period (1992–2022) in a district area. The goal is to support sustainable land management and urban planning by combining remote sensing, GIS data, and observations from the Landsat 5 and 8 satellites to provide precise predictions of the trajectory of urban sprawl. To forecast LULCC patterns, this study proposes a hybrid strategy that combines Random Forest classification with Multilayer Perceptron (MLP) and Markov Chain analysis. To predict the dynamics of LULC change for the year 2035, a hybrid technique based on Multilayer Perceptron and Markov Chain Model analysis (MLP-MCA) was employed. The area of developed land has increased significantly, while bare land, vegetation, and forest cover have all decreased, as the principal land types have changed under population growth and economic expansion. The study also found that between 1998 and 2023 the built-up area increased by 468 km² through the replacement of natural land cover. Urbanized land is estimated to reach 25.04% of the study area by 2035. The performance of the model was confirmed with an overall accuracy of 90% and a kappa coefficient of around 0.89. It is important to use advanced predictive models of this kind to guide sustainable urban development strategies; the model provides valuable insights for policymakers, land managers, and researchers to support sustainable land use planning, conservation efforts, and climate change mitigation strategies.
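
The Markov-chain half of the MLP-MCA hybrid projects class areas forward with a transition-probability matrix. A sketch with a hypothetical matrix and class shares:

```python
# Markov projection sketch: class shares in 2035 from a yearly transition matrix.
import numpy as np

classes = ["built-up", "vegetation", "bare", "water"]
state_2022 = np.array([0.20, 0.45, 0.30, 0.05])    # area shares (toy values)

P = np.array([[0.92, 0.02, 0.05, 0.01],            # rows: from, cols: to
              [0.10, 0.80, 0.09, 0.01],
              [0.15, 0.10, 0.74, 0.01],
              [0.02, 0.02, 0.02, 0.94]])

steps = 13                                          # 2022 -> 2035, yearly steps
state_2035 = state_2022 @ np.linalg.matrix_power(P, steps)
print(dict(zip(classes, state_2035.round(3))))
# In the full hybrid, the MLP then allocates these projected quantities to
# specific pixels via per-class transition-potential maps.
```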

Keywords: land use land cover, Markov chain model, multi-layer perceptron, random forest, sustainable land, remote sensing

Procedia PDF Downloads 14
526 Importance of Detecting Malingering Patients in Clinical Setting

Authors: Sakshi Chopra, Harsimarpreet Kaur, Ashima Nehra

Abstract:

Objectives: Malingering is fabricating or exaggerating the symptoms of mental or physical disorders for a variety of secondary gains or motives, which may include financial compensation, avoiding work, receiving lighter criminal sentences, or simply attracting attention or sympathy. Malingering is distinct from somatization disorder and factitious disorder. Its prevalence is unknown and difficult to determine; in forensic populations, estimates reach up to 17% of cases, but the accuracy of such estimates is questionable because successful malingerers go undetected and are thus not counted. Methods: This is the case study of a 58-year-old, right-handed graduate, premorbidly working in a national company, with a reported history of stroke leading to head injury, cerebral infarction/facial palsy, and dementia. He was referred for disability certification so that his job position could be transferred to his son, as he could reportedly no longer work. A series of neuropsychological tests was administered. Results: He presented with a mental age of <2.5 years; overall social adaptive functioning of <20, indicating profound mental retardation; a social age of less than 1 year in the abilities of self-help, eating, dressing, locomotion, occupation, communication, self-direction, and socialization; severely impaired verbal and performance ability; 96% impairment in activities of daily living; and an indication of very severe depression. Given the inconsistent and fluctuating medical findings and the differing problem descriptions given to the health professionals forming his disability board, it was concluded that this patient was malingering. Conclusions: Even though it is easily defined, malingering can be very challenging to diagnose. Cases of malingering impose a substantial economic burden on the health care system, while false attribution of malingering imposes a substantial burden of suffering on a significant proportion of the patient population. Timely, tactful diagnosis and management can help ease this burden on the healthcare system. Malingering can be detected in the clinical setting only by trained mental health professionals.

Keywords: disability, India, malingering, neuropsychological assessment

Procedia PDF Downloads 409
525 Evaluation of Different Anticoagulant Effects on Flow Properties of Human Blood Using Falling Needle Rheometer

Authors: Hiroki Tsuneda, Takamasa Suzuki, Hideki Yamamoto, Kimito Kawamura, Eiji Tamura, Katharina Wochner, Roberto Plasenzotti

Abstract:

The flow properties of human blood are important factors in the prevention of circulatory conditions such as high blood pressure, diabetes mellitus, and cardiac infarction. However, measuring the flow properties of human blood, especially blood viscosity, is not easy because of coagulation and aggregation after a sample is taken from the blood vessel. In practice, anticoagulants are added to human blood samples to avoid solidification. The anticoagulant used in a blood test is chosen according to the purpose of the test, because each anticoagulant acts on blood through a different mechanism. Consequently, evaluating blood properties measured with different anticoagulants is difficult, and it is therefore important to clarify the effect of the anticoagulant on the measured flow properties. In previous work, a compact falling needle rheometer (FNR) was developed to measure the flow properties of human blood, such as the flow curve and the apparent viscosity; it was found that the FNR system can serve as a rheometer or viscometer under various experimental conditions, for not only human blood but also the blood of other mammals. In this study, measurements of human blood viscosity with different anticoagulants (EDTA and heparin) were carried out using the newly developed FNR system. The accuracy of the viscometry was tested using standard calibration liquids (JS-10, JS-20), and the observed data agree with the reference data to within about 1.0% at 310 K. The flow curves of six male and female subjects were measured with each anticoagulant. EDTA and heparin were chosen as the anticoagulants; heparin inhibits the coagulation of human blood by activating antithrombin. To examine the effect of the anticoagulant on blood viscosity, the flow curve was measured at high shear rates (>350 s⁻¹), and the apparent viscosity of each subject was determined with each anticoagulant. The apparent viscosity of human blood with heparin was 2%-9% higher than that with EDTA; however, the magnitude of the difference between the two anticoagulants varied from subject to subject. Further discussion requires consideration of other physical properties, such as the cellular and plasma components.
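
One common way to reduce such flow-curve data is to fit the Casson model often used for whole blood and report the high-shear apparent viscosity. The abstract does not state that a Casson fit was used, so this is illustrative, and the arrays below are synthetic stand-ins for FNR measurements:

```python
# Casson fit sketch: sqrt(tau) = sqrt(tau_y) + sqrt(eta_c) * sqrt(gamma).
import numpy as np

gamma = np.linspace(50, 500, 20)                    # shear rate, 1/s
tau = (np.sqrt(0.005) + np.sqrt(0.004 * gamma))**2  # toy Casson data, Pa

# Linear in sqrt-space: slope = sqrt(eta_c), intercept = sqrt(tau_y)
slope, intercept = np.polyfit(np.sqrt(gamma), np.sqrt(tau), 1)
tau_y, eta_c = intercept**2, slope**2
print(f"yield stress {tau_y*1000:.2f} mPa, Casson viscosity {eta_c*1000:.2f} mPa·s")

apparent = tau / gamma                              # apparent viscosity curve
print(f"apparent viscosity at {gamma[-1]:.0f} 1/s: {apparent[-1]*1000:.2f} mPa·s")
```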

Keywords: falling-needle rheometer, human blood, viscosity, anticoagulant

Procedia PDF Downloads 433
524 Improving the Uniformity of Electrostatic Meter’s Spatial Sensitivity

Authors: Mohamed Abdalla, Ruixue Cheng, Jianyong Zhang

Abstract:

In pneumatic conveying, solids are mixed with air or gas. In industries such as coal-fired power stations, blast furnaces for iron making, and cement and flour processing, the mass flow rate of solids needs to be monitored or controlled. However, current gas-solids two-phase flow measurement techniques are not as accurate as the flow meters available for single-phase flow. One of the problems multi-phase flow meters face is that flow profiles vary with measurement location and with the conditions created by pipe routing, bends, elbows, and other restriction devices in the conveying system, as well as with conveying velocity and concentration. To measure solids flow rate or concentration with a non-even distribution of solids in the gas, a uniform spatial sensitivity is required of a multi-phase flow meter; however, not many meters inherently have this property. The circular electrostatic meter is a popular choice for gas-solids flow measurement owing to its high sensitivity to flow, robust construction, low installation cost, and non-intrusive nature, but such meters have an inherently non-uniform spatial sensitivity. This paper first analyses the spatial sensitivity of the circular electrostatic meter in general and then, by combining the sensitivity to a single particle with the sensing volume for a given electrode geometry, reveals for the first time how a circular electrostatic meter responds to a roping flow stream, which is much more complex than currently believed. The paper presents recent research findings on the spatial sensitivity investigation at the University of Teesside, based on finite element analysis using Ansys Fluent software, including time and frequency domain characteristics and the effect of electrode geometry. The simulation results are compared with experimental results obtained on a large-scale (14-inch diameter) rig. The purpose of this research is to pave the way towards a uniform spatial sensitivity for the circular electrostatic sensor by means of compensation, so as to improve the overall accuracy of gas-solids flow measurement.

Keywords: spatial sensitivity, electrostatic sensor, pneumatic conveying, Ansys Fluent software

Procedia PDF Downloads 358
523 Classification of Forest Types Using Remote Sensing and Self-Organizing Maps

Authors: Wanderson Goncalves e Goncalves, José Alberto Silva de Sá

Abstract:

Human actions are a threat to the balance and conservation of the Amazon forest; therefore, environmental monitoring services play an important role in the preservation and maintenance of this environment. This study classified forest types using data from a forest inventory provided by the 'Florestal e da Biodiversidade do Estado do Pará' (IDEFLOR-BIO), located between the municipalities of Santarém, Juruti, and Aveiro in the state of Pará, Brazil, covering an area of approximately 600,000 hectares; bands 3, 4, and 5 of a Landsat TM satellite image; and Self-Organizing Maps. The information from the satellite image was extracted using QGIS 2.8.1 Wien and used as the database for training the neural network. The midpoint of each forest inventory sample was linked to the image, and the digital numbers of the corresponding pixels were extracted, composing the database that fed the training and testing of the classifier. The neural network was trained to classify two forest types in the Mamuru Arapiuns glebes of Pará State: Rain Forest of Lowland Emerging Canopy (Dbe) and Rain Forest of Lowland Emerging Canopy plus Open with palm trees (Dbe + Abp). The training data set contained 400 examples, 200 for each class, and the test data set contained 100 examples, 50 for each class, so the total data set consisted of 500 examples. The classifier was built in the Orange Data Mining 2.7 software and evaluated in terms of confusion matrix indicators, with satisfactory results: a global accuracy of 89%, a kappa coefficient of 0.78, and an F1 score of 0.88. The efficiency of the classifier was also evaluated with a ROC (receiver operating characteristic) plot, which gave results close to ideal, showing it to be a very good classifier and demonstrating the potential of this methodology to support environmental monitoring services, particularly in anthropogenic areas of the Amazon.
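
The SOM classification step can be sketched with the MiniSom library: train a small map on the band digital numbers, then label each neuron by majority vote of the training samples mapped to it. The map size and arrays here are placeholders, not the authors' Orange workflow:

```python
# SOM classification sketch with majority-vote neuron labelling.
import numpy as np
from collections import Counter, defaultdict
from minisom import MiniSom

X_train = np.random.rand(400, 3)                 # bands 3, 4, 5 per sample
y_train = ["Dbe"] * 200 + ["Dbe+Abp"] * 200

som = MiniSom(7, 7, 3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X_train, 5000)

votes = defaultdict(Counter)
for x, label in zip(X_train, y_train):
    votes[som.winner(x)][label] += 1             # best-matching unit per sample
neuron_label = {w: c.most_common(1)[0][0] for w, c in votes.items()}

def classify(x):
    return neuron_label.get(som.winner(x), "unlabelled")
```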

Keywords: artificial neural network, computational intelligence, pattern recognition, unsupervised learning

Procedia PDF Downloads 353
522 Correlation Study between Clinical and Radiological Findings in Knee Osteoarthritis

Authors: Nabil A. A. Mohamed, Alaa A. A. Balbaa, Khaled E. Ayad

Abstract:

Osteoarthritis (OA) of the knee is the most common form of arthritis and leads to more activity limitations (e.g., disability in walking and stair climbing) than any other disease, especially in the elderly. Recently, impaired proprioceptive accuracy of the knee has been proposed as a local factor in the onset and progression of radiographic knee OA (ROA). Purpose: To compare the clinical and radiological findings of healthy subjects with those of knee OA patients, and to determine whether there is a correlation between the clinical and radiological findings in patients with knee OA. Subjects: Fifty-one patients of both genders, aged 35-70 years, diagnosed with unilateral or bilateral knee OA and without any previous history of knee trauma or surgery, and twenty-one normal subjects aged 35-68 years. Methods: Peak torque/body weight (PT/BW) was recorded from the knee extensors in the isokinetic isometric mode at an angle of 45°. The absolute angular error (AAE) was recorded at 45° and 30° as a measure of joint position sense (JPS). Anteroposterior (AP) plain X-rays were taken in a standing semi-flexed knee position, and the average scores of the Timed Up and Go test (TUG) and the WOMAC were recorded as measures of knee pain, stiffness, and function. Comparison between the mean values of the variables in the two groups was performed using the unpaired student t test, with a P value of 0.05 or less considered significant. Results: There were significant differences between the experimental and control groups in all studied variables except the AAE at 30°. There was no significant correlation between the clinical findings (pain, function, muscle strength, and proprioception) and the severity of arthritic changes on X-ray. Conclusion: There was a significant difference between the two groups in all studied parameters (WOMAC, functional level, quadriceps muscle strength, and joint proprioception). The study does not support reliance on radiological findings in the management of knee OA, as the radiological features do not necessarily indicate the level of structural damage in patients with knee OA, and clinical features should be considered in the treatment plan.
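The statistics named above (unpaired t test between groups, correlation between clinical and radiological scores) reduce to two standard calls; the sketch below shows them in Python with SciPy on placeholder arrays, since the study's raw data are not given. The group sizes match the abstract; every value, and the use of a numeric radiographic severity grade, is an illustrative assumption.

```python
# Minimal sketch of the study's statistics on invented placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pt_bw_oa = rng.normal(1.2, 0.3, 51)     # peak torque / body weight, OA group
pt_bw_ctrl = rng.normal(1.8, 0.3, 21)   # control group

# Unpaired student t test between the two groups
t, p = stats.ttest_ind(pt_bw_oa, pt_bw_ctrl)
print(f"t = {t:.2f}, p = {p:.4f}  (significant if p <= 0.05)")

# Pearson correlation between a clinical score and radiographic severity
womac = rng.normal(50, 15, 51)          # WOMAC score per OA patient
severity = rng.integers(1, 5, 51)       # assumed numeric X-ray severity grade
r, p_corr = stats.pearsonr(womac, severity)
print(f"r = {r:.2f}, p = {p_corr:.4f}")
```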

Keywords: joint position sense, peak torque, proprioception, radiological knee osteoarthritis

Procedia PDF Downloads 294
521 Designing of Induction Motor Efficiency Monitoring System

Authors: Ali Mamizadeh, Ires Iskender, Saeid Aghaei

Abstract:

Energy is one of the world's high-priority issues, and energy demand is rapidly increasing with the growth of population and industry. The world's usable energy sources will be insufficient to meet the need for energy, so the efficient and economical use of energy sources is gaining importance. Surveys of electricity-consuming machines show that electrical machines consume about 40% of the total electrical energy used by electrical devices, and 96% of this consumption belongs to induction motors. Induction motors are the workhorses of industry, with very large application areas in industrial and urban systems such as water pumping and distribution, and the steel and paper industries. Monitoring and control of these motors have an important effect on operating performance, driver selection, and the replacement strategy management of electrical machines. This study presents a sensorless system for monitoring and calculating the efficiency of induction motors. The design is based on the IEEE equivalent circuit: the terminal current and voltage of the motor, together with the nameplate information, are used to calculate the motor's losses accurately and hence its input and output power. The efficiency of the induction motor is monitored online without disconnecting the motor from the driver and without adding any connection at the motor terminal box. The proposed system measures efficiency accurately by including all losses, without using a torque meter or speed sensor. It uses an embedded architecture and does not need to be connected to a computer to measure and log data. Conclusions regarding the efficiency, the accuracy, and the technical and economic benefits of the proposed method are presented. Experimental verification was obtained on a 3-phase, 1.1 kW, 2-pole induction motor. The proposed method can be used for optimal control of induction motors, efficiency monitoring, and motor replacement strategy.
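A loss-segregation calculation of the kind the abstract describes can be sketched as follows; this is a generic equivalent-circuit estimate, not the authors' algorithm, and the resistance, slip, and fixed-loss values are assumed rather than taken from their 1.1 kW test motor.

```python
# Hedged sketch: efficiency from terminal measurements and assumed
# equivalent-circuit parameters (illustrative values throughout).
import math

V_LINE, I_LINE, PF = 400.0, 2.4, 0.82      # measured terminal quantities
R_STATOR = 6.1                              # stator phase resistance, ohms
SLIP = 0.04                                 # from nameplate vs. sync speed
P_CORE, P_FW, P_STRAY = 40.0, 15.0, 10.0    # assumed fixed losses, watts

p_in = math.sqrt(3) * V_LINE * I_LINE * PF        # three-phase input power
p_scl = 3 * I_LINE**2 * R_STATOR                  # stator copper loss
p_airgap = p_in - p_scl - P_CORE                  # power crossing the air gap
p_rcl = SLIP * p_airgap                           # rotor copper loss
p_out = p_airgap - p_rcl - P_FW - P_STRAY         # shaft output power

print(f"input {p_in:.0f} W, output {p_out:.0f} W, "
      f"efficiency {100 * p_out / p_in:.1f} %")
```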

Keywords: induction motor, efficiency, power losses, monitoring, embedded design

Procedia PDF Downloads 338
520 Urinary Volatile Organic Compound Testing in Fast-Track Patients with Suspected Colorectal Cancer

Authors: Godwin Dennison, C. E. Boulind, O. Gould, B. de Lacy Costello, J. Allison, P. White, P. Ewings, A. Wicaksono, N. J. Curtis, A. Pullyblank, D. Jayne, J. A. Covington, N. Ratcliffe, N. K. Francis

Abstract:

Background: Colorectal symptoms are common but only infrequently represent serious pathology, including colorectal cancer (CRC), and a large number of invasive tests are presently performed for reassurance. We investigated the feasibility of urinary volatile organic compound (VOC) testing as a potential triage tool in patients fast-tracked for assessment of possible CRC. Methods: A prospective, multi-centre, observational feasibility study was performed across three sites. Patients referred on NHS fast-track pathways for potential CRC provided a urine sample, which underwent Gas Chromatography Mass Spectrometry (GC-MS), Field Asymmetric Ion Mobility Spectrometry (FAIMS), and Selected Ion Flow Tube Mass Spectrometry (SIFT-MS) analysis. Patients underwent colonoscopy and/or CT colonography and were grouped as CRC, adenomatous polyp(s), or controls to explore the diagnostic accuracy of the VOC output data, supported by an artificial neural network (ANN) model. Results: 558 patients participated, with 23 (4.1%) diagnosed with CRC. 59% of colonoscopies and 86% of CT colonographies showed no abnormalities. Urinary VOC testing was feasible, acceptable to patients, and applicable within the clinical fast-track pathway. GC-MS showed the highest clinical utility for detecting CRC and polyps vs. controls (sensitivity = 0.878, specificity = 0.882, AUROC = 0.884). Conclusion: Urinary VOC testing and analysis are feasible within NHS fast-track CRC pathways. Clinically meaningful differences between patients with cancer, polyps, or no pathology were identified, suggesting that VOC analysis may have future utility as a triage tool. Acknowledgment: Funding: NIHR Research for Patient Benefit grant (ref: PB-PG-0416-20022).
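For readers unfamiliar with how the reported figures are derived, the sketch below computes sensitivity, specificity, and AUROC from a classifier's scores using scikit-learn; the labels and scores are synthetic stand-ins, not the trial's GC-MS data, and the 0.5 decision threshold is an assumption.

```python
# Toy computation of the diagnostic-accuracy metrics reported above.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(2)
y_true = np.concatenate([np.ones(40, dtype=int),      # 1 = CRC/polyp
                         np.zeros(160, dtype=int)])   # 0 = control
scores = np.concatenate([rng.normal(0.8, 0.15, 40),
                         rng.normal(0.3, 0.15, 160)])

auroc = roc_auc_score(y_true, scores)
y_pred = (scores >= 0.5).astype(int)                  # assumed threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity {sensitivity:.3f}, specificity {specificity:.3f}, "
      f"AUROC {auroc:.3f}")
```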

Keywords: colorectal cancer, volatile organic compound, gas chromatography mass spectrometry, field asymmetric ion mobility spectrometry, selected ion flow tube mass spectrometry

Procedia PDF Downloads 85
519 An Advanced Automated Brain Tumor Diagnostics Approach

Authors: Berkan Ural, Arif Eser, Sinan Apaydin

Abstract:

Medical image processing has become a challenging task, and the processing of brain MRI images is one of the more difficult parts of this area. This study proposes a well-defined hybrid approach consisting of tumor detection, extraction, and analysis steps. The approach is built around a computer-aided diagnostics system for identifying and detecting tumor formation in any region of the brain, intended for the early prediction of brain tumors using advanced image processing and probabilistic neural network methods. Advanced noise removal functions and image processing methods such as automatic segmentation and morphological operations are used to detect the brain tumor boundaries and to obtain the important feature parameters of the tumor region. All stages of the approach are implemented in MATLAB. First, the tumor is detected and the tumor area is contoured with a colored circle by the computer-aided diagnostics program. The tumor is then segmented and morphological operations are applied to increase the visibility of the tumor area, while the tumor area and important shape-based features are calculated. Finally, using the probabilistic neural network method and advanced classification steps, the tumor area and the type of the tumor are obtained. A future aim of this study is to detect the severity of lesions across classes of brain tumor through advanced multi-class classification and neural network stages, and to create a user-friendly environment using a MATLAB GUI. In the experimental part of the study, 100 images are used to train the diagnostics system and 100 out-of-sample images are used to test and check the results. The preliminary results demonstrate high classification accuracy for the neural network structure. These results also motivate extending the framework to detect and localize tumors in other organs.
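The denoise-segment-morphology-features pipeline described above can be sketched in a few lines; the study used MATLAB, so the Python/scikit-image version below is only an illustration of the same sequence of steps, run on a synthetic image with a bright blob standing in for a tumor.

```python
# Illustrative pipeline sketch: smoothing -> Otsu threshold -> morphological
# opening -> shape-based region features (inputs to a classifier such as
# the probabilistic neural network named in the abstract).
import numpy as np
from skimage import filters, morphology, measure

rng = np.random.default_rng(3)
img = rng.normal(0.2, 0.05, (128, 128))              # background "tissue"
rr, cc = np.ogrid[:128, :128]
img[(rr - 80) ** 2 + (cc - 50) ** 2 < 15 ** 2] += 0.6  # bright "tumor" blob

smoothed = filters.gaussian(img, sigma=2)             # noise removal
mask = smoothed > filters.threshold_otsu(smoothed)    # automatic segmentation
mask = morphology.binary_opening(mask, morphology.disk(3))  # clean boundary

for region in measure.regionprops(measure.label(mask)):
    print(f"area={region.area}, eccentricity={region.eccentricity:.2f}, "
          f"solidity={region.solidity:.2f}")
```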

Keywords: image processing algorithms, magnetic resonance imaging, neural network, pattern recognition

Procedia PDF Downloads 410
518 Central Vascular Function and Relaxibility in Beta-thalassemia Major Patients vs. Sickle Cell Anemia Patients by Abdominal Aorta and Aortic Root Speckle Tracking Echocardiography

Authors: Gehan Hussein, Hala Agha, Rasha Abdelraof, Marina George, Antoine Fakhri

Abstract:

Background: β-thalassemia major (TM) and sickle cell disease (SCD) are inherited hemoglobin disorders resulting in chronic hemolytic anemia. Cardiovascular involvement is an important cause of morbidity and mortality in these groups of patients, and the border between overt myocardial dysfunction and clinically silent left ventricular (LV) and/or right ventricular (RV) dysfunction is narrow. 3D speckle tracking echocardiography (3D STE) is a novel method for the detection of subclinical myocardial involvement. We aimed to study myocardial involvement in SCD and TM using 3D STE, to compare it with conventional echocardiography, and to correlate it with serum ferritin level and lactate dehydrogenase (LDH). Methodology: Thirty SCD and thirty β-TM patients, aged 4-18 years, were compared with a 30-subject healthy age- and sex-matched control group. Cases underwent clinical examination; laboratory measurement of hemoglobin level, serum ferritin, and LDH; transthoracic color Doppler echocardiography; 3D STE; tissue Doppler echocardiography; and aortic speckle tracking. Results: There was a significant reduction in global longitudinal strain (GLS), global circumferential strain (GCS), and global area strain (GAS) in SCD and TM patients compared with controls (P < 0.001), and aortic speckle tracking was significantly lower in TM and SCD patients than in controls (P < 0.001). LDH was significantly higher in SCD than in both TM and controls, and in SCD (but not TM) it correlated significantly and positively with mitral inflow E (p: 0.022 and 0.072; r: 0.416 and -0.333, respectively), lateral E/E' (p < 0.001 and 0.818; r: 0.618 and -0.044, respectively), and septal E/E' (p: 0.007 and 0.753; r: 0.485 and -0.060, respectively); the correlation between LDH and aortic root speckle tracking was negative (p: 0.681; r: -0.078). LDH predicted vascular dysfunction, represented by aortic root GCS, with a sensitivity of 74%, and aortic root GCS was predictive of LV dysfunction in SCD patients with a sensitivity of 100%. Conclusion: 3D STE detected LV and RV systolic dysfunction despite normal values on conventional echocardiography. SCD patients showed significantly lower RV function and aortic root GCS than TM patients and controls. LDH can be used to screen patients for cardiac dysfunction in SCD, but not in TM.

Keywords: thalassemia major, sickle cell disease, 3D speckle tracking echocardiography, LDH

Procedia PDF Downloads 161
517 Effect of Punch Diameter on Optimal Loading Profiles in Hydromechanical Deep Drawing Process

Authors: Mehmet Halkaci, Ekrem Öztürk, Mevlüt Türköz, H. Selçuk Halkacı

Abstract:

Hydromechanical deep drawing (HMD) is an advanced manufacturing process used to form deep parts in a single forming step. In this process, the sheet metal blank can be drawn deeper by means of fluid pressure acting on the sheet surface in the direction opposite to the punch movement. A high limiting drawing ratio, good surface quality, low springback, and high dimensional accuracy are some of the advantages of this process. The performance of the HMD process is affected by various process parameters such as fluid pressure, blank holder force, punch and die radii, pre-bulging pressure and height, punch diameter, and the friction between sheet and die and between sheet and punch. The fluid pressure and blank holder force are the main loading parameters and significantly affect the formability of the HMD process. The punch diameter also influences the limiting drawing ratio (the ratio of the initial sheet diameter to the punch diameter) of the sheet metal blank. In this research, optimal loading profiles (fluid pressure and blank holder force) were determined for AA 5754-O sheet material through a fuzzy control algorithm developed in a previous study, using the LS-DYNA finite element analysis (FEA) software. In the preceding study, the fuzzy control algorithm was developed using geometrical criteria such as thinning and wrinkling. In order to obtain the desired final part with the developed algorithm for any requested punch diameter, the effect of punch diameter on the loading profiles was investigated separately using a blank thickness of 1 mm, thereby clarifying the practicality of the previously developed fuzzy control algorithm with different punch diameters. The thickness distributions of the sheet metal blank along a curvilinear distance were also compared across the FEA runs with different punch diameters. Consequently, it was found that the use of different punch diameters did not significantly affect the optimal loading profiles.
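The kind of fuzzy adjustment rule the abstract refers to can be sketched in plain Python; the membership breakpoints, rule weights, and the single-output form below are assumptions for illustration, not the authors' LS-DYNA-coupled controller.

```python
# Minimal fuzzy-rule sketch: thinning and wrinkling indicators from a
# simulation increment drive a blank holder force correction. All
# breakpoints and gains are illustrative assumptions.
def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def bhf_correction(thinning, wrinkling):
    """Blank holder force correction (kN) from thinning/wrinkling ratios."""
    thin_high = tri(thinning, 0.10, 0.20, 0.30)      # excessive thinning
    wrinkle_high = tri(wrinkling, 0.02, 0.05, 0.08)  # excessive wrinkling
    # Weighted rules: thinning -> lower the BHF, wrinkling -> raise it
    return -20.0 * thin_high + 20.0 * wrinkle_high

# After each simulation increment, nudge the loading profile:
bhf = 100.0                                  # current blank holder force, kN
bhf += bhf_correction(thinning=0.18, wrinkling=0.01)
print(f"adjusted blank holder force: {bhf:.1f} kN")
```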

Keywords: Finite Element Analysis (FEA), fuzzy control, hydromechanical deep drawing, optimal loading profiles, punch diameter

Procedia PDF Downloads 422