Search results for: receiver operator curve (ROC)

252 The Effect of Excel on Undergraduate Students’ Understanding of Statistics and the Normal Distribution

Authors: Masomeh Jamshid Nejad

Abstract:

Nowadays, statistical literacy is no longer merely a useful skill but an essential one, with broad applications across diverse fields, especially in operational decision areas such as business management, finance, and economics. As such, learning and a deep understanding of statistical concepts are essential in the context of business studies. One of the crucial topics in statistical theory and its application is the normal distribution, often called the bell-shaped curve. To interpret data and conduct hypothesis tests, comprehending the properties of the normal distribution (its mean and standard deviation) is essential for business students. This requires undergraduate students in the fields of economics and business management to visualize and work with data following a normal distribution. Since technology is now interconnected with education, it is important to teach statistics topics to undergraduate students in the context of Python, RStudio, and Microsoft Excel. This research endeavours to shed light on the effect of Excel-based instruction on learners’ knowledge of statistics, specifically the central concept of the normal distribution. Two groups of undergraduate students (from the Business Management program) were compared in this research study: one group underwent Excel-based instruction and the other relied only on traditional teaching methods. We analyzed experimental data and BBA participants’ responses to statistics-related questions focusing on the normal distribution, including its key attributes such as the mean and standard deviation. The results of our study indicate that exposing students to Excel-based learning supports learners in comprehending statistical concepts more effectively than the group taught with the traditional method. In addition, students who received Excel-based instruction showed greater ability in visualizing and interpreting normally distributed data.
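
Since the abstract mentions Python alongside Excel as a teaching context, the short sketch below (not taken from the study; all numbers are made up) shows the kind of exercise described: simulating normally distributed data, computing its mean and standard deviation, and visualizing the bell-shaped curve.

```python
# A minimal sketch (hypothetical data, not from the study) of the exercise described
# above: simulate normal data, compute mean and standard deviation, plot the bell curve.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
sample = rng.normal(loc=100.0, scale=15.0, size=500)  # hypothetical data set

mean, std = sample.mean(), sample.std(ddof=1)
print(f"sample mean = {mean:.2f}, sample standard deviation = {std:.2f}")

# Histogram of the data with the fitted normal density overlaid.
x = np.linspace(sample.min(), sample.max(), 200)
density = np.exp(-(x - mean) ** 2 / (2 * std ** 2)) / (std * np.sqrt(2 * np.pi))
plt.hist(sample, bins=30, density=True, alpha=0.6, label="simulated data")
plt.plot(x, density, label="fitted normal curve")
plt.legend()
plt.show()
```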

Keywords: statistics, Excel-based instruction, data visualization, pedagogy

Procedia PDF Downloads 31
251 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection

Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad

Abstract:

The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster responsive approach. A potential solution is electrical segmentation, which creates coherence zones within which electrical disturbances mainly remain. Indeed, by means of coherent electrical zones, it becomes possible to focus solely on the sub-zone, reducing the range of possibilities and aiding in managing uncertainty. It allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various applications, such as electrical control, minimizing electrical loss, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to the constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design robust zones under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, resilient segmentation can be achieved by conducting community detection on this multiplex graph. The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that utilizes a unified representation to compute a flattening of all layers. This unified situation can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to the segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-electrical perturbation and low variance of electrical perturbation. Our experiments show when robust electrical segmentation is beneficial and in which contexts.
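
As a rough illustration of the approach described above, the sketch below flattens a small, hypothetical multiplex graph by averaging edge weights across layers and then runs a generic modularity-based community detection on the flattened graph; the penalization scheme and the real grid data of the paper are not reproduced.

```python
# A simplified sketch (not the authors' model) of robust segmentation by flattening:
# each layer is one grid situation, layers are averaged edge-by-edge, and community
# detection is run on the flattened graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def flatten_layers(layers):
    """Aggregate edge weights over all layers sharing the same vertex set."""
    flat = nx.Graph()
    for g in layers:
        flat.add_nodes_from(g.nodes)
        for u, v, data in g.edges(data=True):
            w = data.get("weight", 1.0) / len(layers)
            if flat.has_edge(u, v):
                flat[u][v]["weight"] += w
            else:
                flat.add_edge(u, v, weight=w)
    return flat

# Hypothetical 3-layer multiplex graph of a tiny grid (buses 0..5).
layers = [nx.cycle_graph(6) for _ in range(3)]
layers[1].add_edge(0, 3)          # a line that only exists in one situation
flat = flatten_layers(layers)

zones = greedy_modularity_communities(flat, weight="weight")
print([sorted(z) for z in zones])  # candidate coherent electrical zones
```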

Keywords: community detection, electrical segmentation, multiplex graph, power grid

Procedia PDF Downloads 49
250 By Removing High-Performance Aerobic Scope Phenotypes, Capture Fisheries May Reduce the Resilience of Fished Populations to Thermal Variability and Compromise Their Persistence into the Anthropocene.

Authors: Lauren A. Bailey, Amber R. Childs, Nicola C. James, Murray I. Duncan, Alexander Winkler, Warren M. Potts

Abstract:

For the persistence of fished populations in the Anthropocene, it is critical for adaptive management to predict how these populations will respond to the coupled threats of exploitation and climate change. The resilience of fished populations will depend on their capacity for physiological plasticity and acclimatization in response to environmental shifts. However, there is evidence for the selection of physiological traits by capture fisheries. Hence, fish populations may have a limited scope for the rapid expansion of their tolerance ranges or physiological adaptation under fishing pressures. To determine the physiological vulnerability of fished populations in the Anthropocene, the metabolic performance was compared between a fished and spatially protected Chrysoblephus laticeps population in response to thermal variability. Individual aerobic scope phenotypes were quantified using intermittent flow respirometry by comparing changes in energy expenditure of each individual at ecologically relevant temperatures, mimicking variability experienced as a result of upwelling and downwelling events. The proportions of high- and low-performance individuals were compared between the fished and spatially protected populations. The fished population had limited aerobic scope phenotype diversity and fewer high-performance phenotypes, resulting in a significantly lower aerobic scope curve across low (10 °C) and high (24 °C) thermal treatments. The performance of fished populations may be compromised with predicted future increases in cold upwelling events. This requires the conservation of the physiologically fittest individuals in spatially protected areas, which can recruit into nearby fished areas, as a climate resilience tool.

Keywords: climate change, fish physiology, metabolic shifts, over-fishing, respirometry

Procedia PDF Downloads 103
249 Design, Simulation and Construction of 2.4GHz Microstrip Patch Antenna for Improved Wi-Fi Reception

Authors: Gabriel Ugalahi, Dominic S. Nyitamen

Abstract:

This project seeks to improve Wi-Fi reception by utilizing the properties of directional microstrip patch antennae. Where Wi-Fi signals are densely populated, several signal sources transmitting on the same frequency band, and indeed the same channel, constitute interference to each other. The time it takes for a request to be received, resolved and responded to between a user and the resource provider increases considerably. By deploying a directional patch antenna with a narrow bandwidth, the range of frequencies received is reduced, which should help limit the reception of signals from unwanted sources. A rectangular microstrip patch antenna (RMPA) is designed to operate in the Industrial, Scientific and Medical (ISM) band (2.4 GHz) commonly used in Wi-Fi network deployment. The dimensions of the antenna are calculated, and these dimensions are used to generate a model in Advanced Design System (ADS), a microwave simulator. Simulation results are then analyzed and the necessary optimization is carried out to further enhance the radiation quality so as to achieve the desired results. Impedance matching at 50 Ω is also obtained by using the inset feed method. The final antenna dimensions obtained after simulation and optimization are then used for practical construction on an FR-4 double-sided copper-clad printed circuit board (PCB) through a chemical etching process using ferric chloride (FeCl3). Simulation results show an RMPA operating at a centre frequency of 2.4 GHz with a bandwidth of 40 MHz. A voltage standing wave ratio (VSWR) of 1.0725 is recorded with a return loss of -29.112 dB at the input port, showing an appreciable impedance match to a 50 Ω source. In addition, a gain of 3.23 dBi and a directivity of 6.4 dBi are observed during far-field analysis. On deployment, signal reception from wireless devices is improved due to the antenna gain. A test source with a received signal strength indication (RSSI) of -80 dBm without the antenna installed on the receiver improved to an RSSI of -61 dBm. In addition, the directional radiation property of the RMPA prioritizes signals by pointing in the direction of a preferred signal source, thus reducing interference from undesired signal sources. This was observed during testing, as rotation of the antenna on its axis resulted in gain of signals in front of the patch and fading of signals away from the front.
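
The patch dimensions mentioned above are conventionally obtained from the transmission-line design equations; the sketch below applies those standard equations with assumed FR-4 substrate values (the relative permittivity and height are assumptions, not the paper's figures), before any ADS optimization.

```python
# A sketch of the standard transmission-line design equations for a rectangular
# microstrip patch, with hypothetical FR-4 substrate values (the paper's actual
# dimensions came from calculation plus ADS optimization).
import math

c = 3e8            # speed of light, m/s
f0 = 2.4e9         # design frequency, Hz
eps_r = 4.4        # assumed FR-4 relative permittivity
h = 1.6e-3         # assumed substrate height, m

# Patch width
W = c / (2 * f0) * math.sqrt(2 / (eps_r + 1))

# Effective permittivity and length extension due to fringing
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / ((eps_eff - 0.258) * (W / h + 0.8))

# Patch length
L = c / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL

print(f"W = {W * 1e3:.1f} mm, L = {L * 1e3:.1f} mm, eps_eff = {eps_eff:.2f}")
```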

Keywords: advanced design system (ADS), inset feed, received signal strength indicator (RSSI), rectangular microstrip patch antenna (RMPA), voltage standing wave ratio (VSWR), wireless fidelity (Wi-Fi)

Procedia PDF Downloads 191
248 Statistical and Artificial Neural Network Modeling of Suspended Sediment in the Mina River Watershed at Wadi El-Abtal Gauging Station (Northern Algeria)

Authors: Redhouane Ghernaout, Amira Fredj, Boualem Remini

Abstract:

Suspended sediment transport is a serious problem worldwide, but it is much more worrying in certain regions of the world, as is the case in the Maghreb and more particularly in Algeria. It continues to take on disturbing proportions in northern Algeria due to the variability of rainfall in time and space and the constant deterioration of vegetation. Its prediction is essential in order to identify its intensity and define the necessary actions for its reduction. The purpose of this study is to analyze the concentration data of suspended sediment measured at Wadi El-Abtal Hydrometric Station. It also aims to find and highlight regressive power relationships, which can explain the suspended solid flow in terms of the measured liquid flow. The study strives to find artificial neural network models linking the flow, month and precipitation parameters to the solid flow. The obtained results show that the power function of the solid transport rating curve and the artificial neural network models are appropriate methods for analysing and estimating suspended sediment transport in Wadi Mina at Wadi El-Abtal Hydrometric Station. They made it possible to identify, in a fairly conclusive manner, a neural network model with four input parameters: the liquid flow Q, the month, and the daily precipitation measured at the representative stations (Frenda 013002 and Ain El-Hadid 013004) of the watershed. The model thus obtained makes it possible to estimate the daily solid flows (interpolate and extrapolate) even beyond the period of observation of solid flows (1985/86 to 1999/00), given the availability of the average daily liquid flows and daily precipitation since 1953/1954.
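
The regressive power relationship mentioned above is commonly fitted in log-log space; the sketch below (with invented flow values) shows that step for a rating curve Qs = a·Q^b. The four-input neural network model is not reproduced here.

```python
# A sketch (made-up numbers) of the power-law sediment rating curve Qs = a * Q**b
# fitted by linear regression in log-log space, as in the regression approach above.
import numpy as np

Q  = np.array([0.5, 1.2, 3.4, 8.0, 15.0, 40.0])     # hypothetical liquid flow, m3/s
Qs = np.array([0.02, 0.09, 0.5, 2.1, 6.0, 30.0])    # hypothetical solid flow, kg/s

b, log_a = np.polyfit(np.log(Q), np.log(Qs), 1)      # slope = b, intercept = ln(a)
a = np.exp(log_a)
print(f"rating curve: Qs = {a:.3f} * Q^{b:.2f}")

# Estimate solid flow for a new measured liquid flow.
print(f"Qs(20 m3/s) ~ {a * 20.0 ** b:.1f} kg/s")
```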

Keywords: suspended sediment, concentration, regression, liquid flow, solid flow, artificial neural network, modeling, Mina, Algeria

Procedia PDF Downloads 73
247 Specific Language Impairment: Assessing Bilingual Children for Identifying Children with Specific Language Impairment (SLI)

Authors: Manish Madappa, Madhavi Gayathri Raman

Abstract:

The primary vehicle of human communication is language. A breakdown occurring in any aspect of communication may lead to frustration and isolation among learners and teachers. Over seven percent of the world’s population currently experience such limitations, and children who exhibit a deviant or deficient language acquisition curve, even when in a language-rich environment like their peers, may be at risk of having a language disorder or language impairment. The difficulty may be at the word level (vocabulary/word knowledge) and/or the sentence level (syntax/morphology). Children with SLI appear to be developing normally in all aspects except for their receptive and/or expressive language skills. Thus, it is of utmost importance to identify children with, or at risk of, SLI so that early intervention can foster language and social growth, provide the best possible learning environment with special support for language to be explicitly taught, and serve as a step in providing continuous and ongoing support. The present study looks at Kannada-English bilingual children and works towards identifying children at risk of “specific language impairment”. It was conducted as an exploratory study which systematically enquired into the narratives of young Kannada-English bilinguals and investigated the data for story structure in their narrative formulations. Oral narrative offers a rich source of data about a child’s language use in a relatively natural context; it ensures comparability, is more universal, and thus allows for the evaluation of narrative text competence. The data was collected from 10 class-three students at a primary school in Mysore, Karnataka and analyzed for the macrostructure component, reflecting the goal-directed behavior of a protagonist who is motivated to carry out some kind of action with the intention of attaining a goal. The results show that children exhibiting a deviation of -1.25 SD are at risk of SLI. Two learners were identified as being at risk of Specific Language Impairment, with scores more than 1.25 standard deviations below the mean.

Keywords: bilingual, oral narratives, SLI, macrostructure

Procedia PDF Downloads 261
246 Role of P53, Ki67 and Cyclin A Immunohistochemical Assay in Predicting Wilms’ Tumor Mortality

Authors: Ahmed Atwa, Ashraf Hafez, Mohamed Abdelhameed, Adel Nabeeh, Mohamed Dawaba, Tamer Helmy

Abstract:

Introduction and Objective: Tumour staging and grading do not usually reflect the future behavior of Wilms' tumor (WT) regarding mortality. Therefore, in this study, P53, Ki67 and cyclin A immunohistochemistry were used in a trial to predict WT cancer-specific survival (CSS). Methods: In this nonconcurrent cohort study, patients' archived data, including age at presentation, gender, history, clinical examination and radiological investigations, were retrieved then the patients were reviewed at the outpatient clinic of a tertiary care center by history-taking, clinical examination and radiological investigations to detect the oncological outcome. Cases that received preoperative chemotherapy or died due to causes other than WT were excluded. Formalin-fixed, paraffin-embedded specimens obtained from the previously preserved blocks at the pathology laboratory were taken on positively charged slides for IHC with p53, Ki67 and cyclin A. All specimens were examined by an experienced histopathologist devoted to the urological practice and blinded to the patient's clinical findings. P53 and cyclin A staining were scored as 0 (no nuclear staining),1 (<10% nuclear staining), 2 (10-50% nuclear staining) and 3 (>50% nuclear staining). Ki67 proliferation index (PI) was graded as low, borderline and high. Results: Of the 75 cases, 40 (53.3%) were males and 35 (46.7%) were females, and the median age was 36 months (2-216). With a mean follow-up of 78.6±31 months, cancer-specific mortality (CSM) occurred in 15 (20%) and 11 (14.7%) patients, respectively. Kaplan-Meier curve was used for survival analysis, and groups were compared using the Log-rank test. Multivariate logistic regression and Cox regression were not used because only one variable (cyclin A) had shown statistical significance (P=.02), whereas the other significant factor (residual tumor) had few cases. Conclusions: Cyclin A IHC should be considered as a marker for the prediction of WT CSS. Prospective studies with a larger sample size are needed.
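
A minimal sketch of the survival analysis described above (Kaplan-Meier curves per marker group and a log-rank comparison) is given below using the lifelines library; the follow-up times and groupings are hypothetical, not the study's records.

```python
# A minimal sketch (hypothetical data) of Kaplan-Meier curves per cyclin A score
# group and a log-rank comparison, using the lifelines library.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":  [12, 30, 45, 60, 72, 90, 18, 24, 36, 80],   # follow-up time
    "died":    [1, 0, 0, 1, 0, 0, 1, 1, 0, 0],             # cancer-specific death
    "cyclinA": ["high"] * 5 + ["low"] * 5,                  # dichotomised score
})

kmf = KaplanMeierFitter()
for group, sub in df.groupby("cyclinA"):
    kmf.fit(sub["months"], event_observed=sub["died"], label=group)
    print(group, "median survival:", kmf.median_survival_time_)

high, low = df[df.cyclinA == "high"], df[df.cyclinA == "low"]
result = logrank_test(high["months"], low["months"],
                      event_observed_A=high["died"], event_observed_B=low["died"])
print("log-rank p-value:", result.p_value)
```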

Keywords: wilms’ tumour, nephroblastoma, urology, survival

Procedia PDF Downloads 46
245 Urea and Starch Detection on a Paper-Based Microfluidic Device Enabled on a Smartphone

Authors: Shashank Kumar, Mansi Chandra, Ujjawal Singh, Parth Gupta, Rishi Ram, Arnab Sarkar

Abstract:

Milk is one of the basic and primary sources of food and energy, as we start consuming milk from birth. Hence, checking milk quality and purity and the concentration of its constituents becomes a necessary step. Considering the importance of the purity of milk for human health, the following study has been carried out to simultaneously detect and quantify different adulterants, such as urea and starch, in milk with the help of a paper-based microfluidic device integrated with a smartphone. The detection of the concentration of urea and starch is based on the principle of colorimetry, while the fluid flow in the device is based on the capillary action of porous media. The microfluidic channel proposed in the study is equipped with a specialized detection zone, and it employs a colorimetric indicator that undergoes a visible color change when the milk comes into contact and reacts with a set of reagents, which confirms the presence of different adulterants in the milk. In our proposed work, we have used iodine to detect the percentage of starch in the milk, whereas, in the case of urea, we have used p-DMAB. A direct correlation has been found between the color change intensity and the concentration of adulterants. A calibration curve was constructed relating color intensity to starch and urea concentration. The device has low-cost production and easy disposability, which make it highly suitable for widespread adoption, especially in resource-constrained settings. Moreover, a smartphone application has been developed to detect, capture, and analyze the change in color intensity due to the presence of adulterants in the milk. The low-cost nature of the paper-based sensor, coupled with its integration with smartphones, makes it an attractive solution for widespread use. Such devices are affordable, simple to use, and do not require specialized training, making them ideal tools for regulatory bodies and concerned consumers.
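
The calibration step described above can be illustrated as follows; the intensity readings and concentrations below are hypothetical and only show how a linear calibration curve is fitted and then inverted for an unknown sample.

```python
# A sketch (hypothetical readings) of the colorimetric calibration step: fit intensity
# vs. known urea concentration, then estimate an unknown sample's concentration.
import numpy as np

conc      = np.array([0.0, 0.5, 1.0, 1.5, 2.0])          # known urea, % w/v (standards)
intensity = np.array([12.0, 55.0, 101.0, 148.0, 196.0])  # mean color-channel intensity

slope, intercept = np.polyfit(conc, intensity, 1)         # linear calibration curve
r = np.corrcoef(conc, intensity)[0, 1]
print(f"I = {slope:.1f} * C + {intercept:.1f}  (r^2 = {r**2:.3f})")

unknown_intensity = 120.0                                  # from the smartphone image
estimated_conc = (unknown_intensity - intercept) / slope
print(f"estimated urea concentration ~ {estimated_conc:.2f} % w/v")
```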

Keywords: paper-based microfluidic device, milk adulteration, urea detection, starch detection, smartphone application

Procedia PDF Downloads 31
244 The Role Played by Awareness and Complexity through the Use of a Logistic Regression Analysis

Authors: Yari Vecchio, Margherita Masi, Jorgelina Di Pasquale

Abstract:

Adoption of Precision Agriculture (PA) takes place in a multidimensional and complex scenario. The process of adopting innovations is inherently complex and social, influenced by other producers, change agents, social norms and organizational pressure. Complexity depends on factors that interact and influence the decision to adopt. Farm and operator characteristics, as well as the organizational, informational and agro-ecological context, directly affect adoption. This influence has been studied to measure drivers and to clarify 'bottlenecks' of the adoption of agricultural innovation. The decision-making process involves a multistage procedure, in which the individual passes from first hearing about the technology to final adoption. Awareness is the initial stage and represents the moment in which an individual learns about the existence of the technology. The 'static' concept of adoption has been overcome: awareness is a precondition to adoption. Treating it as such avoids the erroneous evaluations that arise from carrying out the analysis on a population that is only partly aware of the technologies. In support of this, the present study puts forward an empirical analysis among Italian farmers, considering awareness as a prerequisite for adoption. The purpose of the present work is to analyze both the factors that affect the probability to adopt and the determinants that drive an aware individual not to adopt. Data were collected through a questionnaire submitted in November 2017. A preliminary descriptive analysis has shown that high levels of adoption are found among younger, better-educated farmers with high intensity of information, large farm size and high labor intensity, and whose perception of the complexity of the adoption process is lower. The use of a logit model makes it possible to appreciate the weight played by labor intensity and the complexity perceived by the potential adopter in the PA adoption process. All these findings suggest important policy implications: measures dedicated to promoting innovation will need to be more specific for each phase of this adoption process. Specifically, they should increase awareness of PA tools and foster dissemination of information to reduce the degree of perceived complexity of the adoption process. These implications are particularly important in Europe, where the innovation-oriented reform of the Common Agricultural Policy has been announced. In this context, measures supporting innovation should consider the relationship between the various organizational and structural dimensions of European agriculture and innovation approaches.
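
A minimal sketch of the logit model referred to above is shown below with synthetic survey data; the variable names (labor intensity, perceived complexity) follow the abstract, but the data and coefficients are invented.

```python
# A sketch (synthetic data) of a logit model of adoption regressed on labor intensity
# and perceived complexity, using statsmodels.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "labor_intensity":      rng.normal(0, 1, n),
    "perceived_complexity": rng.normal(0, 1, n),
})
# Hypothetical response: adoption more likely with high labor intensity,
# less likely when the adoption process is perceived as complex.
logit_p = 0.8 * df["labor_intensity"] - 1.2 * df["perceived_complexity"]
df["adopt"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["labor_intensity", "perceived_complexity"]])
model = sm.Logit(df["adopt"], X).fit(disp=False)
print(model.summary())
print("odds ratios:\n", np.exp(model.params))
```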

Keywords: adoption, awareness, complexity, precision agriculture

Procedia PDF Downloads 110
243 Shear Strength Parameters of an Unsaturated Lateritic Soil

Authors: Jeferson Brito Fernades, Breno Padovezi Rocha, Roger Augusto Rodrigues, Heraldo Luiz Giacheti

Abstract:

Geotechnical projects demand appropriate knowledge of soil characteristics and parameters. The determination of geotechnical soil parameters can be done by means of laboratory or in situ tests. In countries with tropical weather, like Brazil, unsaturated soils are very common. In these soils, soil suction has been recognized as an important stress state variable, which commands the geo-mechanical behavior. Triaxial and direct shear tests on saturated soil samples determine only the minimum soil shear strength, in other words, without the suction contribution. This paper briefly describes the triaxial test with controlled suction as well as discusses the influence of suction on the shear strength parameters of a lateritic tropical sandy soil from a Brazilian research site. At this site, a sample pit was excavated to retrieve disturbed and undisturbed soil blocks. The samples extracted from these blocks were tested in the laboratory to represent the soil from 1.5, 3.0 and 5.0 m depth. The stress curves and shear strength envelopes determined by triaxial tests varying suction and confining pressure are presented and discussed. The water retention characteristics of this soil complement the analysis. In situ CPT tests were also carried out at this site in different seasons of the year. In this case, the soil suction profile was determined by means of the soil water retention curve. This extra information allowed assessing how soil suction also affected the CPT data and the shear strength parameters estimated via correlation. The major conclusions of this paper are: the undisturbed soil samples contracted before shearing and the soil shear strength increased hyperbolically with suction; and it was possible to assess how soil suction also influenced CPT test data based on the soil water content profile as well as the water retention curve. This study contributed to a better understanding of the shear strength parameters and the soil variability of a typical unsaturated tropical soil.
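
One common way to express the suction contribution mentioned above is the extended Mohr-Coulomb criterion; the sketch below uses that form with assumed parameters. Note that the paper reports a hyperbolic (non-linear) gain with suction, so a constant suction friction angle is only a first approximation.

```python
# A sketch of the extended Mohr-Coulomb criterion commonly used for unsaturated soils,
# tau = c' + (sigma_n - u_a) * tan(phi') + (u_a - u_w) * tan(phi_b), with assumed
# parameters (not the paper's values); a constant phi_b is only a first approximation
# to the hyperbolic gain with suction reported above.
import math

c_prime = 10.0        # effective cohesion, kPa (assumed)
phi_prime = 30.0      # effective friction angle, degrees (assumed)
phi_b = 15.0          # suction friction angle, degrees (assumed)
net_stress = 100.0    # (sigma_n - u_a), kPa (assumed)

def shear_strength(suction_kpa):
    return (c_prime
            + net_stress * math.tan(math.radians(phi_prime))
            + suction_kpa * math.tan(math.radians(phi_b)))

for s in (0.0, 50.0, 100.0, 200.0):
    print(f"suction = {s:5.0f} kPa -> tau = {shear_strength(s):6.1f} kPa")
```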

Keywords: site characterization, triaxial test, CPT, suction, variability

Procedia PDF Downloads 383
242 Mathematical Modelling of Drying Kinetics of Cantaloupe in a Solar Assisted Dryer

Authors: Melike Sultan Karasu Asnaz, Ayse Ozdogan Dolcek

Abstract:

Crop drying, which aims to reduce the moisture content to a certain level, is a method used to extend shelf life and prevent spoilage. One of the oldest food preservation techniques is open sun or shade drying. Even though this technique is the most affordable of all drying methods, there are some drawbacks, such as contamination by insects, environmental pollution, windborne dust, and direct exposure to weather conditions such as wind, rain, and hail. However, solar dryers that provide a hygienic and controllable environment to preserve food and extend its shelf life have been developed and used to dry agricultural products. Thus, foods can be dried quickly without being affected by weather variables, and quality products can be obtained. This research is mainly devoted to investigating the modelling of the drying kinetics of cantaloupe in a forced convection solar dryer. Mathematical models for the drying process should be defined to simulate the drying behavior of the foodstuff, which will greatly contribute to the development of solar dryer designs. Thus, drying experiments were conducted and replicated five times, and various data such as temperature, relative humidity, solar irradiation, drying air speed, and weight were instantly monitored and recorded. The moisture content of sliced and pretreated cantaloupe was converted into moisture ratio and then fitted against drying time to construct drying curves. Then, 10 quasi-theoretical and empirical drying models were applied to find the best drying curve equation according to the Levenberg-Marquardt nonlinear optimization method. The best-fitted mathematical drying model was selected according to the highest coefficient of determination (R²) and the lowest mean square of the deviations (χ²) and root mean square error (RMSE) criteria. The best-fitted model was utilized to simulate thin-layer solar drying of cantaloupe, and the simulation results were compared with the experimental data for validation purposes.
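
The model-fitting procedure described above can be sketched for one common thin-layer model (the Page model) as follows; the moisture-ratio data are invented, and the Page model is only one of the ten candidates the study compared.

```python
# A sketch of the fitting step: fit the Page thin-layer model MR = exp(-k * t**n) to
# hypothetical moisture-ratio data with the Levenberg-Marquardt method, then score it
# by R^2 and RMSE. The study compared ten such candidate models.
import numpy as np
from scipy.optimize import curve_fit

t  = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8])          # drying time, h (hypothetical)
MR = np.array([1.0, 0.78, 0.60, 0.45, 0.34, 0.25, 0.19, 0.14, 0.10])

def page(t, k, n):
    return np.exp(-k * t ** n)

params, _ = curve_fit(page, t, MR, p0=(0.1, 1.0), method="lm")
pred = page(t, *params)

rmse = np.sqrt(np.mean((MR - pred) ** 2))
r2 = 1 - np.sum((MR - pred) ** 2) / np.sum((MR - MR.mean()) ** 2)
print(f"k = {params[0]:.3f}, n = {params[1]:.3f}, R^2 = {r2:.4f}, RMSE = {rmse:.4f}")
```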

Keywords: solar dryer, mathematical modelling, drying kinetics, cantaloupe drying

Procedia PDF Downloads 92
241 Comparison of Stereotactic Body Radiation Therapy Virtual Treatment Plans Obtained With Different Collimators in the Cyberknife System in Partial Breast Irradiation: A Retrospective Study

Authors: Öznur Saribaş, Si̇bel Kahraman Çeti̇ntaş

Abstract:

This study aims to compare target volume and critical organ doses using CyberKnife (CK) in accelerated partial breast irradiation (APBI) in patients with early-stage breast cancer. Three different virtual plans were made, for the Iris, fixed and multi-leaf collimator (MLC), for 5 patients who received radiotherapy in the CyberKnife system. CyberKnife virtual plans were created with 6 Gy per day, totaling 30 Gy. Dosimetric parameters for the three collimators were analyzed according to the restrictions in the NSABP-39/RTOG 0413 protocol. The plans ensured critical organs were protected and the GTV received 95% of the prescribed dose. The prescribed dose was defined by an isodose curve of a minimum of 80%. The homogeneity index (HI), conformity index (CI), treatment time (min), monitor units (MU) and doses received by critical organs were compared. As a result of the comparison of the plans, a significant difference was found for treatment time and MU, whereas no significant difference was found for HI and CI. The V30 and V15 values of the ipsilateral breast were lowest with the MLC. There was no significant difference between Dmax values for the lung and heart. However, the mean MU and treatment time were lowest with the MLC. As a result, the target volume received the desired dose with each collimator. The contralateral breast and contralateral lung doses were lowest with the Iris. The fixed collimator was found to be more suitable for cardiac doses, although these values did not show a significant difference. The use of fixed collimators may cause difficulties in clinical applications due to the long treatment time. The choice of collimator in breast SBRT applications with CyberKnife may therefore vary depending on tumor size, proximity to critical organs and tumor localization.

Keywords: APBI, CyberKnife, early stage breast cancer, radiotherapy

Procedia PDF Downloads 98
240 Spatial Setting in Translation: A Comparative Evaluation of Translations from Pre-Islamic Poetry

Authors: Raja Lahiani

Abstract:

This study is concerned with scrutinising translations into English and French of references to locations in the desert of pre-Islamic Arabia. These references are used in the Source Text (ST) within a poetic image. Reference is made to the names of three different mountains in Arabia, namely Qatan, Sitar, and Yadhbul. As these mountains are referred to in the context of the poet’s description of the density and expansion of the clouds, it is crucial to know that while Sitar and Yadhbul are close to each other, Qatan is far away from them. This distance was functional for the poet to describe the expansion of the clouds. It reflects the spacious place (the desert) he handled, and the fact that it was possible for him to physically see what he described. The purpose of this image is for the poet to communicate the vastness of the space he managed to see as he was in a moment of contemplation. Thus, knowledge of this characteristic of the setting is crucial for the receiver to understand the communicative function of the verse. A corpus of eighteen translations is gathered. These vary between verse and prose renderings. The methodology adopted in this research work is comparative. Comparison is conducted at both the synchronic and diachronic levels; every translation is compared to the ST and then to previous translations. The comparative work will prove in the end that the translators who target historical facts do not necessarily succeed in preserving the image of the ST. It also proves that the more recent the translation is, the deeper the translator’s awareness is of the link between imagery, setting, and point of view. Since the late eighteenth century and until nowadays, pre-Islamic poetry has been translated into Western languages. Translators differ as to motives, sources, priorities and intellectual backgrounds. A translator’s skopoi undoubtedly affect the way s/he handles aspects of the ST. When it comes to culture-specific aspects and details related to setting, the problem is even more complex. Setting is a very important factor that reveals a great deal of the culture of pre-Islamic Arabia, as this is remote in place, historical framework and literary tradition from its translators. History is present in pre-Islamic poetry, which justifies the important literature that has been written to extract information and data from it. These are embedded not only by signalling given facts, events, and meditations but also by means of references to specific locations and landmarks that used to exist at the time. Spatial setting is an integral part of a literary text as it places it within its historical context. The importance of the translator’s awareness of spatial anthropological data before engaging in the process of translation is tested. This is also crucial in measuring the effect of setting loss and setting gain in translation. The findings of this research ultimately evaluate the extent to which a comparative methodology is reliable in investigating the role of spatial setting awareness in translation.

Keywords: historical context, translation, comparative literature, spatial setting

Procedia PDF Downloads 221
239 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth

Authors: Hemant Upadhyay, Tarun Kumar Kundu

Abstract:

It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage and heat transfer between the various phases in the blast furnace hearth for a stable and efficient blast furnace operation. Abnormal drainage behavior may lead to a high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake, and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit at which hearth coke and the deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in BF operation; it should be kept at an optimal level to obtain the desired product quality and a stable BF performance. It is not possible to carry out any direct measurement of the above due to the hostile conditions in the hearth, with chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal / slag accumulation and temperature during the tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and heat transfer between metal and slag, metal and solids, slag and solids, as well as among the various zones of metal and slag themselves. For modeling purposes, the BF hearth is considered as a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed thermally saturated. A set of generic mass balance equations gives the amount of metal and slag intake in the hearth. A small drainage hole (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids from the tap hole is computed taking into account the amount of both phases accumulated, their levels in the hearth, the pressure from gases in the furnace and the erosion behavior of the tap hole itself. Heat transfer equations provide the exchange of heat between the various layers of liquid metal and slag, and the heat loss to the cooling system through the refractories. Based on all this information, a dynamic simulation is carried out which provides real-time information on liquid accumulation in the hearth before and during tapping and on the drainage rate and its variation, predicts the timing of critical events during tapping, and gives the expected tapping temperature of metal and slag at preset time intervals. The model is in use at JSPL India BF-II, and its output is regularly cross-checked with actual tapping data, with which it is in good agreement.

Keywords: blast furnace, hearth, deadman, hot metal

Procedia PDF Downloads 159
238 A Remote Sensing Approach to Estimate the Paleo-Discharge of the Lost Saraswati River of North-West India

Authors: Zafar Beg, Kumar Gaurav

Abstract:

The lost Saraswati is described as a large perennial river which was 'lost' in the desert towards the end of the Indus-Saraswati civilisation. It has been proposed earlier that the lost Saraswati flowed in the Sutlej-Yamuna interfluve, parallel to the present-day Indus River. It is believed that one of the earliest known ancient civilizations, the 'Indus-Saraswati civilization', prospered along the course of the Saraswati River. The demise of the Indus civilization is considered to be due to desiccation of the river. Today in the Sutlej-Yamuna interfluve, we observe an ephemeral river, known as the Ghaggar. It is believed that along with the Ghaggar River, two other Himalayan rivers, the Sutlej and the Yamuna, were tributaries of the lost Saraswati and made a significant contribution to its discharge. The presence of a large number of archaeological sites and the occurrence of thick fluvial sand bodies in the subsurface of the Sutlej-Yamuna interfluve have been used to suggest that the Saraswati River was a large perennial river. Further, the wide course of about 4-7 km recognized from satellite imagery of the Ghaggar-Hakra belt between Suratgarh and Anupgarh strengthens this hypothesis. Here we develop a methodology to estimate the paleo-discharge and paleo-width of the lost Saraswati River. In doing so, we rely on the hypothesis that the ancient Saraswati River used to carry the combined flow, or some part of it, of the Yamuna, Sutlej and Ghaggar catchments. We first established regime relationships between drainage area and channel width, and between catchment area and discharge, for 29 different rivers presently flowing on the Himalayan Foreland, from the Indus in the west to the Brahmaputra in the east. We found that the width and discharge of all the Himalayan rivers scale in a similar way when they are plotted against their corresponding catchment area. Using these regime curves, we calculate the width and discharge of the paleochannels originating from the Sutlej, Yamuna and Ghaggar rivers by measuring their corresponding catchment areas from satellite images. Finally, we add the discharge and width obtained from each of the individual catchments to estimate the paleo-width and paleo-discharge, respectively, of the Saraswati River. Our regime curves provide a first-order estimate of the paleo-discharge of the lost Saraswati.
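
A rough sketch of the regime-curve approach is given below: a power law between catchment area and discharge is fitted to modern rivers and then applied to the contributing paleo-catchments; all numbers are invented for illustration and are not the paper's data.

```python
# A sketch (invented numbers) of the regime-curve approach: fit a power law
# Q = a * A**b between catchment area and discharge for modern Himalayan rivers,
# then apply it to the contributing paleo-catchments and sum the estimates.
import numpy as np

# Hypothetical training data: catchment area (km^2) and mean discharge (m^3/s).
A = np.array([5e3, 2e4, 6e4, 1.2e5, 3e5, 9e5])
Q = np.array([80., 250., 600., 1100., 2300., 5800.])

b, log_a = np.polyfit(np.log(A), np.log(Q), 1)
a = np.exp(log_a)
print(f"regime curve: Q = {a:.3f} * A^{b:.2f}")

# Hypothetical catchment areas of the Sutlej, Yamuna and Ghaggar contributions.
paleo_catchments = {"Sutlej": 5.0e4, "Yamuna": 1.0e4, "Ghaggar": 4.0e3}
paleo_Q = sum(a * area ** b for area in paleo_catchments.values())
print(f"estimated paleo-discharge ~ {paleo_Q:.0f} m^3/s")
```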

Keywords: Indus civilization, palaeochannel, regime curve, Saraswati River

Procedia PDF Downloads 160
237 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks

Authors: Ahmed Abdullah Ahmed

Abstract:

The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although a great effort has been made by previous studies to come up with various methods, their performances, especially in terms of accuracy, fall short, and room for improvement is still wide open. The proposed technique employs optimal codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most of the classical codebook-based approaches, which segment the writing into graphemes, this study is based on fragmenting particular areas of the writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to divide the beginning and ending zones of the handwriting into small fragments. Similar fragments of beginning strokes are grouped together to create the beginning cluster, and similarly, the ending strokes are grouped to create the ending cluster. These two clusters lead to the development of two codebooks (beginning and ending) by choosing the center of every group of similar fragments. The writings under study are then represented by computing the probability of occurrence of the codebook patterns. The probability distribution is used to characterize each writer. Two writings are then compared by computing distances between their respective probability distributions. The evaluations were carried out on the ICFHR standard dataset of 206 writers using the beginning and ending codebooks separately. Finally, the ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.

Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments

Procedia PDF Downloads 487
236 Study of the Relationship between Civil Engineering Parameters and the Floating of a Buoy Model Made from Expanded Polystyrene-Mortar

Authors: Panarat Saengpanya

Abstract:

There were five objectives in this study: the study of housing types in a water environment, the physical and mechanical properties of the buoy material, the mechanical properties of the buoy models, the floating of the buoy models, and the relationship between the civil engineering parameters and the floating of the buoy. The buoy specimens were made from expanded polystyrene (EPS) covered by 5 mm of mortar of equal thickness on each side. Specimens are 0.05 m cubes tested at a displacement rate of 0.005 m/min. The existing test method used to assess the parameter relationships is ASTM C 109, which provides comparative results. The results found that the three types of housing in a water environment were stilt houses, boat houses, and floating houses. EPS is a lightweight material that has been used in engineering applications since at least the 1950s. Its density is about a hundredth of that of mortar, while the mortar strength was found to be 72 times that of EPS. One of the advantages of a composite is that two or more materials can be combined to take advantage of the good characteristics of each material. The strength of the buoy is influenced by the mortar, while the floating is influenced by the EPS. The results showed that the buoy specimens compressed under loading. The stress-strain curve showed a high secant modulus before reaching the peak value. Failure occurred within 10% strain, after which the strength reduced while the strain continued. It was observed that the failure strength reduced with increasing total volume of the specimens. For buoy specimens with the same area, an increase in failure strength is found when the height dimension is increased. The results showed the relationship between five parameters: the floating level, the bearing capacity, the volume, the height dimension and the unit weight. The study found that increases in buoy height lead to corresponding decreases in both modulus and compressive strength. The total volume and the unit weight were related to the bearing capacity of the buoy.

Keywords: floating house, buoy, floating structure, EPS

Procedia PDF Downloads 118
235 Experimental Investigation on the Effect of Cross Flow on Discharge Coefficient of an Orifice

Authors: Mathew Saxon A, Aneeh Rajan, Sajeev P

Abstract:

Many fluid flow applications employ different types of orifices to control the flow rate or to reduce the pressure. Discharge coefficients generally vary from 0.6 to 0.95, depending on the type of orifice. The tabulated values of discharge coefficients available for various types of orifices can be used in most common applications. The upstream and downstream flow conditions of an orifice are hardly considered while choosing the discharge coefficient of an orifice. But the literature shows that the discharge coefficient can be affected by the presence of cross flow. Cross flow is defined as the condition wherein a fluid is injected nearly perpendicularly into a flowing fluid. Most researchers have worked on water being injected into a cross flow of water. The present work deals with water-to-gas systems, in which water is injected in a normal direction into a flowing stream of gas. The test article used in the current work is called a thermal regulator, which is used in a liquid rocket engine to reduce the temperature of hot gas tapped from the gas generator by injecting water into the hot gas so that a cooler gas can be supplied to the turbine. In a thermal regulator, water is injected through an orifice in a normal direction into the hot gas stream. But the injection orifice had been calibrated under backpressure by maintaining a stagnant gas medium at the downstream side. The motivation for the present study arose from the observation of a lower Cd of the orifice in flight compared to the calibrated Cd. A systematic experimental investigation is carried out in this paper to study the effect of cross flow on the discharge coefficient of an orifice in a water-to-gas system. The study reveals that there is an appreciable reduction in the discharge coefficient with cross flow compared to that without cross flow. It is found that the discharge coefficient greatly depends on the ratio of the momentum of the injected water to the momentum of the gas cross flow. The effective discharge coefficients of different orifices were normalized using the discharge coefficient without cross flow, and it is observed that the normalized curves of the effective discharge coefficient of different orifices collapse into a single curve when plotted against the momentum ratio. Further, an equation is formulated using the test data to predict the effective discharge coefficient with cross flow from the calibrated Cd value without cross flow.
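
The two quantities the analysis above revolves around, the discharge coefficient from the standard orifice equation and the water-to-gas momentum ratio, can be computed as in the sketch below; all values are assumed for illustration.

```python
# A sketch (assumed values) of the discharge coefficient from the standard orifice
# equation, Cd = m_dot / (A * sqrt(2 * rho * dP)), and the injectant-to-crossflow
# momentum flux ratio.
import math

# Orifice and water-injection side (assumed values)
d = 2.0e-3                      # orifice diameter, m
A = math.pi * d ** 2 / 4        # orifice area, m^2
rho_w = 1000.0                  # water density, kg/m^3
dP = 5.0e5                      # pressure drop across the orifice, Pa
m_dot = 0.045                   # measured mass flow rate, kg/s

Cd = m_dot / (A * math.sqrt(2 * rho_w * dP))
print(f"discharge coefficient Cd = {Cd:.3f}")

# Momentum flux ratio of injected water to the gas cross flow (assumed gas state)
v_w = m_dot / (rho_w * A)        # water injection velocity, m/s
rho_g, v_g = 1.8, 120.0          # hot-gas density (kg/m^3) and velocity (m/s)
J = (rho_w * v_w ** 2) / (rho_g * v_g ** 2)
print(f"momentum flux ratio J = {J:.2f}")
```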

Keywords: cross flow, discharge coefficient, orifice, momentum ratio

Procedia PDF Downloads 113
234 Microscopic Analysis of Interfacial Transition Zone of Cementitious Composites Prepared by Various Mixing Procedures

Authors: Josef Fládr, Jiří Němeček, Veronika Koudelková, Petr Bílý

Abstract:

Mechanical parameters of cementitious composites differ quite significantly based on the composition of cement matrix. They are also influenced by mixing times and procedure. The research presented in this paper was aimed at identification of differences in microstructure of normal strength (NSC) and differently mixed high strength (HSC) cementitious composites. Scanning electron microscopy (SEM) investigation together with energy dispersive X-ray spectroscopy (EDX) phase analysis of NSC and HSC samples was conducted. Evaluation of interfacial transition zone (ITZ) between the aggregate and cement matrix was performed. Volume share, thickness, porosity and composition of ITZ were studied. In case of HSC, samples obtained by several different mixing procedures were compared in order to find the most suitable procedure. In case of NSC, ITZ was identified around 40-50% of aggregate grains and its thickness typically ranged between 10 and 40 µm. Higher porosity and lower share of clinker was observed in this area as a result of increased water-to-cement ratio (w/c) and the lack of fine particles improving the grading curve of the aggregate. Typical ITZ with lower content of Ca was observed only in one HSC sample, where it was developed around less than 15% of aggregate grains. The typical thickness of ITZ in this sample was similar to ITZ in NSC (between 5 and 40 µm). In the remaining four HSC samples, no ITZ was observed. In general, the share of ITZ in HSC samples was found to be significantly smaller than in NSC samples. As ITZ is the weakest part of the material, this result explains to large extent the improved mechanical properties of HSC compared to NSC. Based on the comparison of characteristics of ITZ in HSC samples prepared by different mixing procedures, the most suitable mixing procedure from the point of view of properties of ITZ was identified.

Keywords: energy dispersive X-ray spectroscopy, high strength concrete, interfacial transition zone, normal strength concrete, scanning electron microscopy

Procedia PDF Downloads 270
233 An Experimental Investigation of Rehabilitation and Strengthening of Reinforced Concrete T-Beams Under Static Monotonic Increasing Loading

Authors: Salem Alsanusi, Abdulla Alakad

Abstract:

An experimental investigation was carried out to study the flexural behaviour of reinforced concrete T-beams. The beams were loaded to pre-designated stress levels as percentages of the calculated collapse loads and then repaired by either a reinforced concrete jacket or externally bolted steel plates. Twelve full-scale beams were tested in this experimental program. Eight of the twelve beams were loaded at different loading levels, and tests were performed on these beams before and after repair with a reinforced concrete jacket (RCJ). The applied load levels were 60%, 77% and 100% of the calculated collapse loads. The remaining four beams were tested before and after repair with bolted steel plates (BSP). Of these four beams, two were loaded to 100% of the calculated failure load and the remaining two were not subjected to any load. The eight beams allocated to the RCJ test were repaired using a reinforced concrete jacket. The four beams allocated to the BSP test were all repaired using steel plates at the bottom. All the strengthened beams were gradually loaded until failure occurred. In each loading case, the behaviour of the beams before and after strengthening was studied through close inspection of crack propagation and by carrying out extensive measurements of deformation and strength. The stress-strain curve for the reinforcing steel and the failure strains measured in the tests were utilized in the calculation of the failure loads of the beams before and after strengthening. The calculated failure loads were close to the actual failure loads in the case of beams before repair, ranging from 85% to 90%, and also in the case of beams repaired by a reinforced concrete jacket, ranging from 70% to 85%; in the case of beams repaired by bolted steel plates, the results ranged from 50% to 85%. It was observed that both the jacketing and bolted steel plate methods could effectively restore the full flexural capacity of the damaged beams. However, the reinforced concrete jacket increased the failure load by about 67%, whereas the bolted steel plates recovered the failure load.

Keywords: rehabilitation, strengthening, reinforced concrete, beams deflection, bending stresses

Procedia PDF Downloads 286
232 Performance of Reinforced Concrete Beams under Different Fire Durations

Authors: Arifuzzaman Nayeem, Tafannum Torsha, Tanvir Manzur, Shaurav Alam

Abstract:

Performance evaluation of reinforced concrete (RC) beams subjected to accidental fire is significant for post-fire capacity measurement. The mechanical properties of any RC beam degrade due to heating, since the strength and modulus of concrete and reinforcement suffer considerable reduction under elevated temperatures. Moreover, fire-induced thermal dilation and shrinkage cause internal stresses within the concrete and eventually result in cracking, spalling, and loss of stiffness, which ultimately leads to lower service life. However, conducting a full-scale comprehensive experimental investigation for RC beams exposed to fire is difficult and cost-intensive, and a finite element (FE) based numerical study can provide an economical alternative for evaluating the post-fire capacity of RC beams. In this study, an attempt has been made to study the fire behavior of RC beams using the FE software package ABAQUS under different durations of fire. The concrete damaged plasticity model in ABAQUS was used to simulate the behavior of RC beams. The effect of temperature on the strength and modulus of concrete and steel was simulated following the relevant Eurocodes. Initially, the results of the FE models were validated using several experimental results from available scholarly articles. It was found that the response of the developed FE models matched quite well with the experimental outcomes for beams not exposed to heat. The FE analysis of beams subjected to fire showed some deviation from the experimental results, particularly in terms of stiffness degradation. However, the ultimate strength and deflection of the FE models were similar to the experimental values. The developed FE models thus exhibited good potential to predict the fire behavior of RC beams. Once validated, the FE models were then used to analyze several RC beams having different strengths (ranging between 20 MPa and 50 MPa) exposed to the standard fire curve (ASTM E119) for different durations. The post-fire performance of the RC beams was investigated in terms of load-deflection behavior, flexural strength, and deflection characteristics.

Keywords: fire durations, flexural strength, post fire capacity, reinforced concrete beam, standard fire

Procedia PDF Downloads 115
231 Development of a Novel Clinical Screening Tool, Using the BSGE Pain Questionnaire, Clinical Examination and Ultrasound to Predict the Severity of Endometriosis Prior to Laparoscopic Surgery

Authors: Marlin Mubarak

Abstract:

Background: Endometriosis is a complex, disabling disease mainly affecting young females in the reproductive period. The aim of this project is to generate a diagnostic model to predict the severity and stage of endometriosis prior to laparoscopic surgery. This will help to improve the pre-operative diagnostic accuracy of stage 3 and 4 endometriosis and, as a result, refer relevant women to a specialist centre for complex laparoscopic surgery. The model is based on the British Society of Gynaecological Endoscopy (BSGE) pain questionnaire, clinical examination and ultrasound scan. Design: This is a prospective, observational study in which women completed the BSGE pain questionnaire, a BSGE requirement. Also, as part of the routine preoperative assessment, each patient had a routine ultrasound scan, and when recto-vaginal or deep infiltrating endometriosis was suspected, an MRI was performed. Setting: Luton & Dunstable University Hospital. Patients: Symptomatic women (n = 56) scheduled for laparoscopy due to pelvic pain. The age ranged between 17 and 52 years (mean 33.8 years, SD 8.7 years). Interventions: None outside the recognised and established endometriosis centre protocol set up by the BSGE. Main Outcome Measure(s): Sensitivity and specificity of endometriosis diagnosis predicted by symptoms based on the BSGE pain questionnaire, clinical examination and imaging. Findings: The prevalence of diagnosed endometriosis was calculated to be 76.8%, and the prevalence of advanced stage was 55.4%. Deep infiltrating endometriosis in various locations was diagnosed in 32/56 women (57.1%), and some had DIE involving several locations. Logistic regression analysis was performed on 36 clinical variables to create a simple clinical prediction model. After creating the scoring system using variables with P < 0.05, the model was applied to the whole dataset. The sensitivity was 83.87% and the specificity 96%. The positive likelihood ratio was 20.97 and the negative likelihood ratio 0.17, indicating that the model has good predictive value and could be useful in predicting advanced stage endometriosis. Conclusions: This is a hypothesis-generating project with one operator, but proposed future research would provide validation of the model and establish its usefulness in the general setting. Predictive tools based on such a model could help organise the appropriate investigations in clinical practice, reduce the risks associated with surgery and improve outcomes. It could be of value for future research to standardise the assessment of women presenting with pelvic pain. The model needs further testing in a general setting to assess whether the initial results are reproducible.
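
The reported screening metrics follow directly from a 2x2 table, as the sketch below shows; the counts are chosen only to be consistent with the abstract's figures (31 advanced-stage and 25 non-advanced cases) and are not taken from the study data.

```python
# A sketch of how the reported screening metrics follow from a 2x2 table; the counts
# below are illustrative, chosen only to match the abstract's percentages.
tp, fn = 26, 5      # advanced-stage cases flagged / missed by the score
tn, fp = 24, 1      # non-advanced cases correctly / incorrectly flagged

sensitivity = tp / (tp + fn)          # 26/31 = 0.8387
specificity = tn / (tn + fp)          # 24/25 = 0.96
lr_pos = sensitivity / (1 - specificity)
lr_neg = (1 - sensitivity) / specificity

print(f"sensitivity = {sensitivity:.2%}, specificity = {specificity:.2%}")
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")   # ~20.97 and ~0.17
```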

Keywords: deep endometriosis, endometriosis, minimally invasive, MRI, ultrasound

Procedia PDF Downloads 320
230 Microstructure of Virgin and Aged Asphalts by Small-Angle X-Ray Scattering

Authors: Dong Tang, Yongli Zhao

Abstract:

The study of the microstructure of asphalt is of great importance for the analysis of its macroscopic properties. However, the peculiarities of the chemical composition of asphalt itself and the limitations of existing direct imaging techniques have caused researchers to face many obstacles in studying its microstructure. The advantage of small-angle X-ray scattering (SAXS) is that it allows quantitative determination of the internal structure of opaque materials and is suitable for analyzing the microstructure of materials. Therefore, the SAXS technique was used to study the evolution of microstructures at the nanoscale during asphalt aging, and the reasons for the change in scattering contrast during aging were also explained with the help of Fourier transform infrared spectroscopy (FTIR). The SAXS experimental results show that the SAXS curves of asphalt are similar to the scattering curves of scattering objects with two-level structures. The Porod curve for asphalt shows that there is no obvious interface between the micelles and the surrounding medium, and there is only a fluctuation of the electron density between the two. The Beaucage model fit to the SAXS patterns shows that the scattering exponent P of the asphaltene clusters, as well as the size of the micelles, gradually increases with the aging of the asphalt. Furthermore, aggregation exists between the micelles of asphalt and becomes more pronounced with increasing aging. During asphalt aging, the electron density difference between the micelles and the surrounding medium gradually increases, leading to an increase in the scattering contrast of the asphalt. Under long-term aging conditions, due to the gradual transition from maltenes to asphaltenes, the electron density difference between the micelles and the surrounding medium decreases, resulting in a decrease in the scattering contrast of asphalt SAXS. Finally, this paper correlates the macroscopic properties of asphalt with the microstructural parameters, and the results show that the high-temperature rutting resistance of asphalt is enhanced and the low-temperature cracking resistance decreases due to the aggregation of micelles and the generation of new micelles. These results are useful for understanding the relationship between changes in microstructure and changes in properties during asphalt aging and provide theoretical guidance for the regeneration of aged asphalt.

Keywords: asphalt, Beaucage model, microstructure, SAXS

Procedia PDF Downloads 55
229 Hypertension and Obesity: A Cross-National Comparison of BMI and Waist-Height Ratio

Authors: Adam M. Yates, Julie E. Byles

Abstract:

Hypertension has been identified as a prominent co-morbidity of obesity. To improve clinical intervention for hypertension, it is critical to identify metrics that most accurately reflect the risk of increased morbidity. Two of the most relevant and accurate measures of increased hypertension risk due to excess adipose tissue are Body Mass Index (BMI) and Waist-Height Ratio (WHtR). Previous research has examined these measures in cross-national and cross-ethnic studies, but has most often relied on secondary means such as meta-analysis to identify and evaluate the efficacy of individual body mass measures. In this study, we instead use cross-sectional analysis to assess the cross-ethnic discriminative power of BMI and WHtR in predicting hypertension risk. Using the WHO SAGE survey, which collected anthropometric and biometric data from respondents in six middle-income countries (China, Ghana, India, Mexico, Russia, South Africa), we implement logistic regression to examine the discriminative power of measured BMI and WHtR in a known population of hypertensive and non-hypertensive respondents. We control for gender and age to identify whether optimum cut-off points that are adequately sensitive as tests for hypertension risk differ between groups. We report ORs, RRs, and ROC curves for each of the six SAGE countries. Consistent with the existing literature, the results demonstrate that both WHtR and BMI are significant predictors of hypertension (p < .01). For these six countries, we find that cut-off points for WHtR may depend on gender, age and ethnicity. While an optimum omnibus cut-point for WHtR may be 0.55, the results also suggest that the gender and age relationship with WHtR may warrant the development of individual cut-offs to optimize health outcomes. Trends across multiple countries show that the optimum cut-point for WHtR increases with age while the area under the curve (AUROC) decreases for both men and women. Comparison between BMI and WHtR indicates that BMI may remain more robust than WHtR. Implications for public health policy are discussed.
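
A minimal sketch of this kind of analysis, assuming hypothetical SAGE-like column names and synthetic data rather than the authors' pipeline: fit a logistic model for each adiposity measure, compute the AUROC, and pick a cut-point via Youden's J:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical survey-like data frame; column names are assumptions for illustration.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "whtr": rng.normal(0.55, 0.08, 1000),
    "bmi": rng.normal(27, 5, 1000),
    "age": rng.integers(18, 80, 1000),
    "male": rng.integers(0, 2, 1000),
    "hypertensive": rng.integers(0, 2, 1000),
})

for measure in ("whtr", "bmi"):
    X = df[[measure, "age", "male"]]          # control for age and gender
    y = df["hypertensive"]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    scores = model.predict_proba(X)[:, 1]
    auc = roc_auc_score(y, scores)

    # Youden's J picks the threshold maximising sensitivity + specificity - 1.
    fpr, tpr, thresholds = roc_curve(y, scores)
    best = np.argmax(tpr - fpr)
    print(f"{measure}: AUROC = {auc:.3f}, optimal threshold = {thresholds[best]:.3f}")
```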

Keywords: hypertension, obesity, Waist-Height ratio, SAGE

Procedia PDF Downloads 451
228 The Optimal Order Policy for the Newsvendor Model under Worker Learning

Authors: Sunantha Teyarachakul

Abstract:

We consider the worker-learning Newsvendor Model, under the case of lost sales for unmet demand, with the research objective of proposing the cost-minimizing order policy and lot size, scheduled to arrive at the beginning of the selling period. In general, the Newsvendor Model is used to find the optimal order quantity for perishable items such as fashionable products or those with seasonal demand or short life cycles. Technically, it is used when the product demand is stochastic and limited to a single selling season, and when there is only a one-time opportunity for the vendor to purchase, possibly with long ordering lead times. Our work differs from the classical Newsvendor Model in that we incorporate the human factor (specifically worker learning) and its influence on the costs of processing units into the model. We describe this using the well-known Wright's Learning Curve. Most of the assumptions of the classical Newsvendor Model are maintained in our work, such as constant per-unit costs of leftover and shortage, zero initial inventory, and continuous time. Our problem is challenging in that the best order quantity of the classical model, which balances the over-stocking and under-stocking costs, is no longer optimal. Specifically, when the cost savings from worker learning are added to the expected total cost, the convexity of the cost function is unlikely to be maintained. This calls for a new way of determining the optimal order policy. In response to these challenges, we identified a number of characteristics of the expected cost function and its derivatives, which we then used to formulate the optimal ordering policy. Examples of such characteristics are: the optimal order quantity exists and is unique if the demand follows a Uniform Distribution; if the demand follows a Beta Distribution with certain properties of its parameters, the second derivative of the expected cost function has at most two roots; and there exists a specific lot size that satisfies the first-order condition. Our research results could be helpful for the analysis of supply chain coordination and of the periodic review system for similar problems.
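
The interplay between over-stocking costs, under-stocking costs and learning-driven processing savings can be illustrated with a brute-force numerical sketch using Wright's learning curve for per-unit processing time; all parameters are hypothetical and this is not the paper's analytical characterisation:

```python
import numpy as np

def expected_cost(Q, demand_samples, c_over, c_under, wage, T1, learning_rate):
    """Expected total cost of stocking Q units under lost sales.

    Worker learning follows Wright's curve: the n-th unit takes T1 * n**b hours,
    with b = log2(learning_rate). All parameter values are illustrative.
    """
    b = np.log2(learning_rate)
    processing = wage * T1 * np.sum(np.arange(1, Q + 1) ** b)
    leftover = c_over * np.maximum(Q - demand_samples, 0).mean()
    shortage = c_under * np.maximum(demand_samples - Q, 0).mean()
    return processing + leftover + shortage

rng = np.random.default_rng(0)
demand = rng.uniform(50, 150, 100_000)   # Uniform demand, one of the cases discussed

costs = [expected_cost(Q, demand, c_over=2.0, c_under=10.0,
                       wage=1.0, T1=0.5, learning_rate=0.8)
         for Q in range(1, 201)]
Q_star = int(np.argmin(costs)) + 1
print(f"Cost-minimising order quantity (numerical search): {Q_star}")
```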

Keywords: inventory management, Newsvendor model, order policy, worker learning

Procedia PDF Downloads 387
227 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments

Authors: Rahul Paul, Peter Mctaggart, Luke Skinner

Abstract:

Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances around their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine learning are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It proposes 3D catenary models for individual clusters, fitted to the captured LiDAR data points using a least-squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
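
A minimal sketch of a least-squares catenary fit for a single conductor cluster, assuming the cluster has already been projected onto the vertical plane of the span; the data and parameter values are synthetic, not the authors':

```python
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, a, x0, c):
    """Catenary sag curve: height as a function of horizontal distance along
    the span. 'a' is the catenary parameter, x0 the low point, c a vertical offset."""
    return c + a * (np.cosh((x - x0) / a) - 1.0)

# Synthetic conductor cluster: distance along the span (m) vs. height (m),
# standing in for LiDAR points projected onto the pole-to-pole axis.
rng = np.random.default_rng(1)
x = np.linspace(0, 120, 300)
z = catenary(x, a=800.0, x0=60.0, c=15.0) + rng.normal(0, 0.05, x.size)

popt, _ = curve_fit(catenary, x, z, p0=[500.0, x.mean(), z.min()])
a_fit, x0_fit, c_fit = popt
print(f"catenary parameter a = {a_fit:.1f} m, low point at x = {x0_fit:.1f} m")
```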

Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry

Procedia PDF Downloads 61
226 Blockchain Platform Configuration for MyData Operator in Digital and Connected Health

Authors: Minna Pikkarainen, Yueqiang Xu

Abstract:

The integration of digital technology with existing healthcare processes has been painfully slow; a huge gap exists between the strictly regulated field of official medical care and the quickly moving field of health and wellness technology. We claim that the promises of preventive healthcare can only be fulfilled when this gap is closed and health care and self-care become a seamless continuum: “correct information, in the correct hands, at the correct time, allowing individuals and professionals to make better decisions”, which is what we call the connected health approach. Currently, issues related to security, privacy, consumer consent and data sharing are hindering the implementation of this new healthcare paradigm. This could be solved by following the MyData principles, which state that individuals should have the right and practical means to manage their data and privacy. A MyData infrastructure enables decentralized management of personal data, improves interoperability, makes it easier for companies to comply with tightening data protection regulations, and allows individuals to change service providers without proprietary data lock-ins. This paper tackles today's unprecedented challenge of enabling and stimulating multiple healthcare data providers and stakeholders to participate more actively in the digital health ecosystem. First, the paper systematically proposes the MyData approach for a healthcare and preventive health data ecosystem. In this research, the work is targeted at health and wellness ecosystems. Each ecosystem consists of key actors: 1) the individual (citizen or professional controlling/using the services), i.e., the data subject; 2) services providing personal data (e.g., startups providing data collection apps or devices); 3) health and wellness services utilizing these data; and 4) services authorizing access to these data under the individual's explicit consent. Second, the research extends the existing four archetypes of orchestrator-driven healthcare data business models and proposes a fifth type of healthcare data model, the MyData Blockchain Platform. This new architecture is developed through the Action Design Research approach, a prominent research methodology in the information systems domain. The key novelty of the paper is to expand the health data value chain architecture and design from centralization and pseudo-decentralization to full decentralization, enabled by blockchain, hence the MyData blockchain platform. The study not only broadens the healthcare informatics literature but also contributes to the theoretical development of the digital healthcare and blockchain research domains with a systemic approach.
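
As a purely illustrative sketch of the kind of consent object such a platform might anchor on a ledger, linking the four actor roles described above (field names and hashing scheme are assumptions, not the MyData specification or the authors' design):

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class ConsentRecord:
    """Minimal illustrative consent entry: who allows which service to use
    which data, for what purpose and until when."""
    data_subject_id: str   # the individual controlling the data
    data_source: str       # service providing the personal data
    data_consumer: str     # health/wellness service using the data
    purpose: str
    expires: str

    def digest(self) -> str:
        # Hash that could be anchored on a blockchain ledger for auditability.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

consent = ConsentRecord(
    data_subject_id="citizen-123",
    data_source="sleep-tracker-app",
    data_consumer="preventive-care-service",
    purpose="sleep-quality coaching",
    expires=datetime(2025, 12, 31, tzinfo=timezone.utc).isoformat(),
)
print(consent.digest())
```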

Keywords: blockchain, health data, platform, action design

Procedia PDF Downloads 72
225 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in the planning, scheduling, and control of emergency response operations, especially the rescue and evacuation of people from the dangerous zone of marine accidents, has increased dramatically. Until the survivors (called ‘targets’) are found and saved, loss or damage may occur whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of people saved and/or minimize the search cost under restrictions on the number of people saved within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMRs), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments, the AMR's search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered ('overlooked') by the AMR's sensors even though the AMR is in a close neighborhood of the target, and (ii) a 'false-positive' detection error, also known as 'a false alarm', in which a clean place or area is wrongly classified by the AMR's sensors as a target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding locally optimal strategies. A specificity of the considered operations research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before the next location is selected. We provide a fast approximation algorithm for finding the AMR route that adopts a greedy search strategy in which, at each step, the on-board computer computes a current search-effectiveness value for each location in the zone and then searches the location with the highest effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
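
A minimal sketch of a greedy search-effectiveness strategy of this kind, with a Bayesian belief update after each unsuccessful look to account for false negatives; the grid, probabilities, and costs are illustrative assumptions, not the authors' model:

```python
import numpy as np

def greedy_search_route(prior, p_detect, false_neg, cost, budget):
    """Greedy route sketch: at each step visit the cell with the highest
    search-effectiveness value (expected detection gain per unit cost), then
    update the posterior for that cell after an unsuccessful look using
    Bayes' rule with the false-negative rate."""
    belief = prior.copy()
    route = []
    for _ in range(budget):
        effectiveness = belief * p_detect / cost   # expected gain per unit cost
        k = int(np.argmax(effectiveness))
        route.append(k)
        # Unsuccessful look at k: the target may still be there (false negative).
        miss_given_present = false_neg[k]
        denom = belief[k] * miss_given_present + (1 - belief[k])
        belief[k] = belief[k] * miss_given_present / denom
    return route

prior = np.array([0.10, 0.30, 0.25, 0.20, 0.15])   # prior target-location probabilities
p_detect = np.array([0.90, 0.80, 0.85, 0.70, 0.95])  # P(detect | target present)
false_neg = 1 - p_detect
cost = np.array([1.0, 1.5, 1.2, 1.0, 2.0])          # time/fuel cost per visit
print(greedy_search_route(prior, p_detect, false_neg, cost, budget=6))
```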

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 146
224 Targeting Mre11 Nuclease Overcomes Platinum Resistance and Induces Synthetic Lethality in Platinum Sensitive XRCC1 Deficient Epithelial Ovarian Cancers

Authors: Adel Alblihy, Reem Ali, Mashael Algethami, Ahmed Shoqafi, Michael S. Toss, Juliette Brownlie, Natalie J. Tatum, Ian Hickson, Paloma Ordonez Moran, Anna Grabowska, Jennie N. Jeyapalan, Nigel P. Mongan, Emad A. Rakha, Srinivasan Madhusudan

Abstract:

Platinum resistance is a clinical challenge in ovarian cancer. Platinating agents induce DNA damage, which activates Mre11 nuclease-directed DNA damage signalling and response (DDR). Upregulation of DDR may promote chemotherapy resistance. Here we have comprehensively evaluated Mre11 in epithelial ovarian cancers. In a clinical cohort that received platinum-based chemotherapy (n=331), Mre11 protein overexpression was associated with an aggressive phenotype and poor progression-free survival (PFS) (p=0.002). In The Cancer Genome Atlas (TCGA) ovarian cancer cohort (n=498), Mre11 gene amplification was observed in a subset of serous tumours (5%), which correlated highly with Mre11 mRNA levels (p<0.0001). Altered Mre11 levels were linked with genome-wide alterations that can influence platinum sensitivity. At the transcriptomic level (n=1259), Mre11 overexpression was associated with poor PFS (p=0.003). ROC analysis showed an area under the curve (AUC) of 0.642 for response to platinum-based chemotherapy. Pre-clinically, Mre11 depletion by gene knockdown or blockade by a small-molecule inhibitor (Mirin) reversed platinum resistance in ovarian cancer cells and in 3D spheroid models. Importantly, Mre11 inhibition was synthetically lethal in platinum-sensitive XRCC1-deficient ovarian cancer cells and 3D spheroids. Selective cytotoxicity was associated with DNA double-strand break (DSB) accumulation, S-phase cell cycle arrest and increased apoptosis. We conclude that pharmaceutical development of Mre11 inhibitors is a viable clinical strategy for platinum sensitization and synthetic lethality in ovarian cancer.

Keywords: MRE11, XRCC1, ovarian cancer, platinum sensitization, synthetic lethality

Procedia PDF Downloads 98
223 Predictive Value of the Modified Sick Neonatal Score (MSNS) on the Outcome of Critically Ill Neonates Treated in the Neonatal Intensive Care Unit (NICU)

Authors: Oktavian Prasetia Wardana, Martono Tri Utomo, Risa Etika, Kartika Darma Handayani, Dina Angelika, Wurry Ayuningtyas

Abstract:

Background: Critically ill neonates are newborn babies with high-risk factors that can potentially cause disability and/or death. Scoring systems for determining disease severity have been widely developed, including some designed for use in neonates. The SNAPPE-II method, which has been used as a mortality-predicting scoring system in several referral centers, was found to be slow in assessing the outcome of critically ill neonates in the Neonatal Intensive Care Unit (NICU). Objective: To analyze the predictive value of the MSNS on the outcome of critically ill neonates from the time of arrival up to 24 hours after admission to the NICU. Methods: A longitudinal observational analytic study based on medical record data was conducted from January to August 2022. For each subject, medical record data were collected on gestational age, mode of delivery, APGAR score at birth, resuscitation measures at birth, duration of resuscitation, post-resuscitation ventilation, physical examination at birth (including vital signs and any congenital abnormalities), results of routine laboratory examinations, and neonatal outcomes. Results: This study involved 105 critically ill neonates admitted to the NICU. Of these, 50 (47.6%) died and 55 (52.4%) survived. There were more males than females (61% vs. 39%). The mean gestational age was 33.8 ± 4.28 weeks, and the mean birth weight was 1820.31 ± 33.18 g. The mean MSNS score of neonates who died was lower than that of those who survived. A ROC curve with a cut-off MSNS score of <10.5 gave an AUC of 93.5% (95% CI: 88.3-98.6), with a sensitivity of 84% (95% CI: 80.5-94.9), specificity of 80% (95% CI: 88.3-98.6), Positive Predictive Value (PPV) of 79.2%, Negative Predictive Value (NPV) of 84.6%, and Risk Ratio (RR) of 5.14, with Hosmer & Lemeshow test p>0.05. Conclusion: The MSNS score has good predictive value and good calibration for the outcomes of critically ill neonates admitted to the NICU.
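
The reported predictive values are consistent with the stated sensitivity, specificity and mortality prevalence; a minimal sketch of that arithmetic (values from the abstract; standard Bayes calculation, not the authors' code):

```python
# Predictive values implied by the reported sensitivity, specificity and prevalence.
sens, spec = 0.84, 0.80
prevalence = 50 / 105   # 47.6% mortality in the cohort

ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")   # ~79.2% and ~84.6%, matching the abstract
```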

Keywords: critically ill neonate, outcome, MSNS, NICU, predictive value

Procedia PDF Downloads 43