Search results for: minimum spanning algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5554


244 The in Vitro and in Vivo Antifungal Activity of Terminalia Mantaly on Aspergillus Species Using Drosophila melanogaster (UAS-Diptericin) As a Model

Authors: Ponchang Apollos Wuyep, Alice Njolke Mafe, Longchi Satkat Zacheaus, Dogun Ojochogu, Dabot Ayuba Yakubu

Abstract:

Fungi cause huge losses when infections occur in both plants and animals. Synthetic antifungal drugs are mostly very expensive and highly cytotoxic. This study aimed to determine the in vitro and in vivo antifungal activities of the leaf and stem extracts of Terminalia mantaly (umbrella tree) H. Perrier on Aspergillus species, in a bid to identify potential sources of cheap starting materials for the synthesis of new drugs to address growing antimicrobial resistance. Powdered T. mantaly leaf and stem material was extracted by fractionation using the solvent partition coefficient method, with the solvents applied in graded order: n-hexane, ethyl acetate, methanol and distilled water. Phytochemical screening of each fraction revealed the presence of alkaloids, saponins, tannins, flavonoids, carbohydrates, steroids, anthraquinones, cardiac glycosides and terpenoids in varying degrees. The agar well diffusion technique was used to screen the fractions for antifungal activity against clinical isolates of Aspergillus species (Aspergillus flavus and Aspergillus fumigatus). The minimum inhibitory concentration (MIC50) of the most active extracts was determined by the broth dilution method. The fractions showed high antifungal activity, with zones of inhibition ranging from 6 to 26 mm and 8 to 30 mm (leaf fractions) and from 10 to 34 mm and 14 to 36 mm (stem fractions) against A. flavus and A. fumigatus, respectively. All fractions showed antifungal activity in a dose-response relationship at concentrations of 62.5 mg/ml, 125 mg/ml, 250 mg/ml and 500 mg/ml. The ethyl acetate, hexane and methanol fractions showed the best antifungal efficacy in vitro, being the most potent fractions with MICs ranging from 62.5 to 125 mg/ml. There was no statistically significant difference (P>0.05) in potency among the eight fractions from leaf and stem (hexane, ethyl acetate, methanol and distilled water), the antifungal fluconazole, which served as positive control, and 10% DMSO (dimethyl sulfoxide), which served as negative control. In the in vivo investigations, the ingestion technique was used for the infection studies. Female Drosophila melanogaster (UAS-Diptericin) flies were divided into normal flies (positive control), infected and untreated flies (negative control), and flies infected with A. fumigatus and placed on a normal diet, on diets containing the fractions (MSM and HSM, each at concentrations of 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100 mg/ml), or on a diet containing the control drug (fluconazole as positive control). The flies were observed for fifteen (15) days, and the total mortality of flies was recorded each day. The results of the study reveal that the flies were susceptible to infection with A. fumigatus and responded to treatment most effectively at 50, 60 and 70 mg/ml for both the methanol and hexane stem fractions. Therefore, the methanol and hexane stem fractions of T. mantaly contain therapeutically useful compounds, justifying the traditional use of this plant for the treatment of fungal infections.

Keywords: Terminalia mantaly, Aspergillus fumigatus, cytotoxic, Drosophila melanogaster, antifungal

Procedia PDF Downloads 83
243 Determination of the Presence of Antibiotic Resistance from Vibrio Species in Northern Italy

Authors: Tramuta Clara, Masotti Chiara, Pitti Monica, Adriano Daniela, Battistini Roberta, Serraca Laura, Decastelli Lucia

Abstract:

Oysters are filter-feeding organisms, and their raw consumption may increase health risks for consumers: it is often associated with outbreaks of gastroenteritis or enteric illnesses. Most of these foodborne diseases are caused by Vibrio strains, enteric pathogens that are also involved in the diffusion of genetic determinants of antibiotic resistance and their entrance into the food chain. The European Food Safety Authority (EFSA), in its 2017 European Union report on antimicrobial resistance, drew attention to the role of food as a possible carrier of antibiotic-resistant bacteria or antibiotic-resistance genes that pose health risks for humans. This study aimed to determine antibiotic resistance and antibiotic-resistance genes in Vibrio spp. isolated from Crassostrea gigas oysters collected in the Golfo della Spezia (Liguria, Italy). A total of 47 Vibrio spp. strains were isolated (ISO 21872-2:2017) during the summer of 2021 from Crassostrea gigas oysters. The strains were identified by MALDI-TOF (Bruker, Germany) mass spectrometry and tested for antibiotic susceptibility using a broth microdilution method (ISO 20776-1:2019) with Sensititre EUVSEC plates (Thermo Fisher Scientific) to obtain the Minimum Inhibitory Concentration (MIC). The strains were tested with PCR-based biomolecular methods, according to previous works, to determine the presence of 23 resistance genes of the main classes of antibiotics used in human and veterinary medicine: tet(B), tet(C), tet(D), tet(A), tet(E), tet(G), tet(K), tet(L), tet(M), tet(O), tet(S) (tetracycline resistance); blaCTX-M, blaTEM, blaOXA, blaSHV (β-lactam resistance); mcr-1 and mcr-2 (colistin resistance); qnrA, qnrB, and qnrS (quinolone resistance); sul1, sul2 and sul3 (sulfonamide resistance). Six different species were identified: V. alginolyticus 34% (n=16), V. harveyi 28% (n=13), V. fortis 15% (n=7), V. pelagius 8% (n=4), V. parahaemolyticus 11% (n=5) and V. chagasii 4% (n=2). The PCR assays showed the presence of the blaTEM gene in 40% of the strains (n=19). All the other genes were not detected, except for one V. alginolyticus positive for the qnrS gene. The broth microdilution results showed a high level of resistance to ciprofloxacin (62%; n=29), ampicillin (47%; n=22), and colistin (49%; n=23). Furthermore, 32% (n=15) of the strains can be considered multiresistant bacteria due to the simultaneous presence of resistance to three different antibiotic classes. Susceptibility towards meropenem, azithromycin, gentamicin, ceftazidime, cefotaxime, chloramphenicol, tetracycline and sulphamethoxazole reached 100%. The Vibrio species identified in this study are widespread in marine environments and can cause gastrointestinal infections after the ingestion of raw fish products and bivalve molluscs. The level of resistance to antibiotics such as ampicillin, ciprofloxacin and colistin can be connected to anthropic factors (industrial, agricultural and domestic wastes) that promote the spread of resistance to these antibiotics. A strong correlation can also be observed between phenotypic (resistant MIC) and genotypic (positive blaTEM gene) resistance to ampicillin in the same strains, probably due to the transfer of genetic material between bacterial strains. Consumption of raw bivalve molluscs can represent a risk for consumers' health due to the potential presence of foodborne pathogens that are highly resistant to different antibiotics and a source of transferable antibiotic-resistance genes.

Keywords: vibrio species, blaTEM genes, antimicrobial resistance, PCR

Procedia PDF Downloads 75
242 Hidro-IA: An Artificial Intelligent Tool Applied to Optimize the Operation Planning of Hydrothermal Systems with Historical Streamflow

Authors: Thiago Ribeiro de Alencar, Jacyro Gramulia Junior, Patricia Teixeira Leite

Abstract:

The area of the electricity sector that deals with meeting energy needs through hydroelectric plants in a coordinated manner is called Operation Planning of Hydrothermal Power Systems (OPHPS). Its purpose is to find an operating policy that provides electrical power to the system over a given period, with reliability and minimal cost. It is therefore necessary to determine an optimal generation schedule for each hydroelectric plant in each interval, so that the system meets demand reliably, avoiding rationing in years of severe drought, and minimizes the expected operating cost over the planning horizon, defining an appropriate strategy for thermal complementation. Several optimization algorithms specifically applied to this problem have been developed and are in use. Although they provide solutions to various problems encountered, these algorithms have some weaknesses, such as difficulties in convergence or simplification of the original formulation of the problem, owing to the complexity of the objective function. An alternative to these challenges is the development of more sophisticated and reliable simulation-optimization techniques that can assist operation planning. Thus, this paper presents the development of a computational tool, namely Hydro-IA, for solving the identified optimization problem while providing the user with easy handling. The intelligent optimization technique adopted is the Genetic Algorithm (GA), and the programming language is Java. First, the chromosomes were modeled; then the evaluation function of the problem and the operators involved were implemented; and finally the graphical interfaces for user access were developed. The results obtained with the Genetic Algorithm were compared with the nonlinear programming (NLP) optimization technique. Tests were conducted with seven hydraulically interconnected hydroelectric plants, using historical streamflow from 1953 to 1955. The comparison between the GA and NLP techniques shows that the operating cost obtained with the GA becomes increasingly smaller than that of the NLP as the number of interconnected hydroelectric plants increases. The program achieved coherent performance in problem resolution without the need to simplify the calculations, together with ease of manipulating the simulation parameters and visualizing the output results.
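As an illustration of the kind of genetic-algorithm search described above, the following is a minimal, self-contained sketch (not the Hydro-IA tool itself): a chromosome encodes a hydro generation schedule, and fitness is the assumed cost of the thermal complementation needed to meet demand. Plant capacities, demand and the cost curve are hypothetical placeholders.

```python
# Minimal sketch (not the Hydro-IA tool): a genetic algorithm that searches for a
# hydro generation schedule minimizing the cost of thermal complementation.
# Plant capacities, demand and the quadratic thermal cost are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_plants, n_stages = 7, 12                 # e.g. 7 hydro plants, 12 monthly stages
capacity = rng.uniform(50, 150, n_plants)  # MW, hypothetical
demand = rng.uniform(400, 700, n_stages)   # MW per stage, hypothetical

def thermal_cost(schedule):
    """schedule: (n_plants, n_stages) hydro output in MW."""
    hydro = schedule.sum(axis=0)
    thermal = np.clip(demand - hydro, 0.0, None)       # residual met by thermal units
    return np.sum(0.02 * thermal**2 + 5.0 * thermal)   # assumed convex fuel cost

def random_individual():
    return rng.uniform(0, 1, (n_plants, n_stages)) * capacity[:, None]

def evolve(pop_size=60, generations=200, p_mut=0.1):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        fitness = np.array([thermal_cost(ind) for ind in pop])
        order = np.argsort(fitness)
        parents = [pop[i] for i in order[:pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            mask = rng.random((n_plants, n_stages)) < 0.5   # uniform crossover
            child = np.where(mask, parents[a], parents[b])
            if rng.random() < p_mut:                        # Gaussian mutation
                child = np.clip(child + rng.normal(0, 5, child.shape),
                                0, capacity[:, None])
            children.append(child)
        pop = parents + children
    best = min(pop, key=thermal_cost)
    return best, thermal_cost(best)

best_schedule, best_cost = evolve()
print(f"best thermal complementation cost: {best_cost:,.0f}")
```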

Keywords: energy, optimization, hydrothermal power systems, artificial intelligence and genetic algorithms

Procedia PDF Downloads 419
241 A Regulator's Assessment of Consumer Risk When Evaluating a User Test for an Umbrella Brand Name in an Over-the-Counter Medicine

Authors: A. Bhatt, C. Bassi, H. Farragher, J. Musk

Abstract:

Background: All medicines placed on the EU market are legally required to be accompanied by labelling and a package leaflet, which provide comprehensive information enabling their safe and appropriate use. Mock-ups, with the results of assessments using a target patient group, must be submitted with a marketing authorisation application. Consumers need confidence in non-prescription, OTC medicines in order to manage their minor ailments, and umbrella brands assist purchasing decisions by enabling easy identification within a particular therapeutic area. A number of regulatory agencies have risk management tools and guidelines to assist in developing umbrella brands for OTC medicines; however, assessment and decision making are subjective and inconsistent. This study presents an evaluation in the UK following the US FDA warning concerning methaemoglobinaemia after 21 reported cases (11 in children under 2 years) caused by OTC oral analgesics containing benzocaine. Methods: A standard face-to-face, structured, task-based user interview testing methodology, comprising 25 interviews using a standard questionnaire and rating scale with consumers aged 15-91 years, was conducted independently between June and October 2015 in their homes. The test evaluated whether individuals could discriminate between the labelling, safety information and warnings on the cartons and PILs of three different OTC medicine packs with the same umbrella name. Each pack was presented with a different information hierarchy, using different coloured cartons and containing three different active ingredients: benzocaine (an oromucosal spray) and two lozenges containing 2,4-dichlorobenzyl alcohol with amylmetacresol, and hexylresorcinol, respectively (for the symptomatic relief of sore throat pain). The test was designed to determine whether the warnings on the carton and leaflet were prominent and accessible enough to alert users that one product contained benzocaine, to convey the risk of methaemoglobinaemia, and to refer them to the leaflet for the signs of the condition and what to do should it occur. Results: Two consumers did not locate the warnings on the side of the pack but eventually found them on the back, and two suggestions were made to further improve the accessibility of the methaemoglobinaemia warning. Using a gold pack design for the oromucosal spray, all consumers could differentiate between the three drugs, the minimum age particulars, the pharmaceutical form and the risk factor methaemoglobinaemia. The warnings for benzocaine were deemed clear or very clear, and the appearance of the three packs was either very well or quite well differentiated. The PIL test passed on all criteria. All consumers could use the product correctly and identify the risk factors, confirming that the critical information necessary for safe use was legible and easily accessible so that confusion and errors were minimised. Conclusion: Patients with known methaemoglobinaemia are likely to be vigilant in checking for benzocaine-containing products, despite similar umbrella brand names across a range of active ingredients. Despite these findings, the package design and spray format were not deemed sufficient to mitigate the potential safety risks associated with differences in target populations and contraindications when submitted to the Regulatory Agency. Although risk management tools are increasingly being used by agencies to provide objective assurance of package safety, further transparency, reduced subjectivity and proportionate consideration of risk should be demonstrated.

Keywords: labelling, OTC, risk, user testing

Procedia PDF Downloads 308
240 Efficient Reuse of Exome Sequencing Data for Copy Number Variation Callings

Authors: Chen Wang, Jared Evans, Yan Asmann

Abstract:

With the rapid evolution of next-generation sequencing techniques, whole-exome or exome-panel data have become a cost-effective way to detect small exonic mutations, but there has been a growing desire to accurately detect copy number variations (CNVs) as well. In order to address these research and clinical needs, we developed a sequencing coverage pattern-based method for copy number detection, data integrity checks, CNV calling, and visualization reporting. The developed methodology includes complete automation to increase usability, genome content coverage bias correction, CNV segmentation, data quality reports, and publication-quality images. Poor-quality outlier samples were identified and removed automatically. Multiple experimental batches were routinely detected and further reduced to a clean subset of samples before analysis. Algorithm improvements were also made to improve both somatic CNV detection and germline CNV detection in trio families. Additionally, a set of utilities was included to help users produce CNV plots for focused genes of interest. We demonstrate the somatic CNV enhancements by accurately detecting CNVs in exome-wide data from The Cancer Genome Atlas cancer samples and in a lymphoma case study with paired tumor and normal samples. We also show the efficient reuse of existing exome sequencing data for improved germline CNV calling in a trio family from the phase III study of the 1000 Genomes Project, detecting CNVs with various modes of inheritance. The performance of the developed method is evaluated by comparing CNV calling results with results from other orthogonal copy number platforms. Through our case studies, reuse of exome sequencing data for calling CNVs offers several notable benefits, including better quality control for exome sequencing data, improved joint analysis with single nucleotide variant calls, and novel genomic discovery from under-utilized existing whole-exome and custom exome panel data.
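To illustrate the general idea of coverage-pattern-based CNV detection described above, here is a minimal sketch (not the authors' pipeline): per-exon read depth is normalized against a panel of normal samples, converted to log2 ratios, and runs of consecutive exons beyond a threshold are flagged as candidate CNVs. All data are synthetic.

```python
# Minimal sketch of coverage-based CNV detection on exome data (not the authors' pipeline):
# normalize per-exon read depth against a panel of normal samples, take log2 ratios,
# and flag runs of consecutive exons beyond a threshold as candidate CNVs.
import numpy as np

def log2_ratios(sample_depth, panel_depths):
    """sample_depth: (n_exons,), panel_depths: (n_samples, n_exons) raw coverage."""
    # library-size normalization, then comparison to the panel median per exon
    sample = sample_depth / sample_depth.sum()
    panel = panel_depths / panel_depths.sum(axis=1, keepdims=True)
    reference = np.median(panel, axis=0) + 1e-12
    return np.log2((sample + 1e-12) / reference)

def call_cnvs(ratios, gain=0.4, loss=-0.5, min_exons=3):
    """Return (start, end, mean_log2) for runs of >= min_exons exons past a threshold."""
    state = np.where(ratios >= gain, 1, np.where(ratios <= loss, -1, 0))
    calls, start, current = [], None, 0
    for i, s in enumerate(np.append(state, 0)):   # trailing 0 closes any open run
        if start is None and s != 0:
            start, current = i, s
        elif start is not None and s != current:
            if i - start >= min_exons:
                calls.append((start, i - 1, float(ratios[start:i].mean())))
            start = None if s == 0 else i
            current = s
    return calls

# toy example: a simulated single-copy loss over exons 40-49
rng = np.random.default_rng(1)
panel = rng.poisson(100, size=(20, 200)).astype(float)
sample = rng.poisson(100, size=200).astype(float)
sample[40:50] *= 0.5
print(call_cnvs(log2_ratios(sample, panel)))
```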

Keywords: bioinformatics, computational genetics, copy number variations, data reuse, exome sequencing, next generation sequencing

Procedia PDF Downloads 255
239 A Modified Estimating Equations in Derivation of the Causal Effect on the Survival Time with Time-Varying Covariates

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

A systematic observation from a defined time of origin up to a certain failure or censoring is known as survival data. Survival analysis is a major area of interest in biostatistics and biomedical research. At the heart of most scientific and medical research inquiries lies a question of causality. Thus, the main concern of this study is to investigate the causal effect of treatment on survival time conditional on possibly time-varying covariates. The theory of causality often differs from the simple association between the response variable and predictors. Causal estimation is a scientific concept for comparing the pragmatic effect of two or more experimental arms. To evaluate the average treatment effect on the survival outcome, the estimating equation was adjusted for time-varying covariates under semiparametric transformation models. The proposed model yields consistent estimators for the unknown parameters and the unspecified monotone transformation functions. In this article, the proposed method estimates an unbiased average causal effect of treatment on the survival time of interest. The modified estimating equations of semiparametric transformation models have the advantage of including time-varying effects in the model. Finally, the finite-sample performance characteristics of the estimators are demonstrated through simulation and the Stanford heart transplant data. To this end, the average effect of a treatment on survival time was estimated after adjusting for biases arising from the high correlation between left-truncation and possibly time-varying covariates. The bias in covariates was corrected by estimating the density function of the left-truncation. Besides, to relax the independence assumption between failure time and truncation time, the model incorporates the left-truncation variable as a covariate. Moreover, the expectation-maximization (EM) algorithm iteratively obtains the unknown parameters and the unspecified monotone transformation functions. In summary, the ratio of the cumulative hazard functions between the treated and untreated experimental groups provides a measure of the average causal effect for the entire population.
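As a simple illustration of the summary quantity mentioned at the end of the abstract (the ratio of cumulative hazard functions between treated and untreated groups), the sketch below computes Nelson-Aalen-type cumulative hazards on hypothetical right-censored data. It is only an illustration of that quantity, not the proposed modified estimating equations or the EM procedure.

```python
# Minimal sketch: Nelson-Aalen cumulative hazard per arm and their ratio as a crude
# summary of the treatment effect on survival time. Data below are hypothetical.
import numpy as np

def nelson_aalen(times, events):
    """times: follow-up times; events: 1 = failure, 0 = censored.
    Returns (event times, cumulative hazard at those times)."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    n = len(times)
    uniq, H, h = [], [], 0.0
    for i, (t, d) in enumerate(zip(times, events)):
        at_risk = n - i                     # subjects still at risk just before t
        if d == 1:
            h += 1.0 / at_risk
            uniq.append(t)
            H.append(h)
    return np.array(uniq), np.array(H)

# hypothetical right-censored data for treated and control arms (censored at t = 15)
rng = np.random.default_rng(2)
t_trt = rng.exponential(12, 100); e_trt = (t_trt < 15).astype(int); t_trt = np.minimum(t_trt, 15)
t_ctl = rng.exponential(8, 100);  e_ctl = (t_ctl < 15).astype(int); t_ctl = np.minimum(t_ctl, 15)

grid = np.linspace(1, 14, 14)
u1, H1 = nelson_aalen(t_trt, e_trt)
u0, H0 = nelson_aalen(t_ctl, e_ctl)
ratio = np.interp(grid, u1, H1) / np.interp(grid, u0, H0)
print("cumulative hazard ratio (treated/control) over time:", np.round(ratio, 2))
```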

Keywords: a modified estimation equation, causal effect, semiparametric transformation models, survival analysis, time-varying covariate

Procedia PDF Downloads 175
238 Energy Audit and Renovation Scenarios for a Historical Building in Rome: A Pilot Case Towards the Zero Emission Building Goal

Authors: Domenico Palladino, Nicolandrea Calabrese, Francesca Caffari, Giulia Centi, Francesca Margiotta, Giovanni Murano, Laura Ronchetti, Paolo Signoretti, Lisa Volpe, Silvia Di Turi

Abstract:

The aim to achieve a fully decarbonized building stock by 2050 stands as one of the most challenging issues within the spectrum of energy and climate objectives. Numerous strategies are imperative, particularly emphasizing the reduction and optimization of energy demand. Ensuring the high energy performance of buildings emerges as a top priority, with measures aimed at cutting energy consumption. Concurrently, it is imperative to decrease greenhouse gas emissions by using renewable energy sources for on-site energy production, thereby striving for an energy balance leading towards zero-emission buildings. Italy's predominant building stock comprises ancient buildings, many of which hold historical significance and are subject to stringent preservation and conservation regulations. Attaining high levels of energy efficiency and reducing CO2 emissions in such buildings poses a considerable challenge, given their unique characteristics and the imperative to adhere to principles of conservation and restoration. Additionally, conducting a meticulous analysis of these buildings' current state is crucial for accurately quantifying their energy performance and predicting the potential impacts of proposed renovation strategies on energy consumption reduction. Within this framework, the paper presents a pilot case in Rome, outlining a methodological approach for the renovation of historic buildings towards achieving the Zero Emission Building (ZEB) objective. The building has a mixed function with offices, a conference hall, and an exposition area. The building envelope is made of historical and precious materials used as cladding, which must be preserved. A thorough understanding of the building's current condition serves as a prerequisite for analyzing its energy performance. This involves conducting comprehensive archival research, undertaking on-site diagnostic examinations to characterize the building envelope and its systems, and evaluating actual energy usage data derived from energy bills. An energy audit and energy simulations are the first step of the analysis, assessing the energy performance of the building in its current state. Subsequently, different renovation scenarios are proposed, encompassing advanced building techniques, to pinpoint the key actions necessary for improving mechanical systems, automation and control systems, and the integration of renewable energy production. These scenarios entail different levels of renovation, ranging from meeting minimum energy performance goals to achieving the highest possible energy efficiency level. The proposed interventions are meticulously analyzed and compared to ascertain the feasibility of attaining the Zero Emission Building objective. In conclusion, the paper provides valuable insights that can be extrapolated to inform a broader approach towards energy-efficient refurbishment of historical buildings that may have limited potential for renovation in their building envelopes. By adopting a methodical and nuanced approach, it is possible to reconcile the imperative of preserving cultural heritage with the pressing need to transition towards a sustainable, low-carbon future.

Keywords: energy conservation and transition, energy efficiency in historical buildings, buildings energy performance, energy retrofitting, zero emission buildings, energy simulation

Procedia PDF Downloads 65
237 Classification of Emotions in Emergency Call Center Conversations

Authors: Magdalena Igras, Joanna Grzybowska, Mariusz Ziółko

Abstract:

The study of emotions expressed in emergency phone calls is presented, covering both a statistical analysis of emotion configurations and an attempt to automatically classify emotions. An emergency call is a situation usually accompanied by intense, authentic emotions. They influence (and may inhibit) the communication between caller and responder. In order to support responders in their responsible and psychologically exhausting work, we studied when and in which combinations emotions appeared in calls. A corpus of 45 hours of conversations (about 3300 calls) from an emergency call center was collected. Each recording was manually tagged with labels of emotion valence (positive, negative or neutral), type (sadness, tiredness, anxiety, surprise, stress, anger, fury, calm, relief, compassion, satisfaction, amusement, joy) and arousal (weak, typical, varying, high) on the basis of the perceptual judgment of two annotators. As we concluded, basic emotions tend to appear in specific configurations depending on the overall situational context and the attitude of the speaker. After performing statistical analysis, we distinguished four main types of emotional behavior of callers: worry/helplessness (sadness, tiredness, compassion), alarm (anxiety, intense stress), mistake or neutral request for information (calm, surprise, sometimes with amusement) and pretension/insisting (anger, fury). The frequencies of these profiles were, respectively, 51%, 21%, 18% and 8% of the recordings. A model presenting the complex emotional profiles on a two-dimensional (tension-insecurity) plane was introduced. In the acoustic analysis stage, a set of prosodic parameters, as well as Mel-Frequency Cepstral Coefficients (MFCC), were used. Using these parameters, complex emotional states were modeled with machine learning techniques including Gaussian mixture models, decision trees and discriminant analysis. Results of classification with several methods will be presented and compared with the state-of-the-art results obtained for classification of basic emotions. Future work will include optimization of the algorithm to perform in real time in order to track changes of emotions during a conversation.
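As an illustration of the classification stage described above, the following minimal sketch (assuming librosa and scikit-learn are available) trains one Gaussian mixture model per emotion class on MFCC frames and classifies a call segment by the class with the highest log-likelihood. File paths and the label set are placeholders, not the corpus used in the study.

```python
# Minimal sketch (assumes librosa and scikit-learn): one Gaussian mixture model per
# emotion class trained on MFCC frames, classifying a call segment by the class with
# the highest average log-likelihood. File paths and labels are placeholders.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, n_frames)
    return m.T                                             # one row per frame

def train_models(training_data, n_components=8):
    """training_data: dict {emotion_label: [wav paths]} -> dict of fitted GMMs."""
    models = {}
    for label, paths in training_data.items():
        X = np.vstack([mfcc_frames(p) for p in paths])
        models[label] = GaussianMixture(n_components=n_components,
                                        covariance_type="diag",
                                        random_state=0).fit(X)
    return models

def classify(path, models):
    X = mfcc_frames(path)
    scores = {label: gmm.score(X) for label, gmm in models.items()}  # mean log-likelihood
    return max(scores, key=scores.get)

# usage (hypothetical files):
# models = train_models({"anxiety": ["anx1.wav"], "anger": ["ang1.wav"], "calm": ["calm1.wav"]})
# print(classify("new_call_segment.wav", models))
```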

Keywords: acoustic analysis, complex emotions, emotion recognition, machine learning

Procedia PDF Downloads 395
236 Structural Monitoring of Externally Confined RC Columns with Inadequate Lap-Splices, Using Fibre-Bragg-Grating Sensors

Authors: Petros M. Chronopoulos, Evangelos Z. Astreinidis

Abstract:

A major issue in the structural assessment and rehabilitation of existing RC structures is the inadequate lap-splicing of the longitudinal reinforcement. Although prohibited by modern Design Codes, the practice of arranging lap-splices inside the critical regions of RC elements was commonly applied in the past. Today this practice is still the rule, at least for conventional new buildings. Therefore, a lot of relevant research is ongoing in many earthquake-prone countries. The rehabilitation of deficient lap-splices of RC elements by means of external confinement is widely accepted as the most efficient technique. If correctly applied, this versatile technique offers a limited increase of flexural capacity and a considerable increase of local ductility and of axial and shear capacities. Moreover, this intervention affects neither the stiffness of the elements nor the dynamic characteristics of the structure. This technique has been extensively discussed and researched, contributing to a vast accumulation of technical and scientific knowledge that has been reported in relevant books, reports and papers, and included in recent Design Codes and Guides. These references mostly deal with modeling and redesign, covering both the enhanced (axial and) shear capacity (due to the additional external closed hoops or jackets) and the increased ductility (due to the confining action, preventing the unzipping of lap-splices and the buckling of continuous reinforcement). An analytical and experimental program devoted to RC members with lap-splices has been completed in the Lab. of RC/NTU of Athens/GR. This program aims at the proposal of a rational and safe theoretical model and the calibration of the relevant Design Codes’ provisions. Tests on forty-two (42) full-scale specimens, covering mostly beams and columns (not walls), strengthened or not, with adequate or inadequate lap-splices, have already been performed and evaluated. In this paper, the results of twelve (12) specimens under fully reversed cyclic actions are presented and discussed. In eight (8) specimens the lap-splices were inadequate (splicing length of 20 or 30 bar diameters), and they were retrofitted before testing by means of additional external confinement. The two (2) most commonly applied confining materials were used in this study, namely steel and FRPs. More specifically, jackets made of CFRP wraps or light cages made of mild steel were applied. The main parameters of these tests were (i) the degree of confinement (internal and external), and (ii) the length of the lap-splices, equal to 20, 30 or 45 bar diameters. The tests were thoroughly instrumented and monitored by means of conventional (LVDTs, strain gages, etc.) and innovative (optic fibre-Bragg-grating) sensors. This allowed for a thorough investigation of the most influential design parameter, namely the hoop-stress developed in the confining material. Based on these test results and on comparisons with the provisions of modern Design Codes, it could be argued that shorter (than the normative) lap-splices, commonly found in old structures, could still be effective and safe (at least for lengths above an absolute minimum), depending on the required ductility, if a properly arranged and adequately detailed external confinement is applied.
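As an illustration of how a fibre-Bragg-grating reading can be related to the hoop stress discussed above, the following minimal sketch converts a Bragg wavelength shift to strain via the standard relation Δλ/λ = (1 − p_e)·ε and then to stress; the photo-elastic coefficient and elastic modulus are typical assumed values, not the study's calibration.

```python
# Minimal sketch: converting a fibre-Bragg-grating wavelength shift to hoop strain and
# stress in the confining material, using the standard relation
# d(lambda)/lambda = (1 - p_e) * strain. Material constants are typical assumed values.
def hoop_stress_from_fbg(delta_lambda_nm, base_lambda_nm=1550.0,
                         p_e=0.22, E_confinement_gpa=200.0):
    """Returns (strain, hoop stress in MPa), ignoring temperature compensation."""
    strain = (delta_lambda_nm / base_lambda_nm) / (1.0 - p_e)
    stress_mpa = E_confinement_gpa * 1e3 * strain      # GPa -> MPa
    return strain, stress_mpa

# example: a 1.2 nm shift on a 1550 nm grating bonded to a mild-steel jacket
strain, sigma = hoop_stress_from_fbg(1.2)
print(f"hoop strain = {strain:.2e}, hoop stress ~ {sigma:.0f} MPa")
```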

Keywords: concrete, fibre-Bragg-grating sensors, lap-splices, retrofitting / rehabilitation

Procedia PDF Downloads 249
235 Vertebral Artery Dissection Complicating Pregnancy and Puerperium: Case Report and Review of the Literature

Authors: N. Reza Pour, S. Chuah, T. Vo

Abstract:

Background: Vertebral artery dissection (VAD) is a rare complication of pregnancy. It can occur spontaneously or following a traumatic event. The pathogenesis is unclear. Predisposing factors include chronic hypertension, Marfan's syndrome, fibromuscular dysplasia, vasculitis and cystic medial necrosis. Physiological changes of pregnancy have also been proposed as potential mechanisms of injury to the vessel wall. The clinical presentation varies, and it can present as a headache, neck pain, diplopia, a transient ischaemic attack, or an ischaemic stroke. Isolated cases of VAD in pregnancy and the puerperium have been reported in the literature. One case was found to have a posterior circulation stroke as a result of bilateral VAD, and labour was induced at 37 weeks gestation for preeclampsia. Another patient at 38 weeks had severe neck pain that persisted after induction for elevated blood pressure, and arteriography showed right VAD postpartum. A single case of lethal VAD in pregnancy with subsequent massive subarachnoid haemorrhage has been reported, confirmed by autopsy. Case Presentation: We report two cases of vertebral artery dissection in pregnancy. The first patient was a 32-year-old primigravida who presented at the 38th week of pregnancy with the onset of early labour and a blood pressure (BP) of 130/70 on arrival. After 2 hours, the patient developed a severe headache with blurry vision, and the BP was 238/120. Despite treatment with an intravenous antihypertensive, she had an eclamptic fit. Magnesium sulfate was started, and an emergency Caesarean section was performed under general anaesthesia. On the second day after the operation, she developed left-sided neck pain. Magnetic resonance imaging (MRI) angiography confirmed a short-segment left vertebral artery dissection at the level of C3. The patient was treated with aspirin and remained stable without any neurological deficit. The second patient was a 33-year-old primigravida who was admitted to the hospital at 36 weeks gestation with a BP of 155/105, constant headache and visual disturbances. She was medicated with an oral antihypertensive agent. On day 4, she complained of right-sided neck pain. An MRI angiogram revealed a short-segment dissection of the right vertebral artery at the C2-3 level. The pregnancy was terminated on the same day by emergency Caesarean section, and anticoagulation was started subsequently. Post-operative recovery was complicated by a rectus sheath haematoma requiring evacuation. She was discharged home on aspirin without any neurological sequelae. Conclusion: Because of the collateral circulation, unilateral vertebral artery dissections may go unrecognized and may be more common than suspected. The outcome for most patients is benign, reflecting the adequacy of the collateral circulation in young patients. Spontaneous VAD is usually treated with anticoagulation or antiplatelet therapy for a minimum of 3-6 months to prevent future ischaemic events, allowing the dissection to heal on its own. We had two cases of VAD in the context of hypertensive disorders of pregnancy with an acceptable outcome. A high level of vigilance is required, particularly with preeclamptic patients presenting with head or neck pain, to allow an early diagnosis. This is because, as we hypothesize, early and aggressive management of vertebral artery dissection may potentially prevent further complications.

Keywords: eclampsia, preeclampsia, pregnancy, Vertebral Artery Dissection

Procedia PDF Downloads 275
234 Detection of Abnormal Process Behavior in Copper Solvent Extraction by Principal Component Analysis

Authors: Kirill Filianin, Satu-Pia Reinikainen, Tuomo Sainio

Abstract:

Frequent measurements of product stream quality create a data overload that becomes more and more difficult to handle. In the current study, plant history data with multiple variables were successfully treated by principal component analysis to detect abnormal process behavior, particularly in copper solvent extraction. The multivariate model is based on the concentration levels of the main process metals recorded by the industrial on-stream x-ray fluorescence analyzer. After mean-centering and normalization of the concentration data set, a two-dimensional multivariate model based on the principal component analysis algorithm was constructed. Normal operating conditions were defined through control limits assigned to squared score values on the x-axis and to residual values on the y-axis. 80 percent of the data set was taken as the training set, and the multivariate model was tested with the remaining 20 percent of the data. Model testing showed successful application of the control limits to detect abnormal behavior of the copper solvent extraction process as early warnings. Compared to the conventional technique of analyzing one variable at a time, the proposed model allows on-line detection of a process failure using information from all process variables simultaneously. Complex industrial equipment combined with advanced mathematical tools may be used for on-line monitoring of both process stream composition and final product quality. Defining the normal operating conditions of the process supports reliable decision making in the process control room. Thus, industrial x-ray fluorescence analyzers equipped with an integrated data processing toolbox allow more flexibility in copper plant operation. The additional multivariate process control and monitoring procedures are recommended to be applied separately for the major components and for the impurities. Principal component analysis may be utilized not only to control the content of major elements in process streams, but also for continuous monitoring of plant feed. The proposed approach has potential in on-line instrumentation, providing a fast, robust and cheap application with automation abilities.
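The following minimal sketch (assuming scikit-learn, with synthetic data in place of the plant history) shows the structure of such a model: a two-component PCA fitted on an 80/20 split of scaled data, with empirical control limits on the squared scores (a Hotelling-type statistic) and on the residuals (the Q/SPE statistic) used to flag abnormal samples.

```python
# Minimal sketch (not the plant model): fit a two-component PCA on an 80/20 split of
# mean-centred, scaled concentration data and flag abnormal samples via control limits
# on the squared scores (T^2-like statistic) and on the residuals (Q / SPE statistic).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))              # stand-in for metal concentration history
X[-20:] += rng.normal(3, 1, size=(20, 6))  # simulate a process upset at the end

n_train = int(0.8 * len(X))
scaler = StandardScaler().fit(X[:n_train])
Z_train, Z_test = scaler.transform(X[:n_train]), scaler.transform(X[n_train:])

pca = PCA(n_components=2).fit(Z_train)

def statistics(Z):
    scores = pca.transform(Z)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)   # Hotelling-style T^2
    residual = Z - pca.inverse_transform(scores)
    q = np.sum(residual**2, axis=1)                            # Q / SPE
    return t2, q

t2_train, q_train = statistics(Z_train)
t2_lim, q_lim = np.percentile(t2_train, 99), np.percentile(q_train, 99)  # empirical limits

t2_test, q_test = statistics(Z_test)
alarms = (t2_test > t2_lim) | (q_test > q_lim)
print(f"{alarms.sum()} of {len(alarms)} test samples flagged as abnormal")
```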

Keywords: abnormal process behavior, failure detection, principal component analysis, solvent extraction

Procedia PDF Downloads 307
233 Reduction of the Risk of Secondary Cancer Induction Using VMAT for Head and Neck Cancer

Authors: Jalil ur Rehman, Ramesh C, Tailor, Isa Khan, Jahanzeeb Ashraf, Muhammad Afzal, Geofferry S. Ibbott

Abstract:

The purpose of this analysis is to estimate secondary cancer risks after VMAT compared to other modalities of head and neck radiotherapy (IMRT, 3DCRT). Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck phantom were acquired with a CT scanner and exported via DICOM to the treatment planning system (TPS). Treatment planning was done using four arcs (182-178 and 180-184, clockwise and anticlockwise) for volumetric modulated arc therapy (VMAT), nine fields (200, 240, 280, 320, 0, 40, 80, 120 and 160), as commonly used at the MD Anderson Cancer Center Houston, for intensity modulated radiation therapy (IMRT), and four fields for three-dimensional conformal radiation therapy (3DCRT). A TrueBeam linear accelerator with 6 MV photon energy was used for dose delivery, and dose calculation was done with the CC convolution algorithm with a prescription dose of 6.6 Gy. Planning Target Volume (PTV) coverage, mean and maximal doses, DVHs, and the volumes of OARs receiving more than 2 Gy and 3.8 Gy were calculated and compared. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic EBT2 film, respectively. Quality assurance of VMAT and IMRT was performed using the ArcCHECK method with a gamma index criterion of 3%/3 mm dose difference/distance to agreement (DD/DTA). PTV coverage was found to be 90.80%, 95.80% and 95.82% for 3DCRT, IMRT and VMAT, respectively. VMAT delivered the lowest maximal doses to the esophagus (2.3 Gy), brain (4.0 Gy) and thyroid (2.3 Gy) compared to all other studied techniques. In comparison, maximal doses for 3DCRT were found to be higher than for VMAT for all studied OARs, whereas IMRT delivered maximal doses higher by 26%, 5% and 26% for the esophagus, normal brain and thyroid, respectively, compared to VMAT. It was noted that the esophagus volume receiving more than 2 Gy was 3.6% for VMAT, 23.6% for IMRT and up to 100% for 3DCRT. Good agreement was observed between measured doses and those calculated with the TPS. The average relative standard errors (RSE) of three deliveries within eight TLD capsule locations were 0.9%, 0.8% and 0.6% for 3DCRT, IMRT and VMAT, respectively. The gamma analysis for all plans met the ±5%/3 mm criteria (over 90% passed), and the QA results were greater than 98%. The calculations of maximal doses and volumes of OARs suggest that the estimated risk of secondary cancer induction after VMAT is considerably lower than after IMRT and 3DCRT.
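As an illustration of the gamma-index criterion used for the plan QA above, here is a minimal one-dimensional sketch of a 3%/3 mm dose-difference/distance-to-agreement evaluation on toy dose profiles; it is a simplified illustration, not the ArcCHECK analysis software.

```python
# Minimal sketch: a simplified 1-D gamma-index evaluation illustrating the 3%/3 mm
# dose-difference / distance-to-agreement criterion used for plan QA (toy profiles only).
import numpy as np

def gamma_1d(ref_dose, eval_dose, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
    """ref_dose, eval_dose: 1-D dose profiles on the same grid. Returns gamma per point."""
    x = np.arange(len(ref_dose)) * spacing_mm
    norm = ref_dose.max()
    gammas = np.empty(len(ref_dose))
    for i, (xi, di) in enumerate(zip(x, ref_dose)):
        dist2 = ((x - xi) / dta_mm) ** 2                 # spatial term
        dose2 = ((eval_dose - di) / (dd * norm)) ** 2    # dose-difference term
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

# toy profiles: evaluated dose slightly shifted and scaled relative to the reference
x = np.linspace(-30, 30, 121)
ref = np.exp(-(x / 15.0) ** 2)
ev = 1.02 * np.exp(-((x - 1.0) / 15.0) ** 2)
g = gamma_1d(ref, ev, spacing_mm=0.5)
print(f"gamma pass rate (gamma <= 1): {100 * np.mean(g <= 1):.1f}%")
```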

Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD

Procedia PDF Downloads 506
232 The Yield of Neuroimaging in Patients Presenting to the Emergency Department with Isolated Neuro-Ophthalmological Conditions

Authors: Dalia El Hadi, Alaa Bou Ghannam, Hala Mostafa, Hana Mansour, Ibrahim Hashim, Soubhi Tahhan, Tharwat El Zahran

Abstract:

Introduction: Neuro-ophthalmological emergencies require prompt assessment and management to avoid vision- or life-threatening sequelae. Some require neuroimaging, most commonly CT and MRI of the brain. These can be over-used when not indicated, and their yield remains dependent on multiple factors relating to the clinical scenario. Methods: A retrospective cross-sectional study was conducted by reviewing the electronic medical records of patients presenting to the Emergency Department (ED) with isolated neuro-ophthalmologic complaints. For each patient, data were collected on the clinical presentation, whether neuroimaging was performed (and which type), and the result of neuroimaging. The performed neuroimaging was analysed and its yield determined. Results: A total of 211 patients were reviewed. The complaints or symptoms at presentation were: blurry vision, change in the visual field, transient vision loss, floaters, double vision, eye pain, eyelid droop, headache, dizziness, and others such as nausea or vomiting. In the ED, a total of 126 neuroimaging studies were performed. Ninety-four studies (74.6%) were normal, while 32 (25.4%) had relevant abnormal findings. Only two symptoms were significantly associated with abnormal imaging: blurry vision (p-value = 0.038) and visual field change (p-value = 0.014), while four physical exam findings were significantly associated with abnormal imaging: visual field defect (p-value = 0.016), abnormal pupil reactivity (p-value = 0.028), afferent pupillary defect (p-value = 0.018), and abnormal optic disc exam (p-value = 0.009). Conclusion: Risk indicators for abnormal neuroimaging in the setting of neuro-ophthalmological emergencies are blurred vision or changes in the visual field on history taking, and visual field irregularities, abnormal pupil reactivity with or without an afferent pupillary defect, or abnormal optic discs on physical examination. When present, these findings should sway the ED physician towards neuroimaging, but individualizing each case remains of utmost importance to prevent time-consuming, resource-draining, and sometimes unnecessary workup. The findings suggest a well-structured, patient-centered algorithm to be followed by ED physicians.
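As an illustration of how the association between a finding and abnormal imaging can be tested, the following minimal sketch (assuming SciPy) runs a chi-square test on a 2x2 contingency table; the counts are illustrative placeholders, not the study data.

```python
# Minimal sketch: testing whether a physical exam finding (e.g. an afferent pupillary
# defect) is associated with abnormal neuroimaging using a chi-square test on a 2x2
# contingency table. Counts below are illustrative only, not the study data.
from scipy.stats import chi2_contingency

# rows: finding present / absent; columns: abnormal imaging / normal imaging
table = [[12, 18],
         [20, 161]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("finding is a significant risk indicator for abnormal imaging")
```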

Keywords: emergency department, neuro-ophthalmology, neuroimaging, risk indicators

Procedia PDF Downloads 177
231 Pre-Cancerigene Injuries Related to Human Papillomavirus: Importance of Cervicography as a Complementary Diagnosis Method

Authors: Denise De Fátima Fernandes Barbosa, Tyane Mayara Ferreira Oliveira, Diego Jorge Maia Lima, Paula Renata Amorim Lessa, Ana Karina Bezerra Pinheiro, Cintia Gondim Pereira Calou, Glauberto Da Silva Quirino, Hellen Lívia Oliveira Catunda, Tatiana Gomes Guedes, Nicolau Da Costa

Abstract:

The aim of this study is to evaluate the use of digital cervicography (DC) in the diagnosis of precancerous lesions related to human papillomavirus (HPV). This is a cross-sectional, evaluative study with a quantitative approach, held in a health unit linked to the Pro-Deanship of Extension of the Federal University of Ceará, from July to August 2015, with a sample of 33 women. Data collection was conducted through interviews using a structured instrument. The technique used for DC followed the standardization of Franco (2005). Polymerase chain reaction (PCR) was performed to identify high-risk HPV genotypes. The DC images were evaluated and classified by three judges. The results of DC and PCR were classified as positive, negative or inconclusive. The data from the collection instruments were compiled and analyzed with the Statistical Package for the Social Sciences (SPSS) software using descriptive statistics and cross-tabulations. Sociodemographic, sexual and reproductive variables were analyzed through absolute frequencies (N) and their respective percentages (%). The kappa coefficient (κ) was applied to determine the agreement between the evaluators' DC reports and the PCR results, and also among the judges regarding the DC results. Pearson's chi-square test was used for the analysis of sociodemographic, sexual and reproductive variables against the PCR reports. Results were considered statistically significant at p<0.05. Ethical aspects of research involving human beings were respected, in accordance with Resolution 466/2012. Regarding the sociodemographic profile, the most prevalent ages, in equal proportion, were those belonging to the groups 21-30 and 41-50 years old (24.2% each). Brown skin color was reported most often (84.8%), and 96.9% had completed, or were still attending, primary or secondary school. 51.5% were married, 72.7% Catholic, 54.5% employed and 48.5% had an income between one and two minimum wages. As for the sexual and reproductive characteristics, heterosexual women predominated (93.9%), and most did not use condoms during sexual intercourse (72.7%). 51.5% had a previous history of sexually transmitted infection (STI), with HPV the most prevalent STI (76.5%). 57.6% did not use contraception, 78.8% underwent cervical cancer screening (PCCU) at an interval of one year or less, 72.7% had no family history of cervical cancer, 63.6% were multiparous and 97% were not vaccinated against HPV. DC showed a good level of agreement between raters (κ=0.542) and had a specificity of 77.8% and a sensitivity of 25% when its results were compared with PCR. Only the variable race showed a statistically significant association with PCR (p=0.042). DC had 100% acceptance amongst the women in the sample, suggesting that further studies of this method are feasible so that it can prove itself a viable technique. The DC positivity criteria were developed by nurses, and these professionals also perform PCCU in Brazil, which means that DC can be an important complementary diagnostic method to enhance the quality of these professionals' examinations.
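To illustrate the agreement and accuracy statistics used above, the following minimal sketch (assuming scikit-learn) computes Cohen's kappa between two judges and the sensitivity and specificity of DC against PCR as the reference; the labels are placeholders, not the study data.

```python
# Minimal sketch (assumes scikit-learn): inter-rater agreement between two judges on the
# digital cervicography reports via Cohen's kappa, plus sensitivity and specificity of DC
# against PCR as the reference standard. Labels below are placeholders, not the study data.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

judge_1 = ["pos", "neg", "neg", "pos", "neg", "neg", "pos", "neg"]
judge_2 = ["pos", "neg", "pos", "pos", "neg", "neg", "neg", "neg"]
print("kappa between judges:", round(cohen_kappa_score(judge_1, judge_2), 3))

dc_result = [1, 0, 0, 1, 0, 0, 1, 0]    # DC classification (1 = positive)
pcr_result = [1, 0, 1, 1, 0, 0, 0, 0]   # PCR reference
tn, fp, fn, tp = confusion_matrix(pcr_result, dc_result).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```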

Keywords: gynecological examination, human papillomavirus, nursing, papillomavirus infections, uterine neoplasms

Procedia PDF Downloads 299
230 Simultaneous Measurement of Wave Pressure and Wind Speed with the Specific Instrument and the Unit of Measurement Description

Authors: Branimir Jurun, Elza Jurun

Abstract:

The focus of this paper is the description of an instrument called 'Quattuor 45' and the definition of wave pressure measurement. Special attention is given to the measurement of wave pressure created by increasing wind speed, obtained with the 'Quattuor 45' instrument in the investigated area. The study begins with a review of theoretical considerations and numerous up-to-date investigations related to waves approaching the coast. The detailed schematic view of the instrument is complemented with pictures in ground plan and side view. Horizontal stability of the instrument is achieved by mooring, which relies on two concrete blocks. Vertical wave peak monitoring is ensured by one float above the instrument. The combination of horizontal stability and vertical wave peak monitoring makes it possible to create a representative database for wave pressure measurement. The instrument 'Quattuor 45' is named after the way the database is acquired: the electronic part of the instrument consists of the main 'Arduino' chip, its memory, four load cells with the appropriate modules, and an anemometer wind speed sensor. The 'Arduino' chip is programmed to store two readings from each load cell and two readings from the anemometer on an SD card every second. The next part of the research is dedicated to data processing. All measured results are stored automatically in the database, after which detailed processing is carried out in MS Excel. The result of the wave pressure measurement is expressed in the unit kN/m². This paper also suggests a graphical presentation of the results as a multi-line graph, with wave pressure presented on the left vertical axis, wind speed on the right vertical axis, and the time of measurement displayed on the horizontal axis. The paper proposes an algorithm for wind speed measurements, showing the results for two characteristic winds in the Adriatic Sea, called 'Bura' and 'Jugo'. The first of them is a northern wind that reaches high speeds, causing low and extremely steep waves, where the wave pressure is relatively weak. On the other hand, the southern wind 'Jugo' has a lower speed than the northern wind, but due to its constant duration and sustained speed, it causes extremely long and high waves that generate extremely high wave pressure.
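As an illustration of the post-processing described above, the following minimal sketch converts per-second load-cell force readings to wave pressure in kN/m² using an assumed sensing-plate area and plots wave pressure on the left vertical axis against wind speed on the right vertical axis; the sensing area and data are synthetic assumptions, not the 'Quattuor 45' firmware or measurements.

```python
# Minimal sketch of the post-processing described above: convert per-second load-cell
# readings to wave pressure in kN/m^2 using an assumed sensing-plate area, then plot wave
# pressure (left axis) against wind speed (right axis) over time. Data here are synthetic.
import numpy as np
import matplotlib.pyplot as plt

SENSOR_AREA_M2 = 0.20          # assumed exposed plate area of the instrument
seconds = np.arange(0, 600)
force_kn = 0.4 + 0.3 * np.sin(2 * np.pi * seconds / 8) + 0.05 * np.random.randn(600)
wind_ms = 12 + 3 * np.sin(2 * np.pi * seconds / 120) + 0.5 * np.random.randn(600)

wave_pressure = force_kn.clip(min=0) / SENSOR_AREA_M2     # kN/m^2

fig, ax_left = plt.subplots()
ax_left.plot(seconds, wave_pressure, color="tab:blue", label="wave pressure")
ax_left.set_xlabel("time [s]")
ax_left.set_ylabel("wave pressure [kN/m²]")

ax_right = ax_left.twinx()                                  # second y-axis on the right
ax_right.plot(seconds, wind_ms, color="tab:red", label="wind speed")
ax_right.set_ylabel("wind speed [m/s]")

fig.tight_layout()
plt.show()
```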

Keywords: instrument, measuring unit, wave pressure metering, wind speed measurement

Procedia PDF Downloads 196
229 Laminar Periodic Vortex Shedding over a Square Cylinder in Pseudoplastic Fluid Flow

Authors: Shubham Kumar, Chaitanya Goswami, Sudipto Sarkar

Abstract:

Pseudoplastic (n < 1, n being the power index) fluid flow can be found in the food, pharmaceutical and process industries and has a very complex flow nature. To our knowledge, little research work has been done on this kind of flow, even at very low Reynolds numbers. In the present computation, we consider unsteady laminar flow over a square cylinder in a pseudoplastic flow environment. For Newtonian fluid flow, the laminar vortex shedding range lies between Re = 47-180. In this problem, we consider Re = 100 (Re = U∞a/ν, where U∞ is the free stream velocity, a is the side of the cylinder and ν is the kinematic viscosity of the fluid). The pseudoplastic range has been chosen from close to Newtonian (n = 0.8) to very high pseudoplasticity (n = 0.1). The flow domain is constructed using Gambit 2.2.30, and this software is also used to generate the mesh and to impose the boundary conditions. In all cases, the domain size is 36a × 16a with 280 × 192 grid points in the streamwise and flow-normal directions, respectively. The domain and the grid points were selected after a thorough grid-independence study at n = 1.0. Fine and equal grid spacing is used close to the square cylinder to capture the upper and lower shear layers shed from the cylinder. Away from the cylinder, the grid is unequal in size and stretched in all directions. Velocity inlet (u = U∞), pressure outlet (Neumann condition), and symmetry (free-slip boundary condition, du/dy = 0, v = 0) at the upper and lower domain boundaries are used for this simulation. A wall boundary (u = v = 0) is applied on the square cylinder surface. The fully conservative 2-D unsteady Navier-Stokes equations are discretized and then solved by Ansys Fluent 14.5 to understand the flow nature. The SIMPLE algorithm, implemented in the finite volume method, is selected for this purpose, as it is the default solver in Fluent. The result obtained for Newtonian fluid flow agrees well with previous work, supporting Fluent's usefulness in academic research. A detailed analysis of the instantaneous and time-averaged flow fields is presented for both Newtonian and pseudoplastic fluid flow. It has been observed that the drag coefficient increases continuously as n is reduced. Also, the vortex shedding phenomenon changes at n = 0.4 due to flow instability. These are some of the notable findings for the laminar periodic vortex shedding regime in a pseudoplastic flow environment.
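For readers unfamiliar with the power index n referred to above, the following minimal sketch evaluates the Ostwald-de Waele power-law model, showing how the apparent viscosity falls with shear rate when n < 1; the consistency index and shear-rate range are assumed for illustration and are not tied to the simulated cases.

```python
# Minimal sketch: the power-law (Ostwald-de Waele) model used for pseudoplastic fluids,
# showing how the apparent viscosity drops with shear rate when the power index n < 1.
# The consistency index K and the shear-rate range are assumed for illustration only.
import numpy as np

def apparent_viscosity(shear_rate, K=1.0, n=0.5):
    """mu_app = K * gamma_dot**(n - 1); n < 1 gives shear-thinning behaviour."""
    return K * shear_rate ** (n - 1.0)

shear_rates = np.logspace(-1, 3, 5)          # 0.1 ... 1000 1/s
for n in (1.0, 0.8, 0.4, 0.1):               # Newtonian down to strongly pseudoplastic
    mu = apparent_viscosity(shear_rates, n=n)
    print(f"n = {n:.1f}:", np.array2string(mu, precision=3))
```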

Keywords: Ansys Fluent, CFD, periodic vortex shedding, pseudoplastic fluid flow

Procedia PDF Downloads 203
228 Developing Optical Sensors with Application of Cancer Detection by Elastic Light Scattering Spectroscopy

Authors: May Fadheel Estephan, Richard Perks

Abstract:

Context: Cancer is a serious health concern that affects millions of people worldwide. Early detection and treatment are essential for improving patient outcomes. However, current methods for cancer detection have limitations, such as low sensitivity and specificity. Research Aim: The aim of this study was to develop an optical sensor for cancer detection using elastic light scattering spectroscopy (ELSS). ELSS is a noninvasive optical technique that can be used to characterize the size and concentration of particles in a solution. Methodology: An optical probe was fabricated with a 100-μm-diameter core and a 132-μm centre-to-centre separation. The probe was used to measure the ELSS spectra of polystyrene spheres with diameters of 2, 0.8, and 0.413 μm. The spectra were then analysed to determine the size and concentration of the spheres. Findings: The results showed that the optical probe was able to differentiate between the three different sizes of polystyrene spheres. The probe was also able to detect the presence of polystyrene spheres in suspension at concentrations as low as 0.01%. Theoretical Importance: The results of this study demonstrate the potential of ELSS for cancer detection. ELSS is a noninvasive technique that can be used to characterize the size and concentration of cells in a tissue sample. This information can be used to identify cancer cells and assess the stage of the disease. Data Collection: The data for this study were collected by measuring the ELSS spectra of polystyrene spheres with different diameters. The spectra were collected using a spectrometer and a computer. Analysis Procedures: The ELSS spectra were analysed using a software program to determine the size and concentration of the spheres. The program used a mathematical algorithm to fit the spectra to a theoretical model. Question Addressed: The question addressed by this study was whether ELSS could be used to detect cancer cells. The results showed that ELSS could differentiate between particles of different sizes, suggesting that it could be used to detect cancer cells. Conclusion: The findings of this research show the utility of ELSS in the early identification of cancer. ELSS is a noninvasive method for characterizing the number and size of cells in a tissue sample. This information can be employed to identify cancer cells and determine the stage of the disease. Further research is needed to evaluate the clinical performance of ELSS for cancer detection.

Keywords: elastic light scattering spectroscopy, polystyrene spheres in suspension, optical probe, fibre optics

Procedia PDF Downloads 78
227 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators

Authors: Guenther Schuh, Michael Riesener, Frederic Diels

Abstract:

Nowadays, manufacturing companies are faced with the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions. So far, some of the functional requirements remain unknown until late stages of product development. One way to handle these uncertainties is the highly iterative product development (HIP) approach. By structuring the development project as a highly iterative process, this method provides customer-oriented and marketable products. There are first approaches for combined, hybrid models comprising deterministic-normative methods like the Stage-Gate process and empirical-adaptive development methods like SCRUM on a project management level. However, the question of which development scopes can preferably be realized with either empirical-adaptive or deterministic-normative approaches remains almost unconsidered. In this context, a development scope constitutes a self-contained section of the overall development objective. Therefore, this paper focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors such as a company's technological capability, the prototype manufacturability and the potential solution space, as well as external factors such as market accuracy, relevance and volatility, are analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First of all, it is necessary to rate each internal and external factor in terms of its importance for the overall development task. Secondly, each requirement has to be evaluated for every single internal and external factor according to its suitability for empirical-adaptive development. Finally, the totals of the internal and external sides are combined into the Agile-Indicator. Thus, the Agile-Indicator constitutes a company-specific and application-related criterion on which the allocation of empirical-adaptive and deterministic-normative development scopes can be based. In a last step, this indicator is used for a specific clustering of development scopes by applying the fuzzy c-means (FCM) clustering algorithm, as sketched below. The FCM method determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact of the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements that are subject to market uncertainty into empirical-adaptive or deterministic-normative development scopes.
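The sketch below is a minimal illustration of the two steps just described: importance-weighted internal and external factor ratings are combined into an Agile-Indicator per requirement, and the requirements are then clustered on that indicator with a small hand-written fuzzy c-means. All factor names, weights and ratings are illustrative placeholders, not the authors' data.

```python
# Minimal sketch: (i) combine importance-weighted internal and external factor ratings
# into an Agile-Indicator per requirement, and (ii) cluster the requirements on that
# indicator with a small hand-written fuzzy c-means. All values are placeholders.
import numpy as np

# step 1: Agile-Indicator = weighted internal ratings + weighted external ratings
internal_weights = np.array([0.4, 0.3, 0.3])   # e.g. technology ability, prototype manufacturability, solution space
external_weights = np.array([0.5, 0.3, 0.2])   # e.g. market accuracy, relevance, volatility
# ratings per requirement (rows) and factor (columns), on an assumed 1..5 suitability scale
internal_ratings = np.array([[5, 4, 5], [2, 2, 1], [4, 5, 3], [1, 2, 2], [3, 4, 4]])
external_ratings = np.array([[4, 5, 5], [1, 2, 2], [5, 4, 4], [2, 1, 2], [4, 3, 5]])
agile_indicator = internal_ratings @ internal_weights + external_ratings @ external_weights

# step 2: fuzzy c-means on the indicator (c = 2: empirical-adaptive vs deterministic-normative)
def fuzzy_c_means(x, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(x))          # membership matrix (n, c)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)           # weighted cluster centres
        dist = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (dist ** (2 / (m - 1)))                # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

centers, membership = fuzzy_c_means(agile_indicator)
print("cluster centres:", np.round(centers, 2))
print("memberships:\n", np.round(membership, 2))
```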

Keywords: agile, highly iterative development, agile-indicator, product development

Procedia PDF Downloads 246
226 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information; for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data are used. Usually, however, the proportion of complete data is rather small, which leads to most information being neglected. Moreover, the complete data might be strongly distorted. In addition, the reason that data are missing might itself also contain information, which is ignored with that approach. An interesting issue is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values compared to using the usually small share of complete data (baseline). It is also interesting to see how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, or neural network techniques, are applied. By training the model iteratively on the imputed data and thereby including the information of all data in the model, the distortion of the first training set (the complete data) vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After having found the optimal parameter set for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other, whereas the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Also, demand estimates derived from the whole data set are much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
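As an illustration of the imputation-plus-survival-function workflow described above, the following minimal sketch (assuming scikit-learn) imputes missing maximum-price entries with a KNN imputer and compares the empirical willingness-to-pay survival function estimated from the full imputed data with the complete-cases baseline; all data are synthetic placeholders.

```python
# Minimal sketch (assumes scikit-learn): impute missing search-subscription fields with a
# KNN imputer, then compare the empirical willingness-to-pay survival function
# S(p) = share of searchers willing to pay at least p, estimated from the full imputed
# data versus from complete cases only. All data below are synthetic placeholders.
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
n = 2000
rooms = rng.integers(1, 6, n).astype(float)
max_price = 2000 + 600 * rooms + rng.normal(0, 300, n)   # CHF/month, hypothetical
X = np.column_stack([rooms, max_price])

# knock out 40% of the price entries to mimic incomplete subscriptions
missing = rng.random(n) < 0.4
X_obs = X.copy()
X_obs[missing, 1] = np.nan

X_imputed = KNNImputer(n_neighbors=10).fit_transform(X_obs)

def survival(prices, grid):
    return np.array([(prices >= p).mean() for p in grid])

grid = np.linspace(2000, 6000, 9)
s_complete = survival(X_obs[~missing, 1], grid)   # baseline: complete cases only
s_imputed = survival(X_imputed[:, 1], grid)       # all cases, missing prices imputed
for p, sc, si in zip(grid, s_complete, s_imputed):
    print(f"price {p:5.0f}: complete-cases S={sc:.2f}  imputed S={si:.2f}")
```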

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 285
225 Intracranial Hypotension: A Brief Review of the Pathophysiology and Diagnostic Algorithm

Authors: Ana Bermudez de Castro Muela, Xiomara Santos Salas, Silvia Cayon Somacarrera

Abstract:

The aim of this review is to explain what intracranial hypotension is and what its main causes are, and to outline the diagnostic management in the different clinical situations, with an understanding of the radiological findings and the physiopathological substrate. An approach to the diagnostic management is presented: which guidelines to follow, which tests are available, and what the typical findings are. We review the myelo-CT and myelo-MRI studies in patients with suspected CSF fistula or hypotension of unknown cause during the last 10 years in three centers. Signs of intracranial hypotension (subdural hygromas/hematomas, pachymeningeal enhancement, venous sinus engorgement, pituitary hyperemia, and lowering of the brain) that are evident on baseline CT and MRI are also sought. Intracranial hypotension is defined as an opening pressure lower than 6 cmH₂O. It is a relatively rare disorder with an annual incidence of 5 per 100,000 and a female-to-male ratio of 2:1. The clinical hallmark is an orthostatic headache, defined as the development or aggravation of headache when patients move from a supine to an upright position, which typically relieves or disappears after lying down. The etiology is a decrease in the amount of cerebrospinal fluid (CSF), usually through loss of it, either spontaneous or secondary (post-traumatic, post-surgical, systemic disease, post-lumbar puncture, etc.), and rhinorrhea and/or otorrhea may exist. The pathophysiological mechanisms of CSF hypotension and hypertension are interrelated, as a situation of hypertension may lead to hypotension secondary to spontaneous CSF leakage. The diagnostic management of intracranial hypotension in our center includes, in the case of spontaneous hypotension without rhinorrhea and/or otorrhea and according to necessity, a range of available tests, performed from less to more complex: cerebral CT, cerebral and spine MRI without contrast, and CT/MRI with intrathecal contrast. In a situation of intracranial hypotension with rhinorrhea/otorrhea, a sample can be obtained for the detection of β2-transferrin, which is physiologically found in the CSF, together with sinus CT and cerebral MRI including constructive interference in steady state (CISS) sequences. If necessary, cisternography studies are performed to locate the exact point of leakage. It is important to emphasize the significance of myelo-CT/MRI to establish the diagnosis and location of the CSF leak, which is indispensable for therapeutic planning (whether surgical or not) in patients with more than one lesion or doubts in the baseline tests.

Keywords: cerebrospinal fluid, neuroradiology brain, magnetic resonance imaging, fistula

Procedia PDF Downloads 125
224 Exploring Behavioural Biases among Indian Investors: A Qualitative Inquiry

Authors: Satish Kumar, Nisha Goyal

Abstract:

In the stock market, individual investors exhibit different kinds of behaviour. Traditional finance is built on the notion of 'homo economicus', which states that humans always make perfectly rational choices to maximize their wealth and minimize risk. That is, traditional finance is concerned with how investors should behave rather than with how actual investors behave. Behavioural finance provides the explanation for this phenomenon. Although finance has been studied for thousands of years, behavioural finance is an emerging field that combines behavioural or psychological aspects with conventional economic and financial theories to explain how emotions and cognitive factors influence investors’ behaviour. These emotions and cognitive factors are known as behavioural biases. Because of these biases, investors make irrational investment decisions. Besides the emotional and cognitive factors, the social influence of the media as well as of friends, relatives, and colleagues also affects investment decisions. Psychological factors influence individual investors’ investment decision making, but few studies have used qualitative methods to understand these factors. The aim of this study is to explore the behavioural factors or biases that affect individuals’ investment decision making. For the purpose of this exploratory study, an in-depth interview method was used because it provides much more exhaustive information and a relaxed atmosphere in which people feel more comfortable providing information. Twenty investment advisors with a minimum of 5 years’ experience in securities firms were interviewed. In this study, thematic content analysis was used to analyse the interview transcripts. The thematic content analysis process involves analysis of transcripts, coding, and identification of themes from the data. Based on the analysis, the statements of the advisors were categorized into various themes. Past market returns and volatility; preference for safe returns; tendency to believe they are better than others; tendency to divide their money into different accounts/assets; tendency to hold on to loss-making assets; preference to invest in familiar securities; tendency to believe that past events were predictable; tendency to rely on a reference point; tendency to rely on other sources of information; tendency to regret past decisions; tendency to be more sensitive to losses than to gains; tendency to rely on their own skills; tendency to buy rising stocks with the expectation that the rise will continue, etc., are some of the major concerns expressed by the experts about investors. The findings of the study revealed 13 biases present in Indian investors: overconfidence bias, disposition effect, familiarity bias, framing effect, anchoring bias, availability bias, self-attribution bias, representativeness, mental accounting, hindsight bias, regret aversion, loss aversion, and herding/media bias. These biases have a negative connotation because they produce a distortion in the calculation of an outcome. These biases are classified under three categories: cognitive errors, emotional biases, and social interaction. The findings of this study may assist both financial service providers and researchers in understanding the various psychological biases of individual investors in investment decision making. Additionally, individual investors will become aware of these behavioural biases, which will aid them in making sensible and efficient investment decisions.

Keywords: financial advisors, individual investors, investment decisions, psychological biases, qualitative thematic content analysis

Procedia PDF Downloads 167
223 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment

Authors: Ella Sèdé Maforikan

Abstract:

Accurate land cover mapping is essential for effective environmental monitoring and natural resource management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, which mitigate the computational burden associated with traditional land cover classification methods. By eliminating the need to download individual satellite images and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise land cover maps in a timely manner but also demonstrates the capability of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources to achieve superior accuracy. As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
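
A minimal sketch of this kind of workflow in the GEE Python API might look as follows; the dataset ID, band selection, rectangle geometry, and training-point asset path are placeholders chosen for illustration, not the authors' actual inputs or parameter settings.

```python
import ee
ee.Initialize()

# Rough stand-in for the Beterou catchment; a real study would use the catchment boundary.
region = ee.Geometry.Rectangle([2.0, 9.0, 2.8, 9.8])

# Sentinel-2 surface reflectance composite over the study period (bands are illustrative).
s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterBounds(region)
      .filterDate('2020-06-01', '2021-03-31')
      .median()
      .select(['B2', 'B3', 'B4', 'B8']))

# Hypothetical asset of digitized reference samples carrying a 'class' property (0-4).
training_points = ee.FeatureCollection('users/example/beterou_training')

samples = s2.sampleRegions(collection=training_points, properties=['class'], scale=10)
classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=samples, classProperty='class', inputProperties=s2.bandNames())
classified = s2.classify(classifier)
```

Adding terrain features would amount to stacking slope and elevation bands onto the composite before sampling, which is where the reported accuracy gain comes from.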

Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment

Procedia PDF Downloads 62
222 CT Images Based Dense Facial Soft Tissue Thickness Measurement by Open-source Tools in Chinese Population

Authors: Ye Xue, Zhenhua Deng

Abstract:

Objectives: Facial soft tissue thickness (FSTT) data can be obtained from CT scans by measuring the face-to-skull distances at sparsely distributed anatomical landmarks manually located on the face and skull. However, automated measurement at dense points using 3D facial and skull models in open-source software has become a viable option due to the development of computer-assisted imaging technologies. By utilizing dense FSTT information, it becomes feasible to generate plausible automated facial approximations. Therefore, establishing a comprehensive, detailed, and densely calculated FSTT database is crucial for enhancing the accuracy of facial approximation. Materials and methods: This study utilized head CT scans from 250 Chinese adults of Han ethnicity, with 170 participants originally born and residing in northern China and 80 participants in southern China. The age of the participants ranged from 14 to 82 years, and all samples were divided into five non-overlapping age groups. Additionally, the samples were divided into three categories based on BMI information. The 3D Slicer software was utilized to segment bone and soft tissue based on different Hounsfield Unit (HU) thresholds, and surface models of the face and skull were reconstructed for all samples from the CT data. The following procedures were performed using MeshLab: converting the face models into hollowed, cropped surface models and automatically measuring the Hausdorff distance (referred to as FSTT) between the skull and face models. Hausdorff point clouds were colorized based on depth value and exported as PLY files. A histogram of the depth distributions could be viewed and subdivided into smaller increments. All PLY files were visualized with the Hausdorff distance value of each vertex. Basic descriptive statistics (i.e., mean, maximum, minimum, standard deviation, etc.) and the distribution of FSTT were analyzed considering sex, age, BMI, and birthplace. Statistical methods employed included multiple regression analysis, ANOVA, and principal component analysis (PCA). Results: The distribution of FSTT is mainly influenced by BMI and sex, as further supported by the results of the PCA. Additionally, FSTT values exceeding 30 mm were found to be more sensitive to sex. Birthplace-related differences were observed in regions such as the forehead, orbital, mandibular, and zygoma regions. Specifically, there are distribution variances in the depth range of 20-30 mm, particularly in the mandibular region. Northern males exhibit thinner FSTT in the frontal region of the forehead compared to southern males, while females show fewer distribution differences between north and south, except for the zygoma region. The observed distribution variance in the orbital region could be attributed to differences in orbital size and shape. Discussion: This study provides a database of the distribution of FSTT in Chinese individuals and suggests that open-source tools perform well for FSTT measurement. By incorporating birthplace as an influential factor in the distribution of FSTT, a greater level of detail can be achieved in facial approximation.
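
A simplified stand-in for the dense face-to-skull measurement could be written as below; it approximates the per-vertex FSTT as the nearest-neighbour distance between vertex clouds (rather than MeshLab's point-to-surface Hausdorff filter), and the toy vertex arrays are invented for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def per_vertex_thickness(face_vertices, skull_vertices):
    """Approximate FSTT at each face vertex as the distance to the nearest skull vertex.

    Simplification: real pipelines project onto the skull triangles rather than
    using vertices only, so this slightly overestimates short distances on coarse meshes.
    """
    tree = cKDTree(skull_vertices)
    distances, _ = tree.query(face_vertices)
    return distances

# Toy data (in mm); real use would load the reconstructed face and skull PLY surfaces.
face = np.random.rand(1000, 3) * 100.0
skull = face - np.array([0.0, 0.0, 5.0])   # skull surface placed roughly 5 mm behind the face
fstt = per_vertex_thickness(face, skull)
print(fstt.mean(), fstt.max(), fstt.min(), fstt.std())   # basic descriptive statistics
```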

Keywords: forensic anthropology, forensic imaging, cranial facial reconstruction, facial soft tissue thickness, CT, open-source tool

Procedia PDF Downloads 57
221 Suggestions to the Legislation about Medical Ethics and Ethics Review in the Age of Medical Artificial Intelligence

Authors: Xiaoyu Sun

Abstract:

In recent years, the rapid development of Artificial Intelligence (AI) has greatly advanced medicine, pharmaceuticals, and other related fields. The medical research and development of artificial intelligence by scientific and commercial organizations is on the fast track. Ethics review is one of the critical registration procedures required to get products approved and launched. However, the SOPs for ethics review are not enough to guide the healthy and rapid development of artificial intelligence in healthcare in China. The Ethical Review Measures for Biomedical Research Involving Human Beings was enacted by the National Health Commission of the People's Republic of China (NHC) on December 1st, 2016. However, from a legislative design perspective, it was neither updated in a timely manner nor in line with international trends in AI development. Therefore, it was a welcome step that the NHC published a consultation paper on an updated version on March 16th, 2021. Based on the most recent laws and regulations in the United States and the EU, and on in-depth interviews with 11 subject matter experts in China, including lawmakers, regulators, key members of ethics review committees, heads of Regulatory Affairs in the SaMD industry, and data scientists, several suggestions are proposed on top of the updated version. Although the new version indicates that ethics review committees need to be created at the national, provincial, and individual-institute levels, the review authority of each level is not clarified. The suggestion is that the precise scope of review authority for each level should be identified based on a risk analysis and management model: complicated leading-edge technologies, such as gene editing, should be reviewed by the national ethics review committee, while it would be the job of individual-institute ethics review committees to review and approve clinical studies with less risk, such as an innovative cream to treat acne. Furthermore, to standardize the research and development of artificial intelligence in healthcare in the age of AI, clearer guidance should be given on data security at the data, algorithm, and application layers in the process of ethics review. In addition, transparency and responsibility, two of the six principles in the Rome Call for AI Ethics, could be further strengthened in the updated version. It is the shared goal of all countries to manage and develop AI well so that it benefits human beings. By learning from countries with more experience, China could become one of the most advanced countries in artificial intelligence in healthcare.

Keywords: biomedical research involving human beings, data security, ethics committees, ethical review, medical artificial intelligence

Procedia PDF Downloads 166
220 i2kit: A Tool for Immutable Infrastructure Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservice architectures are increasingly used in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency, and business logic time to market. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution, and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing, or data persistence (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos, or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer and affects running applications, specific expertise is required for ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set into other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing, and persistence. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open-source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer implies more significant disadvantages. Resource allocation is greatly improved by using linuxkit, which has a very small footprint (around 35 MB). Also, the system is more secure, since linuxkit installs the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
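
To make the idea of compiling a declarative definition into an immutable deployment concrete, here is a heavily simplified Python sketch; the definition schema, resource naming, and single-instance template are hypothetical and much narrower than the actual i2kit tool, which also builds AMIs with linuxkit and wires in a load balancer.

```python
import json

# Hypothetical microservice definition; the real i2kit input format may differ.
service_definition = {
    "name": "orders",
    "containers": [{"image": "example/orders:1.0"}],   # the "pod" of the microservice
    "instance_type": "t3.micro",
}

def to_cloudformation(definition, ami_id):
    """Render an immutable-infrastructure deployment as a CloudFormation template.

    Every release supplies a fresh machine image (ami_id would come from a
    linuxkit build), so instances are replaced rather than mutated in place.
    """
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            f"{definition['name'].capitalize()}Instance": {
                "Type": "AWS::EC2::Instance",
                "Properties": {"ImageId": ami_id, "InstanceType": definition["instance_type"]},
            }
        },
    }

print(json.dumps(to_cloudformation(service_definition, ami_id="ami-0123456789abcdef0"), indent=2))
```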

Keywords: container, deployment, immutable infrastructure, microservice

Procedia PDF Downloads 177
219 Relationship of Macro-Concepts in Educational Technologies

Authors: L. R. Valencia Pérez, A. Morita Alexander, Peña A. Juan Manuel, A. Lamadrid Álvarez

Abstract:

This research reflects on and identifies the explanatory variables involved in educational technology and the relationships between them, all of them encompassed in four macro-concepts: cognitive inequality, economy, food, and language. These give the guideline for a more detailed knowledge of educational systems, communication and equipment, physical space, and teachers; all of them, interacting with each other, give rise to what is called educational technology management. These elements contribute to a very specific knowledge of communications equipment, networks and computer equipment, systems, and content repositories. The intention is to establish the importance of knowing the global environment in the transfer of knowledge to poor countries, so that it does not diminish their capacity to be authentic and to preserve their cultures, their languages or dialects, their hierarchies, and their real needs; in short, to respect the customs of the different towns, villages, or cities that are intended to be reached through the use of internationally agreed professional educational technologies. The methodology used in this research is analytical-descriptive, which makes it possible to explain each of the variables that, in our opinion, must be taken into account in order to achieve an optimal incorporation of educational technology in a model that gives results in the medium term. The idea is that, in an encompassing way, the concepts are integrated into others with greater coverage until reaching macro-concepts of national scope that serve as elements of conciliation in the different federal and international reforms. At the center of the model is educational technology, which is directly related to the concepts contained in factors such as the educational system, communication and equipment, spaces, and teachers, which are globally immersed in the macro-concepts of cognitive inequality, economics, food, and language. One of the major contributions of this article is to formulate this idea as an algorithm that allows the indicator to be evaluated as objectively as possible, drawing other indicators from recognized international entities such as the OECD in the area of the education systems studied, so that they are not influenced by particular political or interest-group pressures. This work opens the way for relating the entities involved, whether conceptual, procedural, or human, in order to clearly identify the convergence of their impact on the problem of education and how this relationship can contribute to an improvement; it also shows the possibility of reaching a comprehensive education reform for all.

Keywords: relationships macro-concepts, cognitive inequality, economics, alimentation and language

Procedia PDF Downloads 198
218 Analyzing Electromagnetic and Geometric Characterization of Building Insulation Materials Using the Transient Radar Method (TRM)

Authors: Ali Pourkazemi

Abstract:

The transient radar method (TRM) is a non-destructive method that was introduced by the authors a few years ago. TRM can be classified as a wave-based non-destructive testing (NDT) method that can be applied over a wide frequency range; nevertheless, it requires only a narrow band, anywhere from a few GHz to a few THz, depending on the application. As a time-of-flight and real-time method, TRM can measure the electromagnetic properties of the sample under test not only quickly and accurately, but also blindly. This means that it requires no prior knowledge of the sample under test. For multi-layer structures, TRM is not only able to detect changes related to any parameter within the multi-layer structure but can also measure the electromagnetic properties of each layer and its thickness individually. Although temperature, humidity, and general environmental conditions may affect the sample under test, they do not affect the accuracy of the blind TRM algorithm. In this paper, the electromagnetic properties as well as the thickness of individual building insulation materials, treated as single-layer structures, are measured experimentally. Finally, the correlation between the reflection coefficients and other technical parameters such as sound insulation, thermal resistance, thermal conductivity, compressive strength, and density is investigated. The samples studied are 30 cm x 50 cm, and their thickness varies from a few millimeters to 6 centimeters. The experiment is performed with both bistatic and differential hardware at 10 GHz. Since it is a narrow-band, free-space, real-time sensing system with high-speed computation for analysis, it has a wide range of potential applications, e.g., in the construction industry, rubber industry, piping industry, wind energy industry, automotive industry, biotechnology, food industry, pharmaceuticals, etc. Detection of metallic or plastic pipes, wires, etc., through or behind walls is a specific application for the construction industry.
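
For the time-of-flight aspect, the textbook relation between the two-way delay of the back-surface reflection and the layer thickness can be sketched as follows; the numbers are a hypothetical example and the formula is the standard single-layer relation, not the authors' blind TRM algorithm.

```python
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def layer_thickness(delta_t_s, eps_r):
    """Thickness of a dielectric layer from the two-way time-of-flight between its
    front- and back-surface reflections (lossless, single-layer approximation)."""
    return C0 * delta_t_s / (2.0 * eps_r ** 0.5)

# Hypothetical example: 0.5 ns delay in a material with relative permittivity 2.3
print(round(layer_thickness(0.5e-9, 2.3) * 1e3, 1), "mm")   # about 49 mm
```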

Keywords: transient radar method, blind electromagnetic geometrical parameter extraction technique, ultrafast nondestructive multilayer dielectric structure characterization, electronic measurement systems, illumination, data acquisition performance, submillimeter depth resolution, time-dependent reflected electromagnetic signal blind analysis method, EM signal blind analysis method, time domain reflectometer, microwave, millimeter wave frequencies

Procedia PDF Downloads 68
217 Short Association Bundle Atlas for Lateralization Studies from dMRI Data

Authors: C. Román, M. Guevara, P. Salas, D. Duclap, J. Houenou, C. Poupon, J. F. Mangin, P. Guevara

Abstract:

Diffusion Magnetic Resonance Imaging (dMRI) allows the non-invasive study of human brain white matter. From diffusion data, it is possible to reconstruct fiber trajectories using tractography algorithms. Our previous work consists of an automatic method for the identification of short association bundles of the superficial white matter (SWM), based on a whole-brain, inter-subject hierarchical clustering applied to a HARDI database. The method finds representative clusters of similar fibers, belonging to a group of subjects, according to a distance measure between fibers, using a non-linear registration (DTI-TK). The algorithm performs an automatic labeling based on the anatomy, defined by a cortex mesh parcellated with the FreeSurfer software. The clustering was applied to two independent groups of 37 subjects. The clusters resulting from both groups were compared using a restrictive threshold on the mean distance between each pair of bundles from different groups, in order to keep reproducible connections. In the left hemisphere, 48 reproducible bundles were found, while 43 bundles were found in the right hemisphere. An inter-hemispheric bundle correspondence was then applied. The symmetric horizontal reflection of the right bundles was calculated in order to obtain their position in the left hemisphere. Next, the intersection between similar bundles was calculated. Pairs of bundles with a fiber intersection percentage higher than 50% were considered similar. The similar bundles between both hemispheres were fused and symmetrized. We obtained 30 common bundles between hemispheres. An atlas was created with the resulting bundles and used to segment 78 new subjects from another HARDI database, using a distance threshold between 6 and 8 mm according to bundle length. Finally, a laterality index was calculated based on the bundle volume. Seven bundles of the atlas presented right laterality (IP_SP_1i, LO_LO_1i, Op_Tr_0i, PoC_PoC_0i, PoC_PreC_2i, PreC_SM_0i, and RoMF_RoMF_0i) and one presented left laterality (IP_SP_2i); there is no tendency of lateralization according to brain region. Many factors can affect the results, such as tractography artifacts, subject registration, and bundle segmentation. Further studies are necessary in order to establish the influence of these factors and to evaluate SWM laterality.
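
The volume-based laterality index can be illustrated with the conventional (L - R) / (L + R) formula; the exact definition used in the study is not stated in the abstract, so this form and the example volumes are assumptions.

```python
def laterality_index(volume_left, volume_right):
    """Conventional laterality index in [-1, 1]: positive means left-lateralized,
    negative means right-lateralized."""
    return (volume_left - volume_right) / (volume_left + volume_right)

# Hypothetical bundle volumes in mm^3 for one atlas bundle
print(laterality_index(volume_left=1450.0, volume_right=1720.0))   # negative, i.e. right laterality
```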

Keywords: dMRI, hierarchical clustering, lateralization index, tractography

Procedia PDF Downloads 330
216 6-Degree-Of-Freedom Spacecraft Motion Planning via Model Predictive Control and Dual Quaternions

Authors: Omer Burak Iskender, Keck Voon Ling, Vincent Dubanchet, Luca Simonini

Abstract:

This paper presents a Guidance and Control (G&C) strategy to approach and synchronize with potentially rotating targets. The proposed strategy generates and tracks a safe trajectory for space servicing missions, including tasks like approaching, inspecting, and capturing. The main objective of this paper is to validate the G&C laws using a Hardware-In-the-Loop (HIL) setup with realistic rendezvous and docking equipment. Throughout this work, the assumption of full relative state feedback is relaxed by using onboard sensors that introduce realistic errors and delays, while the proposed closed-loop approach demonstrates robustness to this challenge. Moreover, the G&C blocks are unified via the Model Predictive Control (MPC) paradigm, and the coupling between translational and rotational motion is addressed via a dual-quaternion-based kinematic description. In this work, G&C is formulated as a convex optimization problem in which constraints such as thruster limits and output constraints are explicitly handled. Furthermore, the Monte Carlo method is used to evaluate the robustness of the proposed method to initial condition errors, uncertainty in the target's motion and attitude, and actuator errors. A capture scenario is tested on a robotic test bench with onboard sensors that estimate the position and orientation of a drifting satellite through camera imagery. Finally, the approach is compared with currently used robust H-infinity controllers and the guidance profile provided by the industrial partner. The HIL experiments demonstrate that the proposed strategy is a potential candidate for future space servicing missions because 1) the algorithm is real-time implementable, as convex programming offers deterministic convergence properties and guarantees a finite-time solution; 2) critical physical and output constraints are respected; 3) robustness to sensor errors and uncertainties in the system is proven; and 4) it couples translational motion with rotational motion.
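
As background for the dual-quaternion kinematics mentioned above, the short sketch below shows how a pose (rotation plus translation) is packed into a dual quaternion and how two poses compose; it is a generic textbook construction, not the authors' MPC formulation, and the [w, x, y, z] ordering is an assumption.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def dual_quat_mul(q1, q2):
    """Product of dual quaternions (real, dual): couples rotation and translation in one algebra."""
    r1, d1 = q1
    r2, d2 = q2
    return quat_mul(r1, r2), quat_mul(r1, d2) + quat_mul(d1, r2)

def pose_to_dual_quat(q_rot, t):
    """Unit dual quaternion from a rotation quaternion and a translation vector."""
    t_quat = np.array([0.0, t[0], t[1], t[2]])
    return q_rot, 0.5 * quat_mul(t_quat, q_rot)

# Illustrative composition: a 90-degree yaw plus 1 m along x, followed by 2 m along y.
q_rot = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
pose_a = pose_to_dual_quat(q_rot, np.array([1.0, 0.0, 0.0]))
pose_b = pose_to_dual_quat(np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0]))
combined = dual_quat_mul(pose_a, pose_b)
```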

Keywords: dual quaternion, model predictive control, real-time experimental test, rendezvous and docking, spacecraft autonomy, space servicing

Procedia PDF Downloads 145
215 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Mixed Integration Method: Stability Aspects and Computational Efficiency

Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino

Abstract:

In order to reduce numerical computations in the nonlinear dynamic analysis of seismically base-isolated structures, a Mixed Explicit-Implicit time integration Method (MEIM) has been proposed. By adopting the explicit, conditionally stable central difference method to compute the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method to determine the linear response of the superstructure, the proposed MEIM, which is conditionally stable due to the use of the central difference method, makes it possible to avoid the iterative procedure generally required by conventional monolithic solution approaches within each time step of the analysis. The main aim of this paper is to investigate the stability and computational efficiency of the MEIM when employed to perform the nonlinear time history analysis of base-isolated structures with sliding bearings. Indeed, in this case, the critical time step could become smaller than the one needed to accurately define the earthquake excitation, due to the very high initial stiffness of such devices. The numerical results obtained from nonlinear dynamic analyses of a base-isolated structure with a friction pendulum bearing system, performed by using the proposed MEIM, are compared to those obtained adopting a conventional monolithic solution approach, i.e., the implicit, unconditionally stable Newmark constant average acceleration method employed in conjunction with the iterative pseudo-force procedure. According to the numerical results, in the presented numerical application the MEIM does not have stability problems, since the critical time step is larger than that of the ground acceleration record despite the high initial stiffness of the friction pendulum bearings. In addition, compared to the conventional monolithic solution approach, the proposed algorithm preserves its computational efficiency even when it is adopted to perform the nonlinear dynamic analysis using a smaller time step.
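
A minimal sketch of the explicit central-difference part of such a scheme, for a single nonlinear degree of freedom, is given below; the restoring-force law, the load, and all parameter values are invented for illustration and do not reproduce the paper's friction pendulum model or the coupled explicit-implicit partitioning.

```python
import numpy as np

def central_difference(m, c, fs, p, dt, u0=0.0, v0=0.0):
    """Explicit central-difference integration of m*u'' + c*u' + fs(u) = p(t).

    fs is the (possibly nonlinear) restoring-force function; p is the load sampled
    every dt. Conditionally stable: dt must stay below 2/omega_n of the stiffest
    response component, which is why very stiff isolators can force a small step.
    """
    n = len(p)
    u = np.zeros(n)
    u[0] = u0
    a0 = (p[0] - c * v0 - fs(u0)) / m            # initial acceleration
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0     # fictitious displacement at t = -dt
    k_hat = m / dt**2 + c / (2 * dt)
    for i in range(n - 1):
        u_im1 = u_prev if i == 0 else u[i - 1]
        rhs = (p[i] - fs(u[i])
               + (2 * m / dt**2) * u[i]
               - (m / dt**2 - c / (2 * dt)) * u_im1)
        u[i + 1] = rhs / k_hat
    return u

# Toy bilinear restoring force with a very stiff initial branch (illustrative only).
fs = lambda u: 2.0e6 * u if abs(u) < 1e-3 else 2.0e3 * u + np.sign(u) * 1998.0
t = np.arange(0.0, 5.0, 0.001)
p = 1.0e3 * np.sin(2.0 * np.pi * 1.0 * t)
u = central_difference(m=100.0, c=50.0, fs=fs, p=p, dt=0.001)
```

With these illustrative values the critical time step 2/omega_n is about 0.014 s, so the chosen dt = 0.001 s keeps the explicit step stable, mirroring the stability argument made in the abstract.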

Keywords: base isolation, computational efficiency, mixed explicit-implicit method, partitioned solution approach, stability

Procedia PDF Downloads 276