Search results for: prediction modelling
67 Statistical Models and Time Series Forecasting on Crime Data in Nepal
Authors: Dila Ram Bhandari
Abstract:
Throughout the 20th century, new governments were created in which identities such as ethnicity, religion, language, caste, community, and tribe played a part in the development of constitutions and the legal systems of victim and criminal justice. South Asian nations have recently been plagued by acute problems of extremism, poverty, environmental degradation, cybercrime, human rights violations, and crimes against, and victimization of, both individuals and groups. Every day a massive number of crimes is committed, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, and a bone of contention that can create societal disturbance. Traditional crime-solving practices are unable to keep up with the demands of the current crime situation. Crime analysis is one of the most important activities of most intelligence and law enforcement organizations all over the world. The South Asia region lacks such a regional coordination mechanism, unlike the Central Asia or Asia-Pacific regions, to facilitate criminal intelligence sharing and operational coordination related to organized crime, including illicit drug trafficking and money laundering. There have been numerous conversations in recent years about using data mining technology to combat crime and terrorism. The Data Detective program from the software company Sentient uses data mining techniques to support the police (Sentient, 2017). The goals of this internship were to test several predictive model solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient offered a 7-year archive of crime statistics that was aggregated daily to produce a univariate dataset. Moreover, a daily aggregation by incidence type was performed to produce a multivariate dataset. Each solution's forecast period was seven days. The experiments were split into two main groups: statistical models and neural network models. For the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models. A comparative analysis of all the models on a comparable dataset provides a detailed picture of each model's performance on the available data and of its generalizability. The experiments clearly demonstrated that, in comparison to the other models, Gated Recurrent Units (GRU) produced better predictions. The crime records for 2005-2019 were collected from the Nepal Police headquarters and analysed using R. In conclusion, a gated recurrent unit implementation could benefit the police in predicting crime. Hence, time series analysis using GRU could be a prospective additional feature in Data Detective.
Keywords: time series analysis, forecasting, ARIMA, machine learning
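As a rough illustration of the GRU forecasting approach described in this abstract, the sketch below trains a small recurrent model on a daily crime-count series and predicts the next seven days. It is a minimal example only: the synthetic series, window length, and layer size are assumptions, not the study's actual configuration (which was built in R on the Nepal Police data).

```python
import numpy as np
import tensorflow as tf

# Synthetic daily crime counts stand in for the 2005-2019 series (assumption).
rng = np.random.default_rng(0)
series = 50 + 10 * np.sin(np.arange(2000) * 2 * np.pi / 365) + rng.poisson(5, 2000)

window, horizon = 28, 7  # look back 4 weeks, forecast 7 days (assumed values)
X = np.stack([series[i:i + window] for i in range(len(series) - window - horizon)])
y = np.stack([series[i + window:i + window + horizon]
              for i in range(len(series) - window - horizon)])
X = X[..., np.newaxis].astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.GRU(32),           # gated recurrent unit layer
    tf.keras.layers.Dense(horizon),    # one output per forecast day
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y.astype("float32"), epochs=10, batch_size=64, verbose=0)

next_week = model.predict(series[-window:].reshape(1, window, 1).astype("float32"))
print(next_week.round(1))
```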
Procedia PDF Downloads 166
66 Towards Visual Personality Questionnaires Based on Deep Learning and Social Media
Authors: Pau Rodriguez, Jordi Gonzalez, Josep M. Gonfaus, Xavier Roca
Abstract:
Image sharing in social networks has increased exponentially in the past years. Officially, there are 600 million Instagrammers uploading around 100 million photos and videos per day. Consequently, there is a need for developing new tools to understand the content expressed in shared images, which will greatly benefit social media communication and will enable broad and promising applications in education, advertisement, entertainment, and also psychology. Following these trends, our work aims to take advantage of the existing relationship between text and personality, already demonstrated by multiple researchers, in order to show that a relationship between images and personality exists as well. To achieve this goal, we consider that images posted on social networks are typically conditioned on specific words, or hashtags, so any relationship between text and personality can also be observed with those posted images. Our proposal makes use of the most recent image understanding models based on neural networks to process the vast amount of data generated by social users and determine the images most correlated with personality traits. The final aim is to train a weakly supervised image-based model for personality assessment that can be used even when textual data is not available, which is an increasing trend. The procedure is as follows: we explore the images publicly shared by users based on those accompanying texts or hashtags most strongly related to the personality traits described by the OCEAN model. These images are used for personality prediction since they have the potential to convey more complex ideas, concepts, and emotions. As a result, the use of images in personality questionnaires will provide a deeper understanding of respondents than words alone. In other words, from the images posted with specific tags, we train a deep learning model based on neural networks that learns to extract a personality representation from a picture and uses it to automatically find the personality that best explains such a picture. Subsequently, a deep neural network model is learned from thousands of images associated with hashtags correlated to OCEAN traits. We then analyze the network activations to identify those pictures that maximally activate the neurons: the most characteristic visual features per personality trait thus emerge, since the filters of the convolutional layers of the neural model are trained to be optimally activated depending on each personality trait. For example, among the pictures that maximally activate the high Openness trait, we see pictures of books, the moon, and the sky. For high Conscientiousness, most of the images are photographs of food, especially healthy food. The high Extraversion output is mostly activated by pictures of many people. In high Agreeableness images, we mostly see flower pictures. Lastly, for the Neuroticism trait, we observe that the high score is maximally activated by pet animals such as cats or dogs. In summary, despite the huge intra-class and inter-class variability of the images associated with each OCEAN trait, we found consistencies between the visual patterns of those images whose hashtags are most correlated to each trait.
Keywords: emotions and effects of mood, social impact theory in social psychology, social influence, social structure and social networks
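A minimal sketch of the "maximally activating pictures" analysis described above is given below, assuming a convolutional network fine-tuned with five outputs (one per OCEAN trait). The network weights, the output ordering, and the image paths are placeholders, not the authors' actual setup.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical trait-scoring network: a ResNet-18 with five outputs (OCEAN).
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 5)
# model.load_state_dict(torch.load("personality_cnn.pt"))  # assumed fine-tuned weights
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

image_paths = ["pic1.jpg", "pic2.jpg"]  # placeholder hashtag-collected pictures
OPENNESS = 0                            # assumed index of the Openness output

scores = []
with torch.no_grad():
    for path in image_paths:
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        scores.append((model(x)[0, OPENNESS].item(), path))

# Pictures that maximally activate the Openness output (books, moon, sky in the study)
top_openness = sorted(scores, reverse=True)[:20]
```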
Procedia PDF Downloads 198
65 Management of Myofascial Temporomandibular Disorder in Secondary Care: A Quality Improvement Project
Authors: Rishana Bilimoria, Selina Tang, Sajni Shah, Marianne Henien, Christopher Sproat
Abstract:
Temporomandibular disorders (TMD) may affect up to a third of the general population, and there is evidence demonstrating that the majority of myofascial TMD cases improve after education and conservative measures. In 2015 our department implemented a modified care pathway for myofascial TMD patients in an attempt to improve the patient journey. This involved the use of an interactive group therapy approach to deliver education, reinforce conservative measures and promote self-management. Patient-reported experience measures from the new group clinic revealed 71% patient satisfaction. This service is efficient in improving aspects of health status while reducing health-care costs and redistributing clinical time. Since its establishment, 52 hours of clinical time, resources and funding have been redirected effectively. This Quality Improvement Project was initiated because it was felt that this new service was being underutilised by our surgical teams. The 'Plan-Do-Study-Act' (PDSA) framework was employed to analyse utilisation of the service. The 'Plan' stage involved outlining our aims: to raise awareness amongst clinicians of the unified care pathway and to increase referral to this clinic. The 'Do' stage involved collecting data from a sample of 96 patients over a 4-month period to ascertain the proportion of myofascial TMD patients who were correctly referred to the designated clinic. 'Suitable' patients who were not referred were identified. The 'Study' phase involved analysis of the results, which revealed that 77% of suitable patients were not referred to the designated clinic. They were reviewed in other clinics, which are often overbooked, or managed by junior staff members. This correlated with our original prediction. Barriers to referral included a lack of awareness of the clinic, individual consultant treatment preferences, and patient reluctance to be referred to a 'group' clinic. The 'Act' stage involved presenting our findings to the team at a clinical governance meeting. This included demonstration of the clinical effectiveness of the care pathway and explanation of the referral route and criteria. In light of the evaluation results, it was decided to keep the group clinic and maximise utilisation. The second cycle of data collection following these changes revealed that, of 66 myofascial TMD patients over a 4-month period, only 9% of suitable patients were not seen via the designated pathway; therefore, this QIP was successful in meeting the set objectives. Overall, employing the PDSA cycle in this QIP resulted in appropriate utilisation of the modified care pathway for patients with myofascial TMD in Guy's Oral Surgery Department. In turn, this led to high patient satisfaction with the service and effectively redirected 52 hours of clinical time. It permitted adoption of a collaborative working style with oral surgery colleagues to investigate problems, identify solutions, and collectively raise standards of clinical care to ensure a unified care pathway in the secondary care management of myofascial TMD patients.
Keywords: myofascial, quality improvement, PDSA, TMD
Procedia PDF Downloads 141
64 Measuring Entrepreneurship Intentions among Nigerian University Graduates: A Structural Equation Modeling Technique
Authors: Eunice Oluwakemi Chukwuma-Nwuba
Abstract:
Nigeria is a developing country with an increasing rate of graduate unemployment. This has prompted successive government administrations to promote a variety of programmes to address the situation; however, none of these efforts yielded the desired outcome. Accordingly, in 2006 the government included an entrepreneurship module in the curriculum of universities as a compulsory general programme for all undergraduate courses, in the hope that the programme would help to promote an entrepreneurial mind-set and new venture creation among graduates and, as a result, reduce the rate of graduate unemployment. The study explores the effectiveness of entrepreneurship education in promoting entrepreneurship. It is significant in view of the endemic graduate unemployment in Nigeria and its social consequences, such as youth restiveness and militancy, and is guided by the theory of planned behaviour. It employed two-stage structural equation modelling (in AMOS) to model entrepreneurial intentions as a function of innovative teaching methods, traditional teaching methods and culture. Personal attitude and subjective norm are proposed to mediate the relationships between the exogenous and the endogenous variables. The first stage was tested using a multi-group confirmatory factor analysis (MGCFA) framework to confirm that the two groups assign the same meaning to the scale items and to obtain goodness-of-fit indices. The multi-group confirmatory factor analysis included tests of configural, metric and scalar invariance. With the attainment of full configural invariance and partial metric and scalar invariance, the second stage – the structural model – was applied, hypothesising that the entrepreneurial intentions of graduates (respondents who have participated in the compulsory entrepreneurship programme) would be higher than those of undergraduates (respondents who are yet to participate in the programme). The study uses a quasi-experimental design. The samples comprised 409 graduates (experimental group) and 402 undergraduates (control group) from six federal universities in Nigeria. Our findings suggest that personal attitude is positively related to entrepreneurial intentions, largely confirming prior literature. However, unlike previous studies, our results indicate that subjective norm has significant direct and indirect impacts on entrepreneurial intentions, indicating that the reference people of the participants have important roles to play in their decision to be entrepreneurial. Furthermore, unlike the assertions in prior studies, the results suggest that traditional teaching methods have an indirect effect on entrepreneurial intentions, supporting the view that, since personal characteristics can change in an educational situation, an education purposively directed at entrepreneurship might achieve similar results if not better. This study has implications for practice and theory. The research extends the theoretical understanding of the formation of entrepreneurial intentions and explains the role of reference others in relation to how graduates perceive entrepreneurship. Further, the study adds to the body of knowledge on entrepreneurship education in Nigerian universities and provides a developing country perspective.
It proposes further research exploring entrepreneurship education and the entrepreneurial intentions of graduates from across the country's universities as necessary and imperative.
Keywords: entrepreneurship education, entrepreneurial intention, structural equation modeling, theory of planned behaviour
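The study's structural model was built in AMOS; purely as an illustration of how a theory-of-planned-behaviour structural equation model can be specified and fitted in code, the sketch below uses the Python semopy package with synthetic data. The indicator names (pa1 ... ei3), the path structure shown, and the generated data are assumptions, not the authors' measurement model, and the multi-group invariance tests are not reproduced here.

```python
import numpy as np
import pandas as pd
import semopy

# Synthetic Likert-style responses standing in for the survey data (assumption).
rng = np.random.default_rng(0)
n = 400
pa, sn = rng.normal(size=n), rng.normal(size=n)
ei = 0.6 * pa + 0.3 * sn + rng.normal(0, 0.5, n)
data = pd.DataFrame({f"pa{i}": pa + rng.normal(0, 0.4, n) for i in (1, 2, 3)} |
                    {f"sn{i}": sn + rng.normal(0, 0.4, n) for i in (1, 2, 3)} |
                    {f"ei{i}": ei + rng.normal(0, 0.4, n) for i in (1, 2, 3)})

desc = """
PersonalAttitude =~ pa1 + pa2 + pa3
SubjectiveNorm   =~ sn1 + sn2 + sn3
Intention        =~ ei1 + ei2 + ei3
Intention ~ PersonalAttitude + SubjectiveNorm
"""
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())           # loadings and path coefficients
print(semopy.calc_stats(model))  # fit indices (CFI, RMSEA, ...)
```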
Procedia PDF Downloads 260
63 From Modelled Design to Reality through Material and Machinery Lab and Field Tests: Porous Concrete Carparks at the Wanda Metropolitano Stadium in Madrid
Authors: Manuel de Pazos-Liano, Manuel Cifuentes-Antonio, Juan Fisac-Gozalo, Sara Perales-Momparler, Carlos Martinez-Montero
Abstract:
The first-ever game in the Wanda Metropolitano Stadium, the new home of Club Atletico de Madrid, was played on September 16, 2017, thanks to the work of a multidisciplinary team that made it possible to combine urban development with sustainability goals. The new football ground sits on a 1.2 km² site owned by the city of Madrid. Its construction has dramatically increased the sealed area of the site (raising the runoff coefficient from 0.35 to 0.9), and the surrounding sewer network has no capacity for that extra flow. As an alternative to enlarging the existing 2.5 m diameter pipes, it was decided to detain runoff on site by means of an integrated and durable infrastructure that would not inflate the construction cost nor represent a burden on the municipality's maintenance tasks. Instead of the more conventional option of building a large concrete detention tank, the decision was taken to use pervious pavement on the 3013 car parking spaces for sub-surface water storage, a solution aligned with the city water ordinance and the Madrid + Natural project. Making the idea a reality in only five months and during the summer season (which forced the porous concrete to be poured only overnight) was a challenge never faced before in Spain, and it required innovation on both the material and the machinery side. The process consisted of: a) defining the characteristics required for the porous concrete (compressive strength of 15 N/mm² and 20% voids); b) testing different porous concrete dosages at the construction company laboratory; c) establishing the cross-section in order to provide structural strength and sufficient water detention capacity (20 cm porous concrete over 5 cm of 5/10 gravel, which sits on a 50 cm coarse 40/50 aggregate sub-base separated by a virgin-fibre polypropylene geotextile fabric); d) hydraulic computer modelling (using the Full Hydrograph Method based on the Wallingford Procedure) to estimate the decrease in design peak flows (an average of 69% at the three car parking lots); e) use of a variety of machinery for the application of the porous concrete to achieve both structural strength and a permeable surface (including an inverse rotating roller imported from the USA, and the so-called CMI, a sliding concrete paver used in the construction of motorways with rigid pavements); f) full-scale pilots and final construction testing by an accredited laboratory (pavement compressive strength average value of 15 N/mm² and 0.0032 m/s permeability). The continuous testing and innovation during construction, explained in detail in this article, allowed performance to improve over time and finally proved the use of the CMI valid also for large porous car park applications. This process resulted in a success story that turns the Wanda Metropolitano Stadium into a great demonstration site that will support the application of the Spanish Royal Decree 638/2016 (the site also features rainwater harvesting for grass irrigation).
Keywords: construction machinery, permeable carpark, porous concrete, SUDS, sustainable development
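The hydraulic logic of the cross-section above can be illustrated with a back-of-the-envelope storage check per square metre of carpark. The sub-base void ratios and the design rainfall depth used here are assumptions for illustration, not the project's design values (which came from the Full Hydrograph Method modelling).

```python
# Simplified storage check for the permeable pavement cross-section (per m² of carpark)
porous_concrete_depth = 0.20   # m, 20% voids (from the abstract)
bedding_depth = 0.05           # m, 5/10 gravel; void ratio assumed ~0.35
subbase_depth = 0.50           # m, 40/50 coarse aggregate; void ratio assumed ~0.40

storage = (porous_concrete_depth * 0.20
           + bedding_depth * 0.35
           + subbase_depth * 0.40)          # m³ of water per m² of pavement
print(f"storage capacity ≈ {storage * 1000:.0f} mm of rainfall equivalent")

# Effect of the runoff coefficient change reported for the stadium site
rain_depth = 0.030                          # m, assumed 30 mm design storm
runoff_before = 0.35 * rain_depth
runoff_after = 0.90 * rain_depth
print(f"runoff per m²: {runoff_before * 1000:.1f} mm -> {runoff_after * 1000:.1f} mm")
```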
Procedia PDF Downloads 145
62 Development of DEMO-FNS Hybrid Facility and Its Integration in Russian Nuclear Fuel Cycle
Authors: Yury S. Shpanskiy, Boris V. Kuteev
Abstract:
Development of a fusion-fission hybrid facility based on the superconducting conventional tokamak DEMO-FNS has been under way in Russia since 2013. The main design goal is to reach technical feasibility and outline the prospects of industrial hybrid technologies providing the production of neutrons, fuel nuclides, tritium, high-temperature heat, electricity and subcritical transmutation in Fusion-Fission Hybrid Systems. The facility should operate in a steady-state mode at a fusion power of 40 MW and fission reactions of 400 MW. Major tokamak parameters are the following: major radius R=3.2 m, minor radius a=1.0 m, elongation 2.1, triangularity 0.5. The design provides a neutron wall loading of ~0.2 MW/m² and a lifetime neutron fluence of ~2 MWa/m², with the surface area of the active cores and tritium breeding blanket ~100 m². Core plasma modelling showed that the neutron yield of ~10¹⁹ n/s is maximal if the tritium/deuterium density ratio is 1.5-2.3. The design of the electromagnetic system (EMS) defined its basic parameters, accounting for coil strength and stability, and identified the most problematic nodes in the toroidal field coils and the central solenoid. The EMS generates the toroidal, poloidal and correcting magnetic fields necessary for plasma shaping and confinement inside the vacuum vessel. The EMS consists of eighteen superconducting toroidal field coils, eight poloidal field coils, five sections of a central solenoid, correction coils, and in-vessel coils for vertical plasma control. Supporting structures, the thermal shield, and the cryostat maintain its operation. The EMS operates with a pulse duration of up to 5000 hours at a plasma current of up to 5 MA. The vacuum vessel (VV) is an all-welded two-layer toroidal shell placed inside the EMS. The free space between the vessel shells is filled with water and boron steel plates, which form the neutron protection of the EMS. The VV volume is 265 m³, and its mass with manifolds is 1800 tons. The nuclear blanket of the DEMO-FNS facility was designed to provide the functions of minor actinide transmutation, tritium production and enrichment of spent nuclear fuel. Vertical overloading of the subcritical active cores with minor actinides was chosen as the prospective option. Analysis of the device neutronics and the hybrid blanket thermal-hydraulic characteristics has been performed for a system with functions covering transmutation of minor actinides, production of tritium and enrichment of spent nuclear fuel. A study of the role of FNS facilities in the Russian closed nuclear fuel cycle was performed. It showed that, during ~100 years of operation, three FNS facilities with a fission power of 3 GW controlled by a fusion neutron source with a power of 40 MW can burn 98 tons of minor actinides, while 198 tons of Pu-239 can be produced for the startup loading of 20 fast reactors. Instead of Pu-239, up to 25 kg of tritium per year may be produced for the startup of fusion reactors by using blocks with lithium orthosilicate instead of fissile breeder blankets.
Keywords: fusion-fission hybrid system, conventional tokamak, superconducting electromagnetic system, two-layer vacuum vessel, subcritical active cores, nuclear fuel cycle
Procedia PDF Downloads 147
61 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data
Authors: Nicola Colaninno, Eugenio Morello
Abstract:
The urban environment affects local-to-global climate and, in turn, suffers global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. Physico-morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, the spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions at an interpolated point. Quantifying local UHI for extensive areas based on weather stations' observations only is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST. Data from Landsat, ASTER, or MODIS have been extensively used. Indeed, LST has an indirect but significant influence on air temperatures. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to enable NSAT estimation by accounting for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations and satellite-derived LST. The approach is structured in two main steps. First, a GWR model is set up to estimate NSAT at low resolution, by combining air temperature from discrete observations retrieved by weather stations (dependent variable) and the LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 kilometre of spatial resolution, have been employed. Two time periods are considered according to the satellite revisit times, i.e., 10:30 am and 9:30 pm. Afterwards, the results are downscaled to 30 metres of spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable), the multispectral information provided by the Landsat mission, in particular the albedo, and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 metres. Albedo and DEM are now the predictors. The area under investigation is the Metropolitan City of Milan, which covers an area of approximately 1,575 km² and encompasses a population of over 3 million inhabitants. Both models, low- (1 km) and high-resolution (30 metres), have been validated by cross-validation using indicators such as R², Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). All the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milan only, has been employed for testing the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing
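A minimal sketch of the first GWR step described above (station air temperature regressed on satellite LST with a spatially varying coefficient) is shown below using the Python mgwr package and synthetic stand-in data. The data, array shapes, and bandwidth search are illustrative only; the study itself may have used different software.

```python
import numpy as np
from mgwr.gwr import GWR
from mgwr.sel_bw import Sel_BW

# Synthetic stand-ins for the weather-station data (assumption): station coordinates,
# MODIS LST sampled at the stations, and the observed air temperature.
rng = np.random.default_rng(1)
n = 120
coords = rng.uniform(0, 50_000, size=(n, 2))             # station x/y in metres
lst = rng.uniform(15, 35, size=(n, 1))                   # MODIS LST [degC]
t_air = 0.8 * lst + 2.0 + rng.normal(0, 0.5, (n, 1))     # synthetic air temperature

bw = Sel_BW(coords, t_air, lst).search()       # optimal bandwidth
results = GWR(coords, t_air, lst, bw).fit()    # locally varying LST coefficient
print(results.summary())

# The fitted local coefficients can then be applied to the full 1 km LST raster,
# and the same scheme repeated at 30 m with albedo and DEM as predictors.
```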
Procedia PDF Downloads 196
60 Predicting Suicidal Behavior by an Accurate Monitoring of RNA Editing Biomarkers in Blood Samples
Authors: Berengere Vire, Nicolas Salvetat, Yoann Lannay, Guillaume Marcellin, Siem Van Der Laan, Franck Molina, Dinah Weissmann
Abstract:
Predicting suicidal behaviors is one of the most complex challenges of daily psychiatric practice. Today, suicide risk prediction using biological tools is not validated and is only based on subjective clinical reports of the at-risk individual. Therefore, there is a great need to identify biomarkers that would allow early identification of individuals at risk of suicide. Alterations of adenosine-to-inosine (A-to-I) RNA editing of neurotransmitter receptors and other proteins have been shown to be involved in the etiology of different psychiatric disorders and linked to suicidal behavior. RNA editing is a co- or post-transcriptional process leading to site-specific alterations in RNA sequences. It plays an important role in the epitranscriptomic regulation of RNA metabolism. In postmortem human brain tissue (prefrontal cortex) of depressed suicide victims, Alcediag found specific alterations of RNA editing activity in the mRNA coding for the serotonin 2C receptor (5-HT2cR). Additionally, an increase in the expression levels of ADARs, the RNA editing enzymes, and modifications of the RNA editing profiles of prime targets, such as phosphodiesterase 8A (PDE8A) mRNA, have also been observed. Interestingly, the PDE8A gene is located on chromosome 15q25.3, a genomic region that has recurrently been associated with early-onset major depressive disorder (MDD). In the current study, we examined whether modifications in the RNA editing profile of prime targets allow identifying disease-relevant blood biomarkers and evaluating suicide risk in patients. To address this question, we performed a clinical study to identify an RNA editing signature in the blood of depressed patients with and without a history of suicide attempts. Patients' samples were drawn in PAXgene tubes and analyzed on Alcediag's proprietary RNA editing platform using next-generation sequencing technology. In addition, gene expression analysis by quantitative PCR was performed. We generated a multivariate algorithm comprising various selected biomarkers to detect patients with a high risk of attempting suicide. We evaluated the diagnostic performance using the relative proportion of PDE8A mRNA editing at different sites and/or isoforms as well as the expression of PDE8A and the ADARs. The significance of these biomarkers for suicidality was evaluated using the area under the receiver-operating characteristic curve (AUC). The generated algorithm comprising the biomarkers was found to have strong diagnostic performance, with high specificity and sensitivity. In conclusion, we developed tools to measure disease-specific biomarkers in blood samples of patients for identifying individuals at the greatest risk for future suicide attempts. This technology not only fosters patient management but is also suitable for predicting the risk of drug-induced psychiatric side effects such as an iatrogenic increase of suicidal ideas/behaviors.
Keywords: blood biomarker, next-generation sequencing, RNA editing, suicide
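A schematic of how a multivariate biomarker score can be combined and evaluated by AUC, as described above, is sketched below with scikit-learn. The synthetic feature matrix and the logistic-regression combiner are assumptions for illustration, not Alcediag's proprietary algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the biomarker panel (assumption): editing proportions at
# PDE8A sites/isoforms plus ADAR and PDE8A expression, with an attempter label.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 5))
y = (X @ np.array([0.8, 0.5, 0.3, 0.4, 0.2]) + rng.normal(0, 1, n) > 0).astype(int)

clf = LogisticRegression(max_iter=1000)        # multivariate combination of biomarkers
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]

print(f"cross-validated AUC = {roc_auc_score(y, scores):.2f}")
```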
Procedia PDF Downloads 259
59 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model
Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson
Abstract:
The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is a nascent but rapidly growing one. This is in part driven by the Sustainable Development Goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially of the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and thus have seen limited downstream applications, as humans generally are apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a DL model using different resolutions of satellite imagery to estimate the welfare levels of Demographic and Health Survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel at zoom level 18, while that of the machine learning model was sourced from the comparatively lower-resolution Sentinel-2 10 m per pixel data for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model – 0.69-0.79. This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-metre resolution satellite data, while the human readers estimated welfare levels from the higher 0.6 m spatial resolution data, in which key markers of poverty and slums – roofing and road quality – are discernible. It is important to note, however, that the human readers did not receive any training before the ratings, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall of limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship – eXplainable Artificial Intelligence – through a collaborative rather than a comparative framework.
Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania
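The comparison metric used above, the rank correlation between estimated and survey-based wealth, can be computed as in the short sketch below with SciPy. The arrays are synthetic stand-ins, not the study's data; they are only generated so the contrast between a weaker and a stronger predictor is visible.

```python
import numpy as np
from scipy.stats import spearmanr

# Toy stand-ins (assumption): per-cluster survey wealth index, mean human rating,
# and CNN prediction; the study had 608 DHS clusters.
rng = np.random.default_rng(0)
wealth = rng.normal(size=608)
human_rating = 0.3 * wealth + rng.normal(0, 1, 608)    # weak association (~0.3)
model_pred = 0.8 * wealth + rng.normal(0, 0.6, 608)    # stronger association (~0.7)

rho_h, p_h = spearmanr(wealth, human_rating)
rho_m, p_m = spearmanr(wealth, model_pred)
print(f"human readers: rho = {rho_h:.2f}   DL model: rho = {rho_m:.2f}")
```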
Procedia PDF Downloads 107
58 Climate Safe House: A Community Housing Project Tackling Catastrophic Sea Level Rise in Coastal Communities
Authors: Chris Fersterer, Col Fay, Tobias Danielmeier, Kat Achterberg, Scott Willis
Abstract:
New Zealand, an island nation, has an extensive coastline peppered with small communities of iconic buildings known as bachs. Post WWII, these modest buildings were constructed by their owners as retreats; they were generally small and low cost, often used recycled materials, and often fell below current acceptable building standards. In the latter part of the 20th century, real estate prices in many of these communities remained low, and these areas became permanent residences for people attracted to this affordable lifestyle choice. The Blueskin Resilient Communities Trust (BRCT) is an organisation that recognises the vulnerability of communities in low-lying settlements, which are now prone to an increased flood threat brought about by climate change and sea level rise. Some of the inhabitants of Blueskin Bay, Otago, NZ, have already found their properties to be uninsurable because of the increased frequency of flood events, and property values have slumped accordingly. Territorial authorities also acknowledge this increased risk and have created additional compliance measures for new buildings that are less than 2 m above tidal peaks. Community resilience becomes an additional concern where inhabitants are attracted to a lifestyle associated with a specific location and its people, when this lifestyle is unable to be met in a suburban or city context. Traditional models of social housing fail to provide the sense of community connectedness and identity enjoyed by the current residents of Blueskin Bay. BRCT has partnered with the Otago Polytechnic Design School to design a new form of community housing that can react to this environmental change. It is a longitudinal project incorporating participatory approaches as a means of getting people 'on board', understanding complex systems and co-developing solutions. In the first phase, they are seeking industry support and funding to develop a transportable and fully self-contained housing model that exploits current technologies. BRCT also hopes that the building will become an educational tool to highlight the climate change issues facing us today. This paper uses the Climate Safe House (CSH) as a case study for education in architectural sustainability through experiential learning offered as part of Otago Polytechnic's Bachelor of Design. Students engage with the project through research methodologies including site surveys, resident interviews, data sourced from government agencies, and physical modelling. The process involves collaboration across design disciplines, including product and interior design, but also includes connections with industry, both within the education institution and through stakeholder industries introduced by BRCT. This project offers a rich learning environment where students become engaged through project-based learning within a community of practice, including architecture, construction, energy and other related fields. The design outcomes are expressed in a series of public exhibitions and forums where community input is sought in a truly participatory process.
Keywords: community resilience, problem based learning, project based learning, case study
Procedia PDF Downloads 290
57 Optimization of Perfusion Distribution in Custom Vascular Stent-Grafts Through Patient-Specific CFD Models
Authors: Scott M. Black, Craig Maclean, Pauline Hall Barrientos, Konstantinos Ritos, Asimina Kazakidi
Abstract:
Aortic aneurysms and dissections are leading causes of death in cardiovascular disease. Both inevitably lead to hemodynamic instability without surgical intervention in the form of vascular stent-graft deployment. An accurate description of the aortic geometry and blood flow in patient-specific cases is vital for treatment planning and the long-term success of such grafts, as they must generate physiological branch perfusion and in-stent hemodynamics. The aim of this study was to create patient-specific computational fluid dynamics (CFD) models through a multi-modality, multi-dimensional approach with boundary condition optimization to predict branch flow rates and in-stent hemodynamics in custom stent-graft configurations. Three-dimensional (3D) thoracoabdominal aortae were reconstructed from four-dimensional flow-magnetic resonance imaging (4D Flow-MRI) and computed tomography (CT) medical images. The former employed a novel approach to generate and enhance vessel lumen contrast via the through-plane velocity at discrete, user-defined cardiac time steps post hoc. To produce patient-specific boundary conditions (BCs), the aortic geometry was reduced to a one-dimensional (1D) model. Thereafter, a zero-dimensional (0D) 3-Element Windkessel model (3EWM) was coupled to each terminal branch to represent the distal vasculature. In this coupled 0D-1D model, the 3EWM parameters were optimized to yield branch flow waveforms representative of the 4D Flow-MRI-derived in-vivo data. Thereafter, a 0D-3D CFD model was created, utilizing the optimized 3EWM BCs and a 4D Flow-MRI-obtained inlet velocity profile. A sensitivity analysis of the effects of stent-graft configuration and BC parameters was then undertaken using multiple stent-graft configurations and a range of distal vasculature conditions. 4D Flow-MRI granted unparalleled visualization of blood flow throughout the cardiac cycle in both the pre- and post-surgical states. Segmentation and reconstruction of healthy and stented regions from retrospective 4D Flow-MRI images also generated 3D models with geometries that were successfully validated against their CT-derived counterparts. 0D-1D coupling efficiently captured branch flow and pressure waveforms, while 0D-3D models also enabled 3D flow visualization and quantification of clinically relevant hemodynamic parameters for in-stent thrombosis and graft limb occlusion. It was apparent that changes in 3EWM BC parameters had a pronounced effect on perfusion distribution and near-wall hemodynamics. Results show that the 3EWM parameters could be iteratively changed to simulate a range of graft limb diameters and distal vasculature conditions for a given stent-graft, in order to determine the optimal configuration prior to surgery. To conclude, this study outlined a methodology to aid in the prediction of post-surgical branch perfusion and in-stent hemodynamics in patient-specific cases for the implementation of custom stent-grafts.
Keywords: 4D flow-MRI, computational fluid dynamics, vascular stent-grafts, windkessel
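The 3-Element Windkessel outlet condition referred to above can be written as a single ODE for the pressure across the resistance-compliance part, with the outlet pressure recovered as P = P1 + Zc·Q. The sketch below integrates it for one branch with SciPy, using illustrative parameter values and an assumed inflow waveform rather than the patient-specific quantities obtained from the 4D Flow-MRI optimization.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 3EWM parameters (not patient-specific)
Zc = 0.2   # characteristic impedance [mmHg·s/mL]
R = 4.0    # distal resistance        [mmHg·s/mL]
C = 0.4    # compliance               [mL/mmHg]
T = 0.8    # cardiac period [s]

def q_in(t):
    """Assumed half-sine inflow waveform at the branch outlet [mL/s]."""
    tc = t % T
    return 80.0 * np.sin(np.pi * tc / (0.35 * T)) if tc < 0.35 * T else 0.0

def dp1_dt(t, p1):
    # C dP1/dt = Q - P1/R  (pressure across the RC part of the Windkessel)
    return (q_in(t) - p1 / R) / C

sol = solve_ivp(dp1_dt, [0.0, 10 * T], [70.0], max_step=1e-3, dense_output=True)
t = np.linspace(9 * T, 10 * T, 200)                # last cycle, after transients decay
p_outlet = sol.sol(t)[0] + Zc * np.array([q_in(ti) for ti in t])   # P = P1 + Zc·Q
print(f"outlet pressure range: {p_outlet.min():.0f}-{p_outlet.max():.0f} mmHg")
```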
Procedia PDF Downloads 181
56 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel
Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler
Abstract:
Fuel cell vehicles have become the most competitive solution for the transportation sector in the hydrogen economy. The Type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, owing to its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential in reducing the overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as the influence of different design parameters on mechanical performance. Given the type of materials and manufacturing processes by which Type IV pressure vessels are produced, their design and optimization are a nuanced subject. The manifold of possible stacking sequences and fiber orientation variations has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high dimensional. Each variation of the design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup and simulation process can be very time consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process for different tank designs with varying parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Worth mentioning, the modeling of the composite overwrap is automatically generated using the Abaqus-Python scripting interface. The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling, and it is calculated and implemented using analytical methods. Subsequently, these different composite layups are simulated as axisymmetric models to limit the computational complexity and reduce the calculation time. Finally, the results are evaluated and compared with regard to the ultimate tank strength. By automatically modeling, evaluating and comparing various composite layups, this system is applicable to the optimization of tank structures. As mentioned above, the mechanical properties of the pressure vessel are highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare the various designs and an indication of the optimum one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties, with few preliminary configuration steps, for further case analysis. Subsequently, machine learning could, for example, be used to obtain the optimum directly from the data pool without running further simulations.
Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process
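The analytical dome calculation mentioned above (winding angle and thickness build-up along the dome) is commonly based on the geodesic (Clairaut) relation combined with a fibre-continuity argument. The short sketch below illustrates that idea with made-up geometry values; it is a simplified approximation, not the authors' actual implementation in the Abaqus-Python pipeline.

```python
import numpy as np

# Assumed liner geometry (not the authors' actual vessel)
r_cyl = 0.175    # cylinder radius [m]
r_polar = 0.045  # polar opening radius [m]
t_cyl = 0.012    # helical layer thickness on the cylinder [m]

def winding_angle(r):
    """Geodesic winding angle from Clairaut's relation: r * sin(alpha) = r_polar."""
    return np.arcsin(np.clip(r_polar / r, -1.0, 1.0))

def dome_thickness(r):
    """Layer thickness from fibre-volume continuity between radius r and the cylinder.
    Diverges near the polar opening, where smoothing is applied in practice."""
    a_cyl = winding_angle(r_cyl)
    a_r = winding_angle(r)
    return t_cyl * (r_cyl * np.cos(a_cyl)) / (r * np.cos(a_r))

for r in np.linspace(r_cyl, r_polar * 1.1, 6):
    print(f"r = {r:.3f} m  alpha = {np.degrees(winding_angle(r)):5.1f} deg  "
          f"t = {dome_thickness(r) * 1000:5.1f} mm")
```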
Procedia PDF Downloads 136
55 Angiopermissive Foamed and Fibrillar Scaffolds for Vascular Graft Applications
Authors: Deon Bezuidenhout
Abstract:
Pre-seeding with autologous endothelial cells improves the long-term patency of synthetic vascular grafts to levels obtained with autografts, but is limited to a single centre due to resource, time and other constraints. Spontaneous in vivo endothelialization would obviate the need for pre-seeding, but has been shown to be absent in man due to limited transanastomotic and fallout healing, and the lack of transmural ingrowth due to insufficient porosity. Two types of graft scaffolds with increased interconnected porosity for improved tissue ingrowth and healing are thus proposed and described. Foam-type polyurethane (PU) scaffolds with small, medium and large interconnected pores were made by phase inversion and spherical porogen extraction, with and without additional surface modification with covalently attached heparin and subsequent loading with and delivery of growth factors. Fibrillar scaffolds were made either by standard electrospinning using degradable PU (Degrapol®) or by dual electrospinning using non-degradable PU. The latter process involves sacrificial fibres that are co-spun with structural fibres and subsequently removed to increase porosity and pore size. Degrapol samples were subjected to in vitro degradation, and all scaffold types were evaluated in vivo for tissue ingrowth and vascularization using a rat subcutaneous model. The foam scaffolds were additionally evaluated in a circulatory (rat infrarenal aortic interposition) model that allows the grafts to be anastomotically and/or ablumenally isolated to discern and determine the endothelialization mode. Foam-type grafts with large (150 µm) pores showed improved subcutaneous healing in terms of vascularization and inflammatory response over smaller pore sizes (60 and 90 µm), and vascularization of the large-porosity scaffolds was significantly increased by more than 70% by heparin modification alone, and by 150% to 400% when combined with growth factors. In the circulatory model, extensive transmural endothelialization (95±10% at 12 w) was achieved. Fallout healing was shown to be sporadic and limited in groups that were ablumenally isolated to prevent transmural ingrowth (16±30% wrapped vs. 80±20% control; p<0.002). Heparinization and GF delivery improved both mural vascularization and lumenal endothelialization. Degrapol electrospun scaffolds showed a decrease in molecular mass and corresponding tensile strength over the first 2 weeks, but very little decrease in mass over the 4-week test period. Studies on the effect of tissue ingrowth with and without concomitant degradation of the scaffolds are being used to develop material models for finite element modelling. In the case of the dual-spun scaffolds, the PU fibre fraction could be controlled and was shown to vary linearly with porosity (P = −0.18FF + 93.5, r² = 0.91), which in turn showed an inverse linear correlation with tensile strength and elastic modulus (r² > 0.96). Calculated compliance and burst pressures of the scaffolds increased with fibre fraction, and compliances matching the human popliteal artery (5-10 %/100 mmHg) and high burst pressures (> 2000 mmHg) could be achieved. Increasing porosity (76 to 82 and 90%) resulted in increased tissue ingrowth from 33±7 to 77±20 and 98±1% after 28 days.
Transmural endothelialization of highly porous foamed grafts is achievable in a circulatory model, and the enhancement of porosity and tissue ingrowth may hold the key to the development of spontaneously endothelializing electrospun grafts.
Keywords: electrospinning, endothelialization, porosity, scaffold, vascular graft
Procedia PDF Downloads 296
54 Introducing Transport Engineering through Blended Learning Initiatives
Authors: Kasun P. Wijayaratna, Lauren Gardner, Taha Hossein Rashidi
Abstract:
Undergraduate students entering university over the last 2 to 3 years tend to have been born during the middle years of the 1990s. This generation of students has been exposed to the internet and a desire for and dependency on technology since childhood. Brains develop based on environmental influences, and technology has wired this generation of students to be attuned to sophisticated, complex visual imagery, indicating that visual forms of learning may be more effective than the traditional lecture or discussion formats. Furthermore, post-millennials' perspectives on career are not focused solely on stability and income but are strongly driven by interest, entrepreneurship and innovation. Accordingly, it is important for educators to acknowledge the generational shift and tailor the delivery of learning material to meet the expectations of the students and the needs of industry. In the context of transport engineering, effectively teaching undergraduate students the basic principles of transport planning, traffic engineering and highway design is fundamental to the progression of the profession from a practice and research perspective. Recent developments in technology have transformed the discipline as practitioners and researchers move away from the traditional 'pen and paper' approach to methods involving the use of computer programs and simulation. Further, enhanced accessibility of technology for students has changed the way they understand and learn material delivered at tertiary education institutions. As a consequence, blended learning approaches, which aim to integrate face-to-face teaching with flexible self-paced learning resources, have become prevalent to provide scalable education that satisfies the expectations of students. This research study involved the development of a series of 'blended learning' initiatives implemented within an introductory transport planning and geometric design course, CVEN2401: Sustainable Transport and Highway Engineering, taught at the University of New South Wales, Australia. CVEN2401 was modified by conducting interactive polling exercises during lectures, including weekly online quizzes, offering a series of supplementary learning videos, and implementing a realistic design project that students needed to complete using modelling software that is widely used in practice. These activities and resources were aimed at improving the learning environment for a large class size in excess of 450 students and ensuring that practical, industry-valued skills were introduced. The case study compared the 2016 and 2017 student cohorts based on their performance across assessment tasks as well as their reception of the material, revealed through student feedback surveys. The initiatives were well received, with a number of students commenting on the ability to complete self-paced learning and an appreciation of the exposure to a realistic design project. From an educator's perspective, blending the course made it feasible to interact and engage with students. Personalised learning opportunities were made available while delivering a considerable volume of complex content essential for all undergraduate Civil and Environmental Engineering students. Overall, this case study highlights the value of blended learning initiatives, especially in the context of large class size university courses.
Keywords: blended learning, highway design, teaching, transport planning
Procedia PDF Downloads 149
53 Multiphysic Coupling Between Hypersonic Reactive Flow and Thermal Structural Analysis with Ablation for TPS of Space Launchers
Authors: Margarita Dufresne
Abstract:
This study is devoted to the development of a TPS for small reusable space launchers. We have used the SIRIUS design for the S1 prototype. Multiphysics coupling of the hypersonic reactive flow and the thermo-structural analysis, with and without ablation, is provided by STAR-CCM+ and COMSOL Multiphysics, and by FASTRAN and ACE+. The flow around hypersonic flight vehicles is characterized by the interaction of multiple shocks and the interaction of shocks with boundary layers. These interactions can have a very strong impact on the aeroheating experienced by the flight vehicle. A real-gas treatment implies modelling the gas in equilibrium or non-equilibrium. The Mach number ranges from 5 to 10 for first-stage flight. The goals of this effort are to validate the iterative coupling of the hypersonic physics models in STAR-CCM+ and FASTRAN with COMSOL Multiphysics and ACE+. COMSOL Multiphysics and ACE+ are used for the thermal-structural analysis to simulate conjugate heat transfer, with conduction, free convection and radiation, under the heat flux from the hypersonic flow. The reactive simulations involve an air chemistry model of five species: N, N2, NO, O and O2. Seventeen chemical reactions, involving dissociation and recombination, are included via the Dunn/Kang mechanism. Forward reaction rate coefficients based on a modified Arrhenius equation are computed for each reaction. The algorithms employed to solve the reactive equations use a second-order numerical scheme obtained by a "MUSCL" (Monotone Upstream-centred Schemes for Conservation Laws) extrapolation process in the structured case, with AUSM+ flux-vector splitting for the coupled inviscid flux. The MUSCL third-order scheme in STAR-CCM+ provides third-order spatial accuracy, except in the vicinity of strong shocks, where, due to limiting, the spatial accuracy is reduced to second order, and provides improved (i.e., reduced) dissipation compared to the second-order discretization scheme. The initial unstructured mesh is refined using a pressure-gradient technique for the shock/shock interaction test case. The turbulence model suggested by NASA is the k-omega SST with a1 = 0.355 and the QCR (quadratic) constitutive option. k and omega are specified explicitly in the initial conditions and in regions: k = 1E-6·Uinf² and omega = 5·Uinf/(mean aerodynamic chord or characteristic length). We put into practice modelling tips for hypersonic flow such as an automatically coupled solver, adaptive mesh refinement to capture and refine the shock front, use of the advancing layer mesher, and a larger prism layer thickness to capture the shock front on blunt surfaces. The temperature ranges from 300 K to 30,000 K and the pressure between 1e-4 and 100 atm. FASTRAN and ACE+ are coupled to provide a high-fidelity solution for the hot hypersonic reactive flow and conjugate heat transfer. The results of both approaches agree with the CIRCA wind tunnel results.
Keywords: hypersonic, first stage, high speed compressible flow, shock wave, aerodynamic heating, conjugate heat transfer, conduction, free convection, radiation, fastran, ace+, comsol multiphysics, star-ccm+, thermal protection system (tps), space launcher, wind tunnel
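The forward reaction rates mentioned above follow a modified Arrhenius form, k_f = A·T^n·exp(−Ta/T). The helper below simply evaluates that expression; the coefficients shown are placeholders for illustration, not the actual Dunn/Kang mechanism values.

```python
import math

def modified_arrhenius(A, n, Ta, T):
    """Forward rate coefficient k_f = A * T**n * exp(-Ta / T).
    A: pre-exponential factor, n: temperature exponent,
    Ta: activation temperature (Ea/R) [K], T: temperature [K]."""
    return A * T**n * math.exp(-Ta / T)

# Placeholder coefficients for an O2 + M -> 2O + M style dissociation reaction
# (illustrative only; the study uses the Dunn/Kang mechanism values).
A, n, Ta = 2.0e21, -1.5, 59500.0
for T in (4000.0, 8000.0, 12000.0):
    print(f"T = {T:6.0f} K   k_f = {modified_arrhenius(A, n, Ta, T):.3e}")
```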
Procedia PDF Downloads 72
52 An Argument for Agile, Lean, and Hybrid Project Management in Museum Conservation Practice: A Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts
Authors: Maria Ledinskaya
Abstract:
This paper is part case study and part literature review. It seeks to introduce Agile, Lean, and Hybrid project management concepts from the business, software development, and manufacturing fields to museum conservation by looking at their practical application to a recent conservation project at the Sainsbury Centre for Visual Arts. The author outlines the advantages of leaner and more agile conservation practices in today's faster, less certain, and more budget-conscious museum climate, where traditional project structures are no longer as relevant or effective. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre by private collectors Michael and Joyce Morris. It was a medium-sized conservation project of moderate complexity, planned and delivered in an environment with multiple known unknowns – an unresearched collection, unknown conditions and materials, an unconfirmed budget. The project was later impacted by the COVID-19 pandemic, introducing indeterminate lockdowns, budget cuts, staff changes, and the need to accommodate social distancing and remote communications. The author, then a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. The paper examines the project from the point of view of Traditional, Agile, Lean, and Hybrid project management. The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach – also known as Waterfall project management – which has significant drawbacks in today's museum environment due to its over-reliance on prediction-based planning and its low tolerance to change. In the last 20 years, alternative Agile, Lean and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, including the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics. Although not intentionally planned as such, the Morris Project had a number of Agile and Lean features which were instrumental to its successful delivery. These key features are identified as distributed decision-making, a co-located cross-disciplinary team, servant leadership, a focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and an emphasis on reducing procedural, financial, and logistical waste. Overall, the author's findings point in favour of a hybrid model, which combines traditional and alternative project processes and tools to suit the specific needs of the project.
Keywords: agile project management, conservation, hybrid project management, lean project management, waterfall project management
Procedia PDF Downloads 71
51 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus
Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert
Abstract:
Building Information Modelling (BIM) is the process of creating, managing, and centralizing information during the building lifecycle. BIM can be used throughout a construction project, from the initiation phase through the planning and execution phases to the maintenance and lifecycle management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management. However, most existing buildings do not have a BIM model, and creating a compatible BIM for them is very challenging. It requires special equipment for data capturing and effort to convert these data into a BIM model. The main difficulties in such projects are to define the data needed, the level of development (LOD), and the methodology to be adopted. In addition to managing information for an existing building, studying the impact of the built environment is a challenging topic, so integrating the existing terrain that surrounds buildings into the digital model is essential in order to run simulations such as flood simulation, energy simulation, etc. Making a replica of the physical model and updating its information in real time to create its Digital Twin (DT) is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D points, based on a reference surface (e.g., mean sea level, geoid, or ellipsoid). In addition, information related to the type of pavement materials, the types and heights of vegetation, and damaged surfaces can be integrated. Our aim in this study is to define the methodology to be used in order to provide a 3D BIM model of the site and the existing buildings, based on the case study of the "Ecole Spéciale des Travaux Publics (ESTP Paris)" engineering school campus. The property is located on a hilly site of 5 hectares and is composed of more than 20 buildings with a total area of 32,000 square metres and heights between 50 and 68 metres. In this work, the campus precise levelling grid is computed according to the NGF-IGN69 altimetric system, and the grid control points according to the RGF93 (Réseau Géodésique Français) – Lambert 93 French system, with different methods: (i) land topographic surveying using a robotic total station, (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode, (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters such as boundary limits, the number of floors, the georeferencing of the floors, the georeferencing of the 4 base corners of each building, etc. Once the input data are identified, the digital model of each building is created, and the DTM is also modelled. The process of altimetric determination is complex and requires effort in order to collect and analyze multiple data formats. Since many technologies can be used to produce digital models, different file formats such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC) and ReViT (RVT) will be generated. Checking the interoperability between BIM models is very important; in this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform.
Keywords: building information modeling, digital terrain model, existing buildings, interoperability
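Georeferencing survey points in the RGF93 / Lambert-93 planimetric system, as described above, can be scripted with pyproj; a minimal, hedged example is given below (EPSG:2154 is the Lambert-93 code, and the sample coordinate is made up, not a campus control point). Converting ellipsoidal heights to the NGF-IGN69 altimetric system additionally requires the national geoid grid, which is not shown here.

```python
from pyproj import Transformer

# WGS84 geographic (lon/lat) -> RGF93 / Lambert-93 plane coordinates (EPSG:2154)
to_lambert93 = Transformer.from_crs("EPSG:4326", "EPSG:2154", always_xy=True)

lon, lat = 2.355, 48.841   # made-up point near the ESTP Paris campus
easting, northing = to_lambert93.transform(lon, lat)
print(f"E = {easting:.2f} m, N = {northing:.2f} m")
```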
Procedia PDF Downloads 114
50 Experimental-Numerical Inverse Approaches in the Characterization and Damage Detection of Soft Viscoelastic Layers from Vibration Test Data
Authors: Alaa Fezai, Anuj Sharma, Wolfgang Mueller-Hirsch, André Zimmermann
Abstract:
Viscoelastic materials have been widely used in the automotive industry over the last few decades with different functionalities. Besides their main application as a simple and efficient surface damping treatment, they may ensure optimal operating conditions for on-board electronics as thermal interface or sealing layers. The dynamic behavior of viscoelastic materials is generally dependent on many environmental factors, the most important being temperature and strain rate or frequency. Prior to the reliability analysis of systems including viscoelastic layers, it is, therefore, crucial to accurately predict the dynamic and lifetime behavior of these materials. This includes the identification of the dynamic material parameters under critical temperature and frequency conditions along with a precise damage localization and identification methodology. The goal of this work is twofold. The first part aims at applying an inverse viscoelastic material-characterization approach for a wide frequency range and under different temperature conditions. For this sake, dynamic measurements are carried on a single lap joint specimen using an electrodynamic shaker and an environmental chamber. The specimen consists of aluminum beams assembled to adapter plates through a viscoelastic adhesive layer. The experimental setup is reproduced in finite element (FE) simulations, and frequency response functions (FRF) are calculated. The parameters of both the generalized Maxwell model and the fractional derivatives model are identified through an optimization algorithm minimizing the difference between the simulated and the measured FRFs. The second goal of the current work is to guarantee an on-line detection of the damage, i.e., delamination in the viscoelastic bonding of the described specimen during frequency monitored end-of-life testing. For this purpose, an inverse technique, which determines the damage location and size based on the modal frequency shift and on the change of the mode shapes, is presented. This includes a preliminary FE model-based study correlating the delamination location and size to the change in the modal parameters and a subsequent experimental validation achieved through dynamic measurements of specimen with different, pre-generated crack scenarios and comparing it to the virgin specimen. The main advantage of the inverse characterization approach presented in the first part resides in the ability of adequately identifying the material damping and stiffness behavior of soft viscoelastic materials over a wide frequency range and under critical temperature conditions. Classic forward characterization techniques such as dynamic mechanical analysis are usually linked to limitations under critical temperature and frequency conditions due to the material behavior of soft viscoelastic materials. Furthermore, the inverse damage detection described in the second part guarantees an accurate prediction of not only the damage size but also its location using a simple test setup and outlines; therefore, the significance of inverse numerical-experimental approaches in predicting the dynamic behavior of soft bonding layers applied in automotive electronics.Keywords: damage detection, dynamic characterization, inverse approaches, vibration testing, viscoelastic layers
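As a hedged illustration of the inverse-characterization idea in this abstract, the sketch below fits generalized-Maxwell stiffness parameters by minimizing the gap between a simulated and a measured FRF; a single-degree-of-freedom surrogate stands in for the authors' finite element model, and the mass, frequency range, starting values, and data file are assumptions.

    # Minimal sketch of the inverse characterization idea: tune generalized-Maxwell
    # parameters so a simulated FRF matches a measured one. A single-DOF surrogate
    # replaces the full FE model; mass, frequencies and measured data are hypothetical.
    import numpy as np
    from scipy.optimize import least_squares

    m = 0.1                                # effective mass, kg (assumed)
    freq = np.linspace(50, 2000, 400)      # Hz
    omega = 2 * np.pi * freq

    def frf(params):
        k_inf, k1, tau1, k2, tau2 = params
        k = k_inf + k1 * 1j*omega*tau1 / (1 + 1j*omega*tau1) \
                  + k2 * 1j*omega*tau2 / (1 + 1j*omega*tau2)   # complex stiffness of the joint
        return 1.0 / (k - m * omega**2)                        # receptance FRF

    frf_measured = np.load("frf_measured.npy")                 # complex FRF on the same frequency grid
    def residual(params):
        r = frf(params) - frf_measured
        return np.concatenate([r.real, r.imag])

    fit = least_squares(residual, x0=[1e6, 5e5, 1e-4, 2e5, 1e-3], method="lm")
    print(fit.x)   # identified stiffness / relaxation-time parameters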
Procedia PDF Downloads 206
49 Web-Based Decision Support Systems and Intelligent Decision-Making: A Systematic Analysis
Authors: Serhat Tüzün, Tufan Demirel
Abstract:
Decision Support Systems (DSS) have been investigated by researchers and technologists for more than 35 years. This paper analyses the developments in the architecture and software of these systems, provides a systematic analysis for different Web-based DSS approaches and Intelligent Decision-making Technologies (IDT), with the suggestion for future studies. Decision Support Systems literature begins with building model-oriented DSS in the late 1960s, theory developments in the 1970s, and the implementation of financial planning systems and Group DSS in the early and mid-80s. Then it documents the origins of Executive Information Systems, online analytic processing (OLAP) and Business Intelligence. The implementation of Web-based DSS occurred in the mid-1990s. With the beginning of the new millennia, intelligence is the main focus on DSS studies. Web-based technologies are having a major impact on design, development and implementation processes for all types of DSS. Web technologies are being utilized for the development of DSS tools by leading developers of decision support technologies. Major companies are encouraging its customers to port their DSS applications, such as data mining, customer relationship management (CRM) and OLAP systems, to a web-based environment. Similarly, real-time data fed from manufacturing plants are now helping floor managers make decisions regarding production adjustment to ensure that high-quality products are produced and delivered. Web-based DSS are being employed by organizations as decision aids for employees as well as customers. A common usage of Web-based DSS has been to assist customers configure product and service according to their needs. These systems allow individual customers to design their own products by choosing from a menu of attributes, components, prices and delivery options. The Intelligent Decision-making Technologies (IDT) domain is a fast growing area of research that integrates various aspects of computer science and information systems. This includes intelligent systems, intelligent technology, intelligent agents, artificial intelligence, fuzzy logic, neural networks, machine learning, knowledge discovery, computational intelligence, data science, big data analytics, inference engines, recommender systems or engines, and a variety of related disciplines. Innovative applications that emerge using IDT often have a significant impact on decision-making processes in government, industry, business, and academia in general. This is particularly pronounced in finance, accounting, healthcare, computer networks, real-time safety monitoring and crisis response systems. Similarly, IDT is commonly used in military decision-making systems, security, marketing, stock market prediction, and robotics. Even though lots of research studies have been conducted on Decision Support Systems, a systematic analysis on the subject is still missing. Because of this necessity, this paper has been prepared to search recent articles about the DSS. The literature has been deeply reviewed and by classifying previous studies according to their preferences, taxonomy for DSS has been prepared. With the aid of the taxonomic review and the recent developments over the subject, this study aims to analyze the future trends in decision support systems.Keywords: decision support systems, intelligent decision-making, systematic analysis, taxonomic review
Procedia PDF Downloads 280
48 An Engineer-Oriented Life Cycle Assessment Tool for Building Carbon Footprint: The Building Carbon Footprint Evaluation System in Taiwan
Authors: Hsien-Te Lin
Abstract:
The purpose of this paper is to introduce the BCFES (building carbon footprint evaluation system), which is a LCA (life cycle assessment) tool developed by the Low Carbon Building Alliance (LCBA) in Taiwan. A qualified BCFES for the building industry should fulfill the function of evaluating carbon footprint throughout all stages in the life cycle of building projects, including the production, transportation and manufacturing of materials, construction, daily energy usage, renovation and demolition. However, many existing BCFESs are too complicated and not very designer-friendly, creating obstacles in the implementation of carbon reduction policies. One of the greatest obstacle is the misapplication of the carbon footprint inventory standards of PAS2050 or ISO14067, which are designed for mass-produced goods rather than building projects. When these product-oriented rules are applied to building projects, one must compute a tremendous amount of data for raw materials and the transportation of construction equipment throughout the construction period based on purchasing lists and construction logs. This verification method is very cumbersome by nature and unhelpful to the promotion of low carbon design. With a view to provide an engineer-oriented BCFE with pre-diagnosis functions, a component input/output (I/O) database system and a scenario simulation method for building energy are proposed herein. Most existing BCFESs base their calculations on a product-oriented carbon database for raw materials like cement, steel, glass, and wood. However, data on raw materials is meaningless for the purpose of encouraging carbon reduction design without a feedback mechanism, because an engineering project is not designed based on raw materials but rather on building components, such as flooring, walls, roofs, ceilings, roads or cabinets. The LCBA Database has been composited from existing carbon footprint databases for raw materials and architectural graphic standards. Project designers can now use the LCBA Database to conduct low carbon design in a much more simple and efficient way. Daily energy usage throughout a building's life cycle, including air conditioning, lighting, and electric equipment, is very difficult for the building designer to predict. A good BCFES should provide a simplified and designer-friendly method to overcome this obstacle in predicting energy consumption. In this paper, the author has developed a simplified tool, the dynamic Energy Use Intensity (EUI) method, to accurately predict energy usage with simple multiplications and additions using EUI data and the designed efficiency levels for the building envelope, AC, lighting and electrical equipment. Remarkably simple to use, it can help designers pre-diagnose hotspots in building carbon footprint and further enhance low carbon designs. The BCFES-LCBA offers the advantages of an engineer-friendly component I/O database, simplified energy prediction methods, pre-diagnosis of carbon hotspots and sensitivity to good low carbon designs, making it an increasingly popular carbon management tool in Taiwan. To date, about thirty projects have been awarded BCFES-LCBA certification and the assessment has become mandatory in some cities.Keywords: building carbon footprint, life cycle assessment, energy use intensity, building energy
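As a hedged illustration of the dynamic EUI idea (simple multiplications and additions over end uses), the sketch below estimates use-stage energy and carbon from baseline intensities and designed efficiency levels; every number is a placeholder, not a value from the LCBA database.

    # Minimal sketch of a dynamic-EUI style estimate: baseline end-use intensities are
    # scaled by designed efficiency factors and summed. All figures are illustrative,
    # not values from the LCBA database.
    floor_area_m2 = 12_000.0
    baseline_eui = {"cooling": 45.0, "lighting": 25.0, "equipment": 30.0}   # kWh/m2/yr (assumed)
    efficiency = {"cooling": 0.80, "lighting": 0.70, "equipment": 0.95}     # designed levels (assumed)

    annual_kwh = sum(floor_area_m2 * baseline_eui[u] * efficiency[u] for u in baseline_eui)
    co2_kg = annual_kwh * 0.5                 # hypothetical grid emission factor, kgCO2e/kWh
    lifetime_co2_t = co2_kg * 60 / 1000.0     # 60-year service life (assumed)
    print(round(lifetime_co2_t, 1), "tCO2e over the use stage")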
Procedia PDF Downloads 139
47 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection
Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy
Abstract:
Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform by automatically extracting the features needed to detect facial expressions and emotions. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by several parallel modules between the input and output of the network, each focusing on the extraction of a different type of coarse feature with fine-grained details, in order to break the symmetry of the produced information. In this way, we also leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We further develop this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax tends to reach the gold labels very quickly, which drives the model toward over-fitting, because it cannot determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than static shape of the input tensor in the SoftMax layer and by specifying a desired soft margin. The margin acts as a controller of how hard the model must work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting same-class labels and separating different class labels in the normalized log domain. We penalize predictions with high divergence from the ground-truth labels: correct feature vectors are shortened and false prediction tensors are enlarged, meaning that more weight is assigned to classes that lie close to each other (namely, “hard labels to learn”). In doing so, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on addressing the weak convergence of the Adam optimizer on non-convex problems. Our optimizer works with an alternative gradient-updating procedure using an exponentially weighted moving average function for faster convergence, and it exploits a weight decay method that drastically reduces the learning rate near the optima to reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets, with 93.30% on FER-2013 and a 16% improvement compared to the first rank after 10 years, reaching 90.73% on RAF-DB and 100% k-fold average accuracy on CK+, and by providing top performance relative to other networks, which require much larger training datasets. Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks
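As a hedged illustration of the margin idea behind the proposed loss, the sketch below implements a generic additive-margin SoftMax cross-entropy in NumPy; it is not the authors' exact Dynamic Soft-Margin formulation, and the margin and scale values are assumptions.

    # Minimal sketch of a margin-penalized SoftMax cross-entropy (generic additive-margin
    # form, not the authors' exact Dynamic Soft-Margin formulation). The margin makes the
    # target logit harder to satisfy, forcing more discriminative embeddings.
    import numpy as np

    def soft_margin_ce(logits, labels, margin=0.35, scale=30.0):
        z = logits.copy()
        z[np.arange(len(labels)), labels] -= margin      # penalize the true-class logit
        z *= scale                                       # temperature/scale factor (assumed)
        z -= z.max(axis=1, keepdims=True)                # numerical stability
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    logits = np.random.randn(4, 7)        # 4 samples, 7 emotion classes
    labels = np.array([0, 3, 6, 2])
    print(soft_margin_ce(logits, labels))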
Procedia PDF Downloads 75
46 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors
Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov
Abstract:
Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the use of metal salts or polymers and polyelectrolyte addition for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as ‘sacrificial electrodes’. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for the coagulant mixing. Electrocoagulation reactors can be open (with free water surface) and closed (pressurized). Independently of the type of rector, hydraulic head loss is an important factor for its design. The present work focuses on the study of the total hydraulic head loss and flow velocity and pressure distribution in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and accessories (minor head losses) is presented, and compared to the head loss measured on a semi-pilot scale laboratory model for different flow rates through the reactor. The tests included laminar, transitional and turbulent flow. The observed head loss was compared also to the head loss predicted by several known conceptual theoretical and empirical equations, specific for flow in concentric annular pipes. Four single concentric annular cross section and one multiple concentric annular cross section reactor configuration were studied. The theoretical head loss resulted higher than the observed in the laboratory model in some of the tests, and lower in others of them, depending also on the assumed value for the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity, and that flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of such assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented using the ANSYS Fluent software, demonstrating that pressure and flow velocity distribution inside the reactor actually is not uniform. Based on the analysis, the equations that predict better the head loss in single and multiple annular sections were obtained. Other factors that may impact the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for design and scale-up of electrocoagulation reactors, to be integrated into new or existing water treatment plants.Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model
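As a hedged illustration of the head-loss estimates compared in this abstract, the sketch below applies the Darcy-Weisbach equation to a concentric annulus using the hydraulic-diameter approach, switching between a laminar friction factor and the Haaland correlation; the geometry, roughness, and flow rate are assumptions.

    # Minimal sketch: friction head loss in a concentric annulus via the hydraulic-diameter
    # approach (Darcy-Weisbach). Geometry, roughness and flow rate are illustrative; the study
    # compares several annular-flow correlations against laboratory measurements.
    import math

    nu, g = 1.0e-6, 9.81                    # kinematic viscosity of water (m2/s), gravity (m/s2)
    D_out, D_in, L = 0.10, 0.06, 1.5        # outer/inner diameters and reactor length, m (assumed)
    Q = 2.0e-4                              # flow rate, m3/s (assumed)
    eps = 1.5e-6                            # wall roughness, m (assumed)

    A = math.pi / 4 * (D_out**2 - D_in**2)  # annular cross-section area
    Dh = D_out - D_in                       # hydraulic diameter of the annulus
    v = Q / A
    Re = v * Dh / nu

    if Re < 2300:
        f = 64.0 / Re                                                     # laminar, circular-pipe form
    else:
        f = (-1.8 * math.log10((eps / Dh / 3.7)**1.11 + 6.9 / Re))**-2    # Haaland correlation

    h_f = f * (L / Dh) * v**2 / (2 * g)      # friction head loss, m
    print(f"Re = {Re:.0f}, head loss = {h_f*1000:.2f} mm")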
Procedia PDF Downloads 218
45 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management
Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
Wildland fires, also known as forest fires or wildfires, are exhibiting an alarming surge in frequency in recent times, further adding to its perennial global concern. Forest fires often lead to devastating consequences ranging from loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, meticulously categorizing them into distinct phases, namely pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and mitigation of the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times leading to their adoption in decision-making in diverse applications including disaster management. This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fire. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities
Procedia PDF Downloads 72
44 Finite Element Modeling of Global Ti-6Al-4V Mechanical Behavior in Relationship with Microstructural Parameters
Authors: Fatna Benmessaoud, Mohammed Cheikh, Vencent Velay, Vanessa Vedal, Farhad Rezai-Aria, Christine Boher
Abstract:
The global mechanical behavior of materials is strongly linked to their microstructure, especially their crystallographic texture and grain morphology. These material aspects determine the character of the mechanical fields (heterogeneous or homogeneous) and thus give the global behavior a degree of anisotropy that depends on the initial microstructure. For these reasons, predicting the global behavior of materials in relationship with the microstructure must be performed with a multi-scale approach, and multi-scale modeling in the context of crystal plasticity is widely used. In the present contribution, a phenomenological elasto-viscoplastic model developed in the crystal plasticity context and the finite element method are used to investigate the effects of crystallographic texture and grain size on the global behavior of a polycrystalline equiaxed Ti-6Al-4V alloy. The constitutive equations of this model are written at the local scale for each slip system within each grain, while the strain and stress fields are investigated at the global scale via a finite element scale transition. The beta phase of the modeled Ti-6Al-4V alloy is neglected, as its fraction is less than 10%. Three families of slip systems of the alpha phase are considered: the basal and prismatic families with an ⟨a⟩ Burgers vector and the pyramidal family with a ⟨c+a⟩ Burgers vector. Keywords: microstructural parameters, multi-scale modeling, crystal plasticity, Ti-6Al-4V alloy
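As a hedged illustration of the local-scale ingredient of such a crystal plasticity model, the sketch below evaluates the resolved shear stress on a slip system via Schmid's law and a power-law viscoplastic slip rate; the orientation, stress state, and material parameters are assumptions, not the identified Ti-6Al-4V values.

    # Minimal sketch of the local crystal-plasticity ingredient: resolved shear stress by
    # Schmid's law and a power-law viscoplastic slip rate per system. Orientation, stress
    # and parameter values are illustrative, not the identified Ti-6Al-4V parameters.
    import numpy as np

    def slip_rates(sigma, slip_systems, gamma0=1e-3, m=0.05, tau_c=350.0):
        rates = []
        for n, s in slip_systems:                       # slip-plane normal, slip direction
            schmid = 0.5 * (np.outer(s, n) + np.outer(n, s))
            tau = np.tensordot(schmid, sigma)           # resolved shear stress (MPa)
            rates.append(gamma0 * np.sign(tau) * abs(tau / tau_c) ** (1.0 / m))
        return np.array(rates)

    # One basal system of the alpha phase in the crystal frame (others omitted for brevity).
    basal = [(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]))]
    sigma = np.array([[0.0, 0.0, 120.0],
                      [0.0, 0.0, 0.0],
                      [120.0, 0.0, 0.0]])               # MPa, assumed stress state
    print(slip_rates(sigma, basal))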
Procedia PDF Downloads 126
43 Absolute Quantification of the Bexsero Vaccine Component Factor H Binding Protein (fHbp) by Selected Reaction Monitoring: The Contribution of Mass Spectrometry in Vaccinology
Authors: Massimiliano Biagini, Marco Spinsanti, Gabriella De Angelis, Sara Tomei, Ilaria Ferlenghi, Maria Scarselli, Alessia Biolchi, Alessandro Muzzi, Brunella Brunelli, Silvana Savino, Marzia M. Giuliani, Isabel Delany, Paolo Costantino, Rino Rappuoli, Vega Masignani, Nathalie Norais
Abstract:
The gram-negative bacterium Neisseria meningitidis serogroup B (MenB) is an exclusively human pathogen representing the major cause of meningitides and severe sepsis in infants and children but also in young adults. This pathogen is usually present in the 30% of healthy population that act as a reservoir, spreading it through saliva and respiratory fluids during coughing, sneezing, kissing. Among surface-exposed protein components of this diplococcus, factor H binding protein is a lipoprotein proved to be a protective antigen used as a component of the recently licensed Bexsero vaccine. fHbp is a highly variable meningococcal protein: to reflect its remarkable sequence variability, it has been classified in three variants (or two subfamilies), and with poor cross-protection among the different variants. Furthermore, the level of fHbp expression varies significantly among strains, and this has also been considered an important factor for predicting MenB strain susceptibility to anti-fHbp antisera. Different methods have been used to assess fHbp expression on meningococcal strains, however, all these methods use anti-fHbp antibodies, and for this reason, the results are affected by the different affinity that antibodies can have to different antigenic variants. To overcome the limitations of an antibody-based quantification, we developed a quantitative Mass Spectrometry (MS) approach. Selected Reaction Monitoring (SRM) recently emerged as a powerful MS tool for detecting and quantifying proteins in complex mixtures. SRM is based on the targeted detection of ProteoTypicPeptides (PTPs), which are unique signatures of a protein that can be easily detected and quantified by MS. This approach, proven to be highly sensitive, quantitatively accurate and highly reproducible, was used to quantify the absolute amount of fHbp antigen in total extracts derived from 105 clinical isolates, evenly distributed among the three main variant groups and selected to be representative of the fHbp circulating subvariants around the world. We extended the study at the genetic level investigating the correlation between the differential level of expression and polymorphisms present within the genes and their promoter sequences. The implications of fHbp expression on the susceptibility of the strain to killing by anti-fHbp antisera are also presented. To date this is the first comprehensive fHbp expression profiling in a large panel of Neisseria meningitidis clinical isolates driven by an antibody-independent MS-based methodology, opening the door to new applications in vaccine coverage prediction and reinforcing the molecular understanding of released vaccines.Keywords: quantitative mass spectrometry, Neisseria meningitidis, vaccines, bexsero, molecular epidemiology
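As a hedged illustration of how absolute SRM quantification is commonly computed, the sketch below applies an isotope-dilution style calculation in which the light-to-heavy peak-area ratio of a proteotypic peptide is scaled by the spiked standard amount; the abstract does not detail the exact calibration scheme used, and the peptide names, areas, and spike levels are assumptions.

    # Minimal sketch of an isotope-dilution style SRM calculation: the light/heavy peak-area
    # ratio of a proteotypic peptide, multiplied by the spiked heavy-standard amount, gives
    # the endogenous amount. All names and numbers below are purely illustrative.
    transitions = {
        "fHbp_PTP1": {"light_area": 8.4e5, "heavy_area": 2.1e5, "spiked_fmol": 50.0},
        "fHbp_PTP2": {"light_area": 3.9e5, "heavy_area": 1.0e5, "spiked_fmol": 50.0},
    }

    estimates = [t["light_area"] / t["heavy_area"] * t["spiked_fmol"] for t in transitions.values()]
    fhbp_fmol = sum(estimates) / len(estimates)     # average over proteotypic peptides
    print(f"fHbp estimate: {fhbp_fmol:.1f} fmol per amount of extract analyzed (illustrative)")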
Procedia PDF Downloads 314
42 Sustainability Framework for Water Management in New Zealand's Canterbury Region
Authors: Bryan Jenkins
Abstract:
Introduction: The expansion of irrigation in the Canterbury region has led to the sustainability limits being reached for water availability and the cumulative effects of land use intensification. The institutional framework under New Zealand’s Resource Management Act was found to be an inadequate basis for managing water at sustainability limits. An alternative paradigm for water management was developed based on collaborative governance and nested adaptive systems. This led to the formulation and implementation of the Canterbury Water Management Strategy. Methods: The nested adaptive system approach was adopted. Sustainability issues were identified at multiple spatial and time scales and defined potential failure pathways for the water resource system. These included biophysical and socio-economic issues such as water availability, cumulative effects on water quality due to land use intensification, projected changes in climate, public health, institutional arrangements, economic outcomes and externalities, and, social effects of changing technology. This led to the derivation of sustainability strategies to address these failure pathways. The collaborative governance approach involved stakeholder participation and community engagement to decide on a regional strategy; regional and zone committees of community and rūnanga (Māori groups) members to develop implementation programmes for the strategy; and, farmer collectives for operational management. Findings: The strategy identified improvements in the efficiency of use of water already allocated was more effective in improving water availability than a reliance on increased storage alone. New forms of storage with less adverse impacts were introduced, such as managed aquifer recharge and off-river storage. Reductions of nutrients from land use intensification by improving management practices has been a priority. Solutions packages for addressing the degradation of vulnerable lakes and rivers have been prepared. Biodiversity enhancement projects have been initiated. Greater involvement of Māori has led to the incorporation of kaitiakitanga (resource stewardship) into implementation programmes. Emerging issues are the need for improved integration of surface water and groundwater interactions, increased use of modelling of water and financial outcomes to guide decision making, and, equity in allocation among existing users as well as between existing and future users. Conclusions: However, sustainability analysis indicates that the proposed levels of management interventions are not sufficient to achieve community targets for water management. There is a need for more proactive recovery and rehabilitation measures. Managing to environmental limits is not sufficient, rather managing adaptive cycles is needed. Better measurement and management of water use efficiency is required. Proposed implementation packages are not sufficient to deliver desired water quality outcomes. Greater attention to targets important to environmental and recreational interests is needed to maintain trust in the collaborative process. Implementation programmes don’t adequately address climate change adaptations and greenhouse gas mitigation. Affordability is a constraint on adaptive capacity of farmers and communities. More funding mechanisms are required to implement proactive measures. 
The legislative and institutional framework needs to be changed to incorporate water framework legislation, regional sustainability strategies and water infrastructure coordination. Keywords: collaborative governance, irrigation management, nested adaptive systems, sustainable water management
Procedia PDF Downloads 159
41 On the Bias and Predictability of Asylum Cases
Authors: Panagiota Katsikouli, William Hamilton Byrne, Thomas Gammeltoft-Hansen, Tijs Slaats
Abstract:
An individual who demonstrates a well-founded fear of persecution or faces a real risk of being subjected to torture is eligible for asylum. In Danish law, the exact legal thresholds reflect those established by international conventions, notably the 1951 Refugee Convention and the 1950 European Convention on Human Rights. These international treaties, however, remain largely silent when it comes to how states should assess asylum claims. As a result, national authorities are typically left to determine an individual’s legal eligibility on a narrow basis consisting of an oral testimony, which may itself be hampered by several factors, including imprecise language interpretation, insecurity, or a lack of trust towards the authorities among applicants. The shaky ground on which authorities must base their subjective perceptions of asylum applicants' credibility calls into question whether, in all cases, adjudicators make the correct decision. Moreover, the subjective element in these assessments raises the question of whether individual asylum cases could be afflicted by implicit biases or stereotyping amongst adjudicators. In fact, recent studies have uncovered significant correlations between decision outcomes and the experience and gender of the assigned judge, as well as correlations between asylum outcomes and entirely external events such as weather and political elections. In this study, we analyze a publicly available dataset containing approximately 8,000 summaries of asylum cases that were initially rejected and then re-tried by the Refugee Appeals Board (RAB) in Denmark. First, we look for variations in the recognition rates with regard to a number of applicant features: their country of origin/nationality, their identified gender, their identified religion, their ethnicity, whether torture was mentioned in their case and, if so, whether it was supported or not, and the year the applicant entered Denmark. In order to extract those features from the text summaries, as well as the final decision of the RAB, we applied natural language processing and regular expressions, adjusting for the Danish language. We observed interesting variations in recognition rates related to the applicants’ country of origin, ethnicity, year of entry, and the support or not of torture claims, whenever those were made in the case. The appearance (or absence) of significant variations in the recognition rates does not necessarily imply (or rule out) bias in the decision-making process. None of the considered features, with the possible exception of the torture claims, should be decisive factors for an asylum seeker’s fate. We therefore investigate whether the decision can be predicted on the basis of these features and, consequently, whether biases are likely to exist in the decision-making process. We employed a number of machine learning classifiers and found that, when using the applicant’s country of origin, religion, ethnicity and year of entry with a random forest classifier or a decision tree, the prediction accuracy is as high as 82% and 85%, respectively, suggesting that these features have potentially predictive properties with regard to the outcome of an asylum case. Our analysis and findings call for further investigation of the predictability of the outcome on a larger dataset of 17,000 cases, which is currently underway. Keywords: asylum adjudications, automated decision-making, machine learning, text mining
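As a hedged illustration of the classification experiment described above, the sketch below trains a random forest on a handful of applicant features; the CSV file and column names are assumptions, since in the study the features are extracted from Danish case summaries with NLP and regular expressions.

    # Minimal sketch of the classification experiment: predict the RAB outcome from a few
    # applicant features with a random forest. The CSV file and column names are hypothetical.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    df = pd.read_csv("rab_cases.csv")                       # hypothetical extracted features
    X = pd.get_dummies(df[["country_of_origin", "religion", "ethnicity", "year_of_entry"]])
    y = df["recognized"]                                    # 1 = asylum granted on appeal

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))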
Procedia PDF Downloads 96
40 Quantum Chemical Prediction of Standard Formation Enthalpies of Uranyl Nitrates and Its Degradation Products
Authors: Mohamad Saab, Florent Real, Francois Virot, Laurent Cantrel, Valerie Vallet
Abstract:
All spent nuclear fuel reprocessing plants use the PUREX process (Plutonium Uranium Refining by Extraction), which is a liquid-liquid extraction method. The organic extracting solvent is a mixture of tri-n-butyl phosphate (TBP) and hydrocarbon solvent such as hydrogenated tetra-propylene (TPH). By chemical complexation, uranium and plutonium (from spent fuel dissolved in nitric acid solution), are separated from fission products and minor actinides. During a normal extraction operation, uranium is extracted in the organic phase as the UO₂(NO₃)₂(TBP)₂ complex. The TBP solvent can form an explosive mixture called red oil when it comes in contact with nitric acid. The formation of this unstable organic phase originates from the reaction between TBP and its degradation products on the one hand, and nitric acid, its derivatives and heavy metal nitrate complexes on the other hand. The decomposition of the red oil can lead to violent explosive thermal runaway. These hazards are at the origin of several accidents such as the two in the United States in 1953 and 1975 (Savannah River) and, more recently, the one in Russia in 1993 (Tomsk). This raises the question of the exothermicity of reactions that involve TBP and all other degradation products, and calls for a better knowledge of the underlying chemical phenomena. A simulation tool (Alambic) is currently being developed at IRSN that integrates thermal and kinetic functions related to the deterioration of uranyl nitrates in organic and aqueous phases, but not of the n-butyl phosphate. To include them in the modeling scheme, there is an urgent need to obtain the thermodynamic and kinetic functions governing the deterioration processes in liquid phase. However, little is known about the thermodynamic properties, like standard enthalpies of formation, of the n-butyl phosphate molecules and of the UO₂(NO₃)₂(TBP)₂ UO₂(NO₃)₂(HDBP)(TBP) and UO₂(NO₃)₂(HDBP)₂ complexes. In this work, we propose to estimate the thermodynamic properties with Quantum Methods (QM). Thus, in the first part of our project, we focused on the mono, di, and tri-butyl complexes. Quantum chemical calculations have been performed to study several reactions leading to the formation of mono-(H₂MBP), di-(HDBP), and TBP in gas and liquid phases. In the gas phase, the optimal structures of all species were optimized using the B3LYP density functional. Triple-ζ def2-TZVP basis sets were used for all atoms. All geometries were optimized in the gas-phase, and the corresponding harmonic frequencies were used without scaling to compute the vibrational partition functions at 298.15 K and 0.1 Mpa. Accurate single point energies were calculated using the efficient localized LCCSD(T) method to the complete basis set limit. Whenever species in the liquid phase are considered, solvent effects are included with the COSMO-RS continuum model. The standard enthalpies of formation of TBP, HDBP, and H2MBP are finally predicted with an uncertainty of about 15 kJ mol⁻¹. In the second part of this project, we have investigated the fundamental properties of three organic species that mostly contribute to the thermal runaway: UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂ using the same quantum chemical methods that were used for TBP and its derivatives in both the gas and the liquid phase. We will discuss the structures and thermodynamic properties of all these species.Keywords: PUREX process, red oils, quantum chemical methods, hydrolysis
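As a hedged illustration of the Hess-law bookkeeping behind such enthalpy predictions, the sketch below combines quantum-chemical total enthalpies for a hypothetical TBP hydrolysis reaction with reference formation enthalpies; all numerical values are placeholders, not the LCCSD(T)/COSMO-RS results of the study.

    # Minimal sketch of the Hess-law step used to anchor a computed formation enthalpy:
    # a reaction enthalpy from quantum-chemical total enthalpies (electronic energy + ZPE
    # + thermal correction, in hartree) is combined with reference formation enthalpies.
    HARTREE_TO_KJ = 2625.5

    # H298 per species in hartree: placeholders, not computed values.
    H298 = {"TBP": -1.000, "H2O": -76.400, "HDBP": -0.800, "BuOH": -0.300}
    # Hypothetical hydrolysis reaction: TBP + H2O -> HDBP + BuOH
    dH_rxn = (H298["HDBP"] + H298["BuOH"] - H298["TBP"] - H298["H2O"]) * HARTREE_TO_KJ

    # Reference formation enthalpies in kJ/mol; H2O is the standard value, the others are
    # placeholders chosen only to show the arithmetic.
    dHf_ref = {"TBP": -1500.0, "H2O": -285.8, "BuOH": -327.0}
    dHf_HDBP = dH_rxn + dHf_ref["TBP"] + dHf_ref["H2O"] - dHf_ref["BuOH"]
    print(f"Formation enthalpy of HDBP (illustrative): {dHf_HDBP:.1f} kJ/mol")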
Procedia PDF Downloads 189
39 Correlation of Clinical and Sonographic Findings with Cytohistology for Diagnosis of Ovarian Tumours
Authors: Meenakshi Barsaul Chauhan, Aastha Chauhan, Shilpa Hurmade, Rajeev Sen, Jyotsna Sen, Monika Dalal
Abstract:
Introduction: Ovarian masses are common forms of neoplasm in women and represent two-thirds of gynaecological malignancies. A pre-operative suggestion of malignancy can guide the gynaecologist to refer women with a suspected pelvic mass to a gynaecological oncologist for appropriate therapy and optimized treatment, which can improve survival. In the younger age group, preoperative differentiation into benign or malignant pathology can decide between conservative and radical surgery. Imaging modalities have a definite role in establishing the diagnosis. By using the International Ovarian Tumor Analysis (IOTA) classification with sonography, costly radiological methods like Magnetic Resonance Imaging (MRI) / computed tomography (CT) scans can be reduced, especially in developing countries like India. Thus, this study was undertaken to evaluate the role of clinical methods and sonography in diagnosing the nature of ovarian tumors. Material and Methods: This prospective observational study was conducted on 40 patients presenting with ovarian masses in the Department of Obstetrics and Gynaecology at a tertiary care center in northern India. Functional cysts were excluded. Ultrasonography and color Doppler were performed on all the cases. IOTA rules were applied, which take into account locularity, size, presence of solid components, acoustic shadow, Doppler flow, etc. Magnetic Resonance Imaging (MRI) / computed tomography (CT) scans of the abdomen and pelvis were done in cases where sonography was inconclusive. In inoperable cases, fine needle aspiration cytology (FNAC) was done. The histopathology report after surgery and the cytology report after FNAC were correlated statistically with the pre-operative diagnosis made clinically and sonographically using IOTA rules. Statistical Analysis: Descriptive measures were analyzed using mean and standard deviation, the Student t-test was applied, and proportions were analyzed using the chi-square test. Inferential measures were analyzed by sensitivity, specificity, negative predictive value, and positive predictive value. Results: A provisional diagnosis of a benign tumor was made in 16 (42.5%) and of a malignant tumor in 24 (57.5%) patients on the basis of clinical findings. With IOTA simple rules on sonography, 15 (37.5%) were found to be benign, while 23 (57.5%) were found to be malignant, and findings were inconclusive in 2 patients (5%). FNAC/histopathology, which was taken as the gold standard, reported 14 (35%) benign ovarian tumors and 26 (65%) malignant ones. The clinical findings alone were found to have a sensitivity of 66.6% and a specificity of 90.9%. USG alone had a sensitivity of 86% and a specificity of 80%. When clinical findings and the IOTA simple rules of sonography were combined (excluding inconclusive masses), the sensitivity and specificity were 83.3% and 92.3%, respectively. When including inconclusive masses, the sensitivity was 91.6% and the specificity was 89.2%. Conclusion: IOTA simple sonography rules are highly sensitive and specific in the prediction of ovarian malignancy and are also easy to use and easily reproducible. Thus, combining clinical examination with USG will help in the better management of patients in terms of time, cost, and prognosis. This will also avoid the need for costlier modalities like CT and MRI. Keywords: benign, international ovarian tumor analysis classification, malignant, ovarian tumours, sonography
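As a hedged illustration of the diagnostic indices reported above, the sketch below derives sensitivity, specificity, PPV, and NPV from a 2x2 table against the cytology/histopathology gold standard; the counts are assumptions chosen only to show the arithmetic, since the abstract reports percentages rather than the underlying table.

    # Minimal sketch: diagnostic indices from a 2x2 table against the cyto/histopathology gold
    # standard. The counts are hypothetical and only illustrate the arithmetic.
    tp, fn = 22, 4     # malignant on histology: test positive / test negative (assumed)
    tn, fp = 13, 1     # benign on histology: test negative / test positive (assumed)

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, PPV {ppv:.1%}, NPV {npv:.1%}")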
Procedia PDF Downloads 80
38 A Digital Clone of an Irrigation Network Based on Hardware/Software Simulation
Authors: Pierre-Andre Mudry, Jean Decaix, Jeremy Schmid, Cesar Papilloud, Cecile Munch-Alligne
Abstract:
In most of the Swiss Alpine regions, the availability of water resources is usually adequate even in times of drought, as evidenced by the 2003 and 2018 summers. Indeed, important natural stocks are for the moment available in the form of snow and ice, but the situation is likely to change in the future due to global and regional climate change. In addition, alpine mountain regions are areas where climate change will be felt very rapidly and with high intensity. For instance, the ice regime of these regions has already been affected in recent years, with a modification of the monthly availability and extreme events of precipitation. The current research, focusing on the municipality of Val de Bagnes, located in the canton of Valais, Switzerland, is part of a project led by the Altis company and carried out in collaboration with WSL, BlueArk Entremont, and HES-SO Valais-Wallis. In this region, water occupies a key position, notably for winter and summer tourism. Thus, multiple actors want to anticipate the future needs and availability of water on both the 2050 and 2100 horizons in order to plan the modifications to the water supply and distribution networks. For those changes to be relevant and efficient, a good knowledge of the current water distribution networks is of utmost importance. In the current case, the drinking water network is well documented, but this is not the case for the irrigation one. Since the water consumption for irrigation is ten times higher than for drinking water, data acquisition on the irrigation network is a major point in determining future scenarios. This paper first presents the instrumentation and simulation of the irrigation network using custom-designed IoT devices, which are coupled with a simulated digital clone to reduce the number of measuring locations. The ad-hoc IoT devices developed are energy-autonomous and can measure flows and pressures using industrial sensors such as calorimetric water flow meters. Measurements are periodically transmitted using the LoRaWAN protocol over a dedicated infrastructure deployed in the municipality. The gathered values can then be visualized in real time on a dashboard, which also provides historical data for analysis. In a second phase, a digital clone of the irrigation network was modeled using EPANET, a software package for water distribution systems that performs extended-period simulations of flows and pressures in pressurized networks composed of reservoirs, pipes, junctions, and sinks. As preliminary work, only a part of the irrigation network was modeled and validated by comparison with the measurements. The simulations are carried out by imposing the consumption of water at several locations. The validation is performed by comparing the simulated pressures at different nodes with the measured ones. An accuracy of +/- 15% is observed on most of the nodes, which is acceptable for the operator of the network and demonstrates the validity of the approach. Future steps will focus on the deployment of the measurement devices on the whole network and the complete modeling of the network. Then, scenarios of future consumption will be investigated.
Acknowledgment: The authors would like to thank the Swiss Federal Office for the Environment (FOEN) and the Swiss Federal Office for Agriculture (OFAG) for their financial support, and ALTIS for the technical support, this project being part of the Swiss pilot program 'Adaptation aux changements climatiques'. Keywords: hydraulic digital clone, IoT water monitoring, LoRaWAN water measurements, EPANET, irrigation network
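As a hedged illustration of the pressure-validation step, the sketch below runs an EPANET model and checks simulated node pressures against field measurements within the ±15% tolerance; the wntr package is used here as an assumed convenience wrapper around the EPANET engine, and the input file and node identifiers are hypothetical.

    # Minimal sketch of the validation step: run an EPANET model of the irrigation network and
    # check simulated node pressures against LoRaWAN measurements within +/-15%. The wntr
    # package is an assumed convenience wrapper; the .inp file and node IDs are hypothetical.
    import wntr

    wn = wntr.network.WaterNetworkModel("irrigation_network.inp")
    results = wntr.sim.EpanetSimulator(wn).run_sim()
    simulated = results.node["pressure"].iloc[0]          # pressures at the first time step, m

    measured = {"N12": 54.2, "N27": 61.8, "N35": 47.0}     # field measurements, m (hypothetical)
    for node, p_meas in measured.items():
        p_sim = simulated[node]
        dev = (p_sim - p_meas) / p_meas
        status = "OK" if abs(dev) <= 0.15 else "outside tolerance"
        print(f"{node}: simulated {p_sim:.1f} m, measured {p_meas:.1f} m, deviation {dev:+.1%} ({status})")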
Procedia PDF Downloads 147