Search results for: curl operator

40 The Staphylococcus aureus Exotoxin Recognition Using Nanobiosensor Designed by an Antibody-Attached Nanosilica Method

Authors: Hamed Ahari, Behrouz Akbari Adreghani, Vadood Razavilar, Amirali Anvar, Sima Moradi, Hourieh Shalchi

Abstract:

Considering the ever-increasing population and the industrialization of modern life, the toxins produced in food products can no longer be detected with traditional techniques: the isolation time is not cost-effective, and in most cases the precision of practical techniques such as bacterial cultivation suffers from operator errors or errors in the mixtures used. Hence, with the advent of nanotechnology, the design of selective, smart sensors is one of the great industrial advances in the quality control of food products, able to identify the amount and toxicity of bacteria within a few minutes and with very high precision. Methods and Materials: In this technique, a sensor based on the attachment of a bacterial antibody to nanoparticles was used. As the absorption basis for recognizing the bacterial toxin, medium-sized silica nanoparticles of 10 nm (Notrino brand) in the form of a solid powder were utilized. The suspension produced from the antibody-linked nanosilica was then placed next to samples of distilled water contaminated with Staphylococcus aureus toxin at a dilution of 10⁻³, so that if any toxin were present in the sample a bond would form between the toxin antigen and the antibody. Finally, the light absorption associated with the binding of the antigen to the particle-attached antibody was measured by spectrophotometry. The 23S rRNA gene, which is conserved in all Staphylococcus spp., was also used as a control. The accuracy of the test was monitored using a serial dilution (10⁻⁶) of an overnight cell culture of Staphylococcus spp. (OD600: 0.02 ≈ 10⁷ cells); this showed that the sensitivity of PCR is 10 bacteria per ml within a few hours. Result: The results indicate that the sensor detects toxin down to a 10⁻⁴ dilution. Additionally, the sensitivity of the sensors was examined after 60 days: the sensor gave confirmatory results up to day 56, and its response started to decrease thereafter. Conclusions: Compared with conventional methods such as culture and biotechnological methods (e.g., the polymerase chain reaction), the advantages of the practical nanobiosensor are its accuracy, sensitivity and specificity; in addition, it reduces the analysis time from hours to about 30 minutes.

Keywords: exotoxin, nanobiosensor, recognition, Staphylococcus aureus

Procedia PDF Downloads 387
39 A Study on the Current State and Policy Implications of Engineer Operated National Research Facility and Equipment in Korea

Authors: Chang-Yong Kim, Dong-Woo Kim, Whon-Hyun Lee, Yong-Joo Kim, Tae-Won Chung, Kyung-Mi Lee, Han-Sol Kim, Eun-Joo Lee, Euh Duck Jeong

Abstract:

In the past, together with the annual increase in investment in national R&D projects, the government's budget investment in research facilities and equipment (FE) has been steadily maintained. In major developed countries, R&D and its supporting work are distinguished and professionalized in their own right, with training systems for facility and equipment operation and maintenance personnel. In Korea, however, research personnel conduct both research and equipment operation, leading to quantitative shortages of operational manpower and qualitative problems caused by insecure employment, such as maintenance issues or the loss of effectiveness of necessary equipment. Therefore, the purpose of this study was to identify the current status of engineer-operated national research FE in Korea, based on the results of a 2017 survey of domestic facilities, and to suggest policy implications. A total of 395 research institutes that carried out national R&D projects and had registered more than two pieces of FE since 2005 were surveyed online for two months. The survey showed that these 395 non-profit research facilities were operating 45,155 pieces of equipment with 2,211 engineers, meaning that each engineer had to manage about 21 items of FE. Among these engineers, 43.9% were employed in temporary positions, including indefinite-term contracts, and their salary and treatment were relatively low compared with researchers. In short, engineers who focus exclusively on managing and maintaining FE play a very important role in increasing research immersion and obtaining highly reliable research results. Moreover, institutional efforts and government support for securing operators are severely lacking, as domestic national R&D policies are mostly focused on researchers. The 2017 survey on FE also showed that 48.1% of all research facilities did not employ any engineers. To address the shortage of engineer personnel, the government started a pilot project in 2012 and has run only the 'research equipment engineer training project' since 2013. Considering the above, a national long-term manpower training plan that addresses the quantitative and qualitative shortage of operators needs to be established through a study of the current situation. In conclusion, the findings indicate that such a plan should not only connect training to employment but also include measures to create additional jobs by re-defining and re-establishing operator roles and improving working conditions.

Keywords: engineer, Korea, maintenance, operation, research facilities and equipment

Procedia PDF Downloads 191
38 A New Method Separating Relevant Features from Irrelevant Ones Using Fuzzy and OWA Operator Techniques

Authors: Imed Feki, Faouzi Msahli

Abstract:

Selection of relevant parameters from a high-dimensional process operation setting space is a problem frequently encountered in industrial process modelling. This paper presents a method for selecting the most relevant fabric physical parameters for each sensory quality feature. The proposed relevancy criterion has been developed using two approaches. The first utilizes a fuzzy sensitivity criterion that exploits, from experimental data, the relationship between the physical parameters and all the sensory quality features for each evaluator; an OWA aggregation procedure is then applied to aggregate the ranking lists provided by the different evaluators. In the second approach, another panel of experts provides ranking lists of physical features according to their professional knowledge. By applying OWA and a fuzzy aggregation model, the data-sensitivity-based ranking list and the knowledge-based ranking list are combined using our proposed percolation technique to determine the final ranking list. The key issue of the percolation technique is to filter the relevant features automatically and objectively by creating a gap between the scores of relevant and irrelevant parameters. It generates thresholds automatically, which effectively reduces the human subjectivity and arbitrariness involved in manually chosen thresholds. For a specific sensory descriptor, the threshold is defined systematically by iteratively aggregating (n times) the ranking lists generated by the OWA and fuzzy models, according to a specific algorithm. Having applied the percolation technique to a real example, a well-known finished textile product (stonewashed denim, usually considered the most important quality criterion in jeans evaluation), we separate the relevant physical features from the irrelevant ones for each sensory descriptor. The originality and performance of the proposed relevant-feature-selection method are shown by the variability in the number of physical features in the set of selected relevant parameters. Instead of selecting identical numbers of features with a predefined threshold, the proposed method adapts to the specific nature of the complex relations between sensory descriptors and physical features, proposing lists of relevant features of different sizes for different descriptors. To obtain more reliable results for the selection of relevant physical features, the percolation technique is applied to combine the fuzzy global relevancy and OWA global relevancy criteria, so as to clearly distinguish the scores of the relevant physical features from those of the irrelevant ones.
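As an illustration of the OWA aggregation step described above, the following minimal sketch applies ordered weights to per-evaluator relevancy scores; the feature names, scores and weights are placeholders for illustration only, not values from the study.

```python
# Minimal OWA (Ordered Weighted Averaging) aggregation sketch.
# Scores, feature names and weights below are hypothetical.
import numpy as np

def owa(scores, weights):
    """OWA: weights are applied to the scores sorted in descending order,
    not to particular evaluators."""
    scores = np.sort(np.asarray(scores, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "OWA weights must sum to 1"
    return float(np.dot(scores, weights))

# Relevancy of two physical parameters as scored by four evaluators (hypothetical).
evaluator_scores = {"hand_stiffness": [0.9, 0.7, 0.8, 0.6],
                    "fabric_weight":  [0.3, 0.2, 0.4, 0.1]}
w = [0.4, 0.3, 0.2, 0.1]   # weights biased towards the highest scores

global_relevancy = {p: owa(s, w) for p, s in evaluator_scores.items()}
print(global_relevancy)     # parameters can then be ranked by this aggregated value
```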

Keywords: data sensitivity, feature selection, fuzzy logic, OWA operators, percolation technique

Procedia PDF Downloads 605
37 Development of a Test Plant for Parabolic Trough Solar Collectors Characterization

Authors: Nelson Ponce Jr., Jonas R. Gazoli, Alessandro Sete, Roberto M. G. Velásquez, Valério L. Borges, Moacir A. S. de Andrade

Abstract:

The search for increased efficiency in generation systems has become increasingly important in recent years in order to reduce greenhouse gas emissions and global warming. For clean energy sources, such as generation systems using concentrated solar power technology, this efficiency improvement translates into a lower investment per kW and better project viability. For parabolic trough solar concentrators in particular, performance is strongly linked to the geometric precision of assembly and to the individual efficiencies of the main components, such as the parabolic mirrors and receiver tubes. Accurate efficiency analysis should therefore be conducted empirically, under mounting and operating conditions similar to those observed in the field. The Brazilian power generation and distribution company Eletrobras Furnas, through the R&D program of the National Agency of Electrical Energy, has developed a plant for testing parabolic trough concentrators located in Aparecida de Goiânia, in the state of Goiás, Brazil. The main objective of this test plant is the characterization of the prototype concentrator being developed by the company in partnership with Eudora Energia, seeking to optimize it to match or exceed the efficiency of concentrators of this type already known commercially. The test plant is a closed piping system in which a pump circulates a heat transfer fluid (HTF) through the concentrator being characterized. A flow meter and two temperature transmitters, installed at the inlet and outlet of the concentrator, record the parameters needed to determine the power absorbed by the system and then calculate its efficiency from the direct solar irradiation available during the test period. After the HTF gains heat in the concentrator, it flows through heat exchangers that dissipate the acquired energy to the environment, the goal being to keep the concentrator inlet temperature constant throughout the desired test period. The plant performs the tests autonomously: the operator enters the HTF flow rate, the desired concentrator inlet temperature and the test time into the control system. This paper presents the methodology employed for design and operation, as well as the instrumentation needed for the development of a parabolic trough test plant, serving as a guideline for standardizing such facilities.
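The quantities recorded by the plant (flow rate, inlet/outlet temperatures, direct irradiance) are sufficient to compute an instantaneous collector efficiency. The sketch below shows one common way to do this; all numerical values, including the HTF specific heat and aperture area, are illustrative placeholders, not data from the Aparecida de Goiânia plant.

```python
# Illustrative instantaneous efficiency calculation: thermal power gained by the
# HTF divided by the solar power incident on the collector aperture.

def collector_efficiency(m_dot, cp, t_in, t_out, dni, aperture_area):
    q_gain = m_dot * cp * (t_out - t_in)   # W, power absorbed by the HTF
    q_solar = dni * aperture_area          # W, direct solar power on the aperture
    return q_gain / q_solar

eta = collector_efficiency(m_dot=1.2,           # kg/s, from the flow meter
                           cp=2300.0,           # J/(kg*K), HTF specific heat (placeholder)
                           t_in=180.0,          # deg C, inlet transmitter
                           t_out=195.0,         # deg C, outlet transmitter
                           dni=850.0,           # W/m^2, direct normal irradiance
                           aperture_area=60.0)  # m^2, collector aperture (placeholder)
print(f"instantaneous efficiency = {eta:.2%}")
```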

Keywords: parabolic trough, concentrated solar power, CSP, solar power, test plant, energy efficiency, performance characterization, renewable energy

Procedia PDF Downloads 119
36 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications

Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon

Abstract:

The focus in the automotive industry is to reduce interaction between human operators and machines so that manufacturing becomes more automated and safer. The aim is to lower part cost and construction time as well as defects in the parts, which sometimes occur due to the physical limitations of human operators. The move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes placed in position by a robotic deposition head, a process described as Automated Fibre Placement (AFP). AFP is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together requires a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand-stitch method which, as the name suggests, requires human input. This investigation explores three methods for automated splicing: adhesive, binding and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent already impregnated onto the tape, activated through the application of heat. The stitching method is used as a baseline against which to compare the new splicing methods. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment and splicing parameters; these were then tested in tension using a tensile testing machine. Initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique, examining the effect of temperature and overlap on the strength of the splice. The optimum splicing temperature was found to be at the higher end of the activation range of the binding agent, 100 °C, and the optimum overlap was 25 mm; no improvement in bond strength was found from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods with the stitched-bond baseline. The addition of an adhesive was found to be the best splicing method, achieving a maximum load of over 500 N compared with the 26 N achieved by a stitching splice and 94 N by the binding method.

Keywords: analysis, automated fibre placement, high speed, splicing

Procedia PDF Downloads 155
35 Robust Electrical Segmentation for Zone Coherency Delimitation Base on Multiplex Graph Community Detection

Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad

Abstract:

The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, the increasing integration of intermittent renewable energy sources brings a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution is electrical segmentation: creating coherence zones in which electrical disturbances mainly remain within the zone. By means of coherent electrical zones, it becomes possible to focus solely on a sub-zone, reducing the range of possibilities and helping to manage uncertainty; this allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various problems, such as electrical control, minimizing electrical losses, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to constant changes in electricity generation and consumption, which are reflected in variations of the graph structure as well as changes in line flows. One approach to creating a resilient segmentation is to design zones that are robust under various circumstances. This can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid; resilient segmentation can then be achieved by conducting community detection on this multiplex graph. The multiplex graph is composed of multiple layers, all of which share the same set of vertices. Our proposal is a model that uses a unified representation to compute a flattening of all layers. This unified representation can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation with a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-cluster electrical perturbation and low variance of electrical perturbation, and our experiments show when, and in which contexts, robust electrical segmentation is beneficial.
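The following minimal sketch conveys the general idea of flattening a multiplex graph and extracting K connected components as candidate zones; it is an illustrative simplification (summed edge weights, weakest-edge pruning), not the authors' exact penalisation scheme, and the layer graphs and K are assumed inputs.

```python
# Sketch: flatten the layers of a multiplex graph into one weighted graph, then
# prune the weakest ties until K connected components (candidate zones) remain.
import networkx as nx

def flatten(layers):
    """Sum edge weights over all layers; all layers share the same vertex set."""
    flat = nx.Graph()
    flat.add_nodes_from(layers[0].nodes)
    for g in layers:
        for u, v, data in g.edges(data=True):
            w = data.get("weight", 1.0)
            if flat.has_edge(u, v):
                flat[u][v]["weight"] += w
            else:
                flat.add_edge(u, v, weight=w)
    return flat

def segment(flat, k):
    """Remove the weakest flattened edges until the graph splits into k zones."""
    g = flat.copy()
    for u, v, _ in sorted(g.edges(data=True), key=lambda e: e[2]["weight"]):
        if nx.number_connected_components(g) >= k:
            break
        g.remove_edge(u, v)
    return list(nx.connected_components(g))

# layers = [graph_for_situation_1, graph_for_situation_2, ...]  # one layer per grid situation
# zones = segment(flatten(layers), k=5)
```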

Keywords: community detection, electrical segmentation, multiplex graph, power grid

Procedia PDF Downloads 79
34 The Role Played by Awareness and Complexity through the Use of a Logistic Regression Analysis

Authors: Yari Vecchio, Margherita Masi, Jorgelina Di Pasquale

Abstract:

Adoption of Precision Agriculture (PA) takes place within a multidimensional and complex scenario. The process of adopting innovations is inherently complex and social, influenced by other producers, change agents, social norms and organizational pressure. Complexity depends on factors that interact and influence the decision to adopt: farm and operator characteristics, as well as the organizational, informational and agro-ecological context, directly affect adoption. This influence has been studied to measure drivers and to clarify the 'bottlenecks' of the adoption of agricultural innovation. The decision-making process is a multistage procedure in which an individual passes from first hearing about the technology to final adoption. Awareness is the initial stage and represents the moment at which an individual learns about the existence of the technology; it is a precondition to adoption, and the 'static' concept of adoption has thus been overcome. Treating awareness as a precondition avoids the erroneous evaluations that arise from carrying out analyses on a population that is only partly aware of the technologies. In support of this, the present study puts forward an empirical analysis among Italian farmers that considers awareness as a prerequisite for adoption. The purpose of the present work is to analyze both the factors that affect the probability of adopting and the determinants that drive an aware individual not to adopt. Data were collected through a questionnaire submitted in November 2017. A preliminary descriptive analysis showed that high levels of adoption are found among younger, better-educated farmers, with high information intensity, large farm size and high labor intensity, and whose perception of the complexity of the adoption process is lower. The use of a logit model makes it possible to appreciate the weight played by labor intensity and by the complexity perceived by the potential adopter in the PA adoption process. All these findings suggest important policy implications: measures dedicated to promoting innovation will need to be more specific for each phase of the adoption process. Specifically, they should increase awareness of PA tools and foster the dissemination of information to reduce the perceived complexity of the adoption process. These implications are particularly important in Europe, where a reform of the Common Agricultural Policy oriented towards innovation has been announced. In this context, measures supporting innovation should consider the relationship between the various organizational and structural dimensions of European agriculture and innovation approaches.
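A schematic logit specification of the kind used above would model P(adopt = 1 | x) = 1 / (1 + exp(-(b0 + b1·labour_intensity + b2·perceived_complexity + ...))). The sketch below fits such a model on synthetic data; the variable names and data are placeholders, not the survey variables or results.

```python
# Illustrative logit (logistic regression) sketch on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.normal(size=n),   # labour intensity (standardised, hypothetical)
    rng.normal(size=n),   # perceived complexity of the adoption process (hypothetical)
    rng.normal(size=n),   # information intensity (hypothetical)
])
# Synthetic binary response just to make the example runnable.
logits = 0.8 * X[:, 0] - 1.1 * X[:, 1] + 0.5 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary())        # coefficient signs give the direction of each driver
print(np.exp(model.params))   # odds ratios
```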

Keywords: adoption, awareness, complexity, precision agriculture

Procedia PDF Downloads 138
33 DTI Connectome Changes in the Acute Phase of Aneurysmal Subarachnoid Hemorrhage Improve Outcome Classification

Authors: Sarah E. Nelson, Casey Weiner, Alexander Sigmon, Jun Hua, Haris I. Sair, Jose I. Suarez, Robert D. Stevens

Abstract:

Graph-theoretical information from structural connectomes indicated significant connectivity changes and improved acute prognostication in a Random Forest (RF) model in aneurysmal subarachnoid hemorrhage (aSAH), a condition that can lead to significant morbidity and mortality and has traditionally been fraught with poor methods for predicting outcome. This study's hypothesis was that structural connectivity changes occur in canonical brain networks of acute aSAH patients and that these changes are associated with functional outcome at six months. In a prospective cohort of patients admitted to a single institution for management of acute aSAH, patients underwent diffusion tensor imaging (DTI) as part of a multimodal MRI scan. A weighted undirected structural connectome was created from each patient's images using Constant Solid Angle (CSA) tractography, with 176 regions of interest (ROIs) defined by the Johns Hopkins Eve atlas. ROIs were sorted into four networks: Default Mode Network, Executive Control Network, Salience Network, and Whole Brain. The resulting nodes and edges were characterized using graph-theoretic features, including Node Strength (NS), Betweenness Centrality (BC), Network Degree (ND), and Connectedness (C). Clinical features (including demographics and the World Federation of Neurosurgical Societies scale) and graph features were used separately and in combination to train RF and Logistic Regression classifiers to predict two outcomes: dichotomized modified Rankin Score (mRS) at discharge and at six months after discharge (favorable outcome mRS 0-2, unfavorable outcome mRS 3-6). A total of 56 aSAH patients underwent DTI a median of 7 days (IQR = 8.5) after admission. The best performing model (RF), combining clinical and DTI graph features, had a mean Area Under the Receiver Operating Characteristic Curve (AUROC) of 0.88 ± 0.00 and Area Under the Precision Recall Curve (AUPRC) of 0.95 ± 0.00 over 500 trials. The combined model performed better than the clinical model alone (AUROC 0.81 ± 0.01, AUPRC 0.91 ± 0.00). The highest-ranked graph features for prediction were NS, BC, and ND. These results indicate reorganization of the connectome early after aSAH. The performance of clinical prognostic models was increased significantly by the inclusion of DTI-derived graph connectivity metrics. This methodology could significantly improve prognostication of aSAH.
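The overall pipeline (graph features per connectome, then a Random Forest evaluated by AUROC) can be sketched as below. The feature summaries, data loading and cross-validation scheme are illustrative assumptions, not the study's exact definitions.

```python
# Sketch: summarise each patient's weighted connectome with simple graph metrics
# and train a Random Forest on them, scored by ROC AUC.
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def graph_features(adjacency):
    """One structural connectome (weighted adjacency matrix) -> feature vector."""
    g = nx.from_numpy_array(adjacency)
    strength = [d for _, d in g.degree(weight="weight")]          # node strength (NS)
    # Note: betweenness treats weights as distances; a real analysis would convert
    # connection strengths to lengths first.
    bc = list(nx.betweenness_centrality(g, weight="weight").values())
    degree = [d for _, d in g.degree()]                           # network degree (ND)
    connected = nx.number_connected_components(g)                 # connectedness proxy (C)
    return [np.mean(strength), np.mean(bc), np.mean(degree), connected]

# connectomes: list of 176x176 weighted adjacency matrices, one per patient
# outcomes:    dichotomised mRS (0 = favourable, 1 = unfavourable)
def evaluate(connectomes, outcomes, clinical=None):
    X = np.array([graph_features(a) for a in connectomes])
    if clinical is not None:                                       # optional clinical covariates
        X = np.hstack([X, clinical])
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    return cross_val_score(clf, X, outcomes, cv=5, scoring="roc_auc")
```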

Keywords: connectomics, diffusion tensor imaging, graph theory, machine learning, subarachnoid hemorrhage

Procedia PDF Downloads 190
32 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth

Authors: Hemant Upadhyay, Tarun Kumar Kundu

Abstract:

It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage and heat transfer between the various phases in the blast furnace hearth, in order to maintain a stable and efficient blast furnace operation. Abnormal drainage behavior may lead to a high liquid build-up in the hearth, and operational problems such as pressurization, low wind intake and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit at which the hearth coke and deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in BF operation; it should be kept at an optimal level to obtain the desired product quality and a stable BF performance. Direct measurement of the above is not possible because of the hostile conditions in the hearth, with chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal/slag accumulation and temperature during tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and heat transfer between metal and slag, metal and solids, slag and solids, as well as among the various zones of metal and slag themselves. For modeling purposes, the BF hearth is considered a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed to be thermally saturated. A set of generic mass balance equations gives the amount of metal and slag entering the hearth. A small drainage opening (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids from the tap hole is computed taking into account the amount of both phases accumulated, their levels in the hearth, the pressure from gases in the furnace, and the erosion behavior of the tap hole itself. Heat transfer equations provide the exchange of heat between the various layers of liquid metal and slag, and the heat loss to the cooling system through the refractories. Based on all this information, a dynamic simulation is carried out which provides real-time information on liquid accumulation in the hearth before and during tapping and on the drainage rate and its variation; it predicts critical event timings during tapping and the expected tapping temperature of metal and slag at preset time intervals. The model is in use at JSPL India BF-II, and its output is regularly cross-checked with actual tapping data, with which it is in good agreement.
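The core accumulation/drainage balance amounts to integrating dV/dt = production - drainage(V). The sketch below shows a deliberately simplified version of that balance; coefficients, rates and the level-proportional drainage law are placeholders, not the plant model, which also accounts for gas pressure, both liquid phases and tap hole erosion.

```python
# Highly simplified Euler integration of the hearth liquid balance during tapping.

def simulate_tapping(prod_rate, drain_coeff, v0, dt=1.0, t_end=7200.0):
    """dV/dt = production - drainage(V), with drainage taken proportional to the
    accumulated volume (a crude stand-in for level-driven taphole outflow)."""
    t, v = 0.0, v0
    history = []
    while t < t_end:
        drainage = drain_coeff * v                     # m^3/s, placeholder outflow law
        v = max(v + (prod_rate - drainage) * dt, 0.0)  # liquid volume in the hearth
        history.append((t, v))
        t += dt
    return history

levels = simulate_tapping(prod_rate=0.08, drain_coeff=2.0e-3, v0=60.0)
print(levels[-1])   # (time, remaining liquid volume) at the end of the simulated tap
```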

Keywords: blast furnace, hearth, deadman, hot metal

Procedia PDF Downloads 186
31 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia

Authors: Jun Won Kim

Abstract:

Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to our best knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing resting-state TGC between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naïve (first episode) or had not been taking psychoactive drugs for one month before the study, the influence of medications could be excluded. Six frequency bands were defined for spectral analyses: delta (1–4 Hz), theta (4–8 Hz), slow alpha (8–10 Hz), fast alpha (10–13.5 Hz), beta (13.5–30 Hz), and gamma (30–80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the Signal Processing Toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the ability of the TGC data to discriminate schizophrenia. Results: The patients with schizophrenia showed a significant increase in resting-state TGC at all electrodes. The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, with an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which carries information about neuronal interactions in the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
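The abstract does not state which phase-amplitude coupling estimator was used, so the sketch below shows one common choice (a Canolty-style mean vector length) for a single EEG channel; the filter settings, sampling rate and toy signal are placeholders.

```python
# Sketch of theta-phase gamma-amplitude coupling via the mean vector length.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def theta_gamma_coupling(eeg, fs, theta=(4, 8), gamma=(30, 80)):
    phase = np.angle(hilbert(bandpass(eeg, *theta, fs)))   # instantaneous theta phase
    amp = np.abs(hilbert(bandpass(eeg, *gamma, fs)))       # gamma amplitude envelope
    return np.abs(np.mean(amp * np.exp(1j * phase)))       # mean vector length

fs = 500.0                                   # Hz, placeholder sampling rate
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 6 * t) + 0.1 * np.random.randn(t.size)   # toy signal
print(theta_gamma_coupling(eeg, fs))
```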

Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility

Procedia PDF Downloads 144
30 Development of a Novel Clinical Screening Tool, Using the BSGE Pain Questionnaire, Clinical Examination and Ultrasound to Predict the Severity of Endometriosis Prior to Laparoscopic Surgery

Authors: Marlin Mubarak

Abstract:

Background: Endometriosis is a complex, disabling disease mainly affecting young females of reproductive age. The aim of this project is to generate a diagnostic model to predict the severity and stage of endometriosis prior to laparoscopic surgery. This will help to improve the pre-operative diagnostic accuracy of stage 3 and 4 endometriosis and, as a result, to refer relevant women to a specialist centre for complex laparoscopic surgery. The model is based on the British Society for Gynaecological Endoscopy (BSGE) pain questionnaire, clinical examination and ultrasound scan. Design: This is a prospective, observational study in which women completed the BSGE pain questionnaire, a BSGE requirement. As part of the routine preoperative assessment, patients also had a routine ultrasound scan, and when recto-vaginal or deep infiltrating endometriosis was suspected, an MRI was performed. Setting: Luton & Dunstable University Hospital. Patients: Symptomatic women (n = 56) scheduled for laparoscopy due to pelvic pain. Ages ranged from 17 to 52 years (mean 33.8 years, SD 8.7 years). Interventions: None outside the recognised and established endometriosis centre protocol set up by the BSGE. Main Outcome Measure(s): Sensitivity and specificity of endometriosis diagnosis predicted by symptoms based on the BSGE pain questionnaire, clinical examination and imaging. Findings: The prevalence of diagnosed endometriosis was 76.8% and the prevalence of advanced stage was 55.4%. Deep infiltrating endometriosis (DIE) in various locations was diagnosed in 32/56 women (57.1%), some with DIE involving several locations. Logistic regression analysis was performed on 36 clinical variables to create a simple clinical prediction model. After creating the scoring system using variables with P < 0.05, the model was applied to the whole dataset. The sensitivity was 83.87% and the specificity 96%. The positive likelihood ratio was 20.97 and the negative likelihood ratio 0.17, indicating that the model has good predictive value and could be useful in predicting advanced-stage endometriosis. Conclusions: This is a hypothesis-generating project with one operator, but future research would provide validation of the model and establish its usefulness in the general setting. Predictive tools based on such a model could help organise the appropriate investigations in clinical practice, reduce risks associated with surgery and improve outcomes. It could be of value for future research to standardise the assessment of women presenting with pelvic pain. The model needs further testing in a general setting to assess whether the initial results are reproducible.
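The quoted likelihood ratios follow directly from the reported sensitivity and specificity, as the short check below illustrates.

```python
# Likelihood ratios recomputed from the reported sensitivity (83.87%) and specificity (96%).
sensitivity = 0.8387
specificity = 0.96

lr_positive = sensitivity / (1 - specificity)    # ≈ 20.97, as reported
lr_negative = (1 - sensitivity) / specificity    # ≈ 0.17, as reported
print(round(lr_positive, 2), round(lr_negative, 2))
```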

Keywords: deep endometriosis, endometriosis, minimally invasive, MRI, ultrasound

Procedia PDF Downloads 355
29 Blockchain Platform Configuration for MyData Operator in Digital and Connected Health

Authors: Minna Pikkarainen, Yueqiang Xu

Abstract:

The integration of digital technology with existing healthcare processes has been painfully slow; a huge gap exists between the strictly regulated field of official medical care and the quickly moving field of health and wellness technology. We claim that the promises of preventive healthcare can only be fulfilled when this gap is closed and health care and self-care become a seamless continuum: 'correct information, in the correct hands, at the correct time, allowing individuals and professionals to make better decisions', which we call the connected health approach. Currently, issues related to security, privacy, consumer consent and data sharing are hindering the implementation of this new paradigm of healthcare. This could be solved by following the MyData principles, which state that individuals should have the right and practical means to manage their data and privacy. A MyData infrastructure enables decentralized management of personal data, improves interoperability, makes it easier for companies to comply with tightening data protection regulations, and allows individuals to change service providers without proprietary data lock-ins. This paper tackles today's unprecedented challenge of enabling and stimulating multiple healthcare data providers and stakeholders to participate more actively in the digital health ecosystem. First, the paper systematically proposes the MyData approach for the healthcare and preventive health data ecosystem. In this research, the work is targeted at health and wellness ecosystems. Each ecosystem consists of key actors, such as 1) the individual (citizen or professional controlling/using the services), i.e. the data subject, 2) services providing personal data (e.g. startups providing data collection apps or devices), 3) health and wellness services utilizing the aforementioned data and 4) services authorizing access to this data under the individual's explicit consent. Second, the research extends the existing four archetypes of orchestrator-driven healthcare data business models and proposes a fifth type, the MyData Blockchain Platform. This new architecture is developed using the Action Design Research approach, a prominent research methodology in the information systems domain. The key novelty of the paper is to expand the health data value chain architecture and design from centralization and pseudo-decentralization to full decentralization, enabled by blockchain: the MyData blockchain platform. The study not only broadens the healthcare informatics literature but also contributes to the theoretical development of the digital healthcare and blockchain research domains with a systemic approach.

Keywords: blockchain, health data, platform, action design

Procedia PDF Downloads 100
28 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in planning, scheduling, and control of emergency response operations, especially the rescue and evacuation of people from the danger zone of marine accidents, has increased dramatically. Until the survivors (called 'targets') are found and saved, losses or damage may occur whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost under restrictions on the number of people saved within the allowable response time. We consider the special situation in which the autonomous mobile robots (AMRs), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, being guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments the AMR's search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered ('overlooked') by the AMR's sensors even though the AMR is in the close neighborhood of the target, and (ii) a 'false-positive' detection error, also known as a 'false alarm', in which a clean place or area is wrongly classified by the AMR's sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies. A specificity of the considered operations research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before selecting the next location. We provide a fast approximation algorithm for finding the AMR route, adopting a greedy search strategy in which, at each step, the on-board computer computes a current search-effectiveness value for each location in the zone and then searches the location with the highest search-effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
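The greedy routing idea can be sketched as follows. The effectiveness function here simply discounts the prior target probability by the false-negative rate and a travel-cost term; it is an illustrative stand-in, since the paper's criterion also depends on the full search history, and all names and numbers are placeholders.

```python
# Sketch of a greedy search-effectiveness route for one AMR.
def greedy_route(locations, p_target, alpha, travel_cost, steps):
    """locations: iterable of ids; p_target[l]: prior probability that a survivor
    is at l; alpha: false-negative rate of the on-board sensors."""
    remaining = set(locations)
    route, current = [], None
    for _ in range(min(steps, len(remaining))):
        def effectiveness(loc):
            p_detect = p_target[loc] * (1 - alpha)               # chance of a true detection
            cost = travel_cost(current, loc) if current is not None else 0.0
            return p_detect / (1.0 + cost)                       # discount by travel effort
        best = max(remaining, key=effectiveness)
        route.append(best)
        remaining.discard(best)
        current = best
    return route

# Example: three candidate grid cells with Manhattan travel cost.
cells = [(0, 0), (0, 3), (2, 1)]
priors = {(0, 0): 0.2, (0, 3): 0.5, (2, 1): 0.3}
dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(greedy_route(cells, priors, alpha=0.1, travel_cost=dist, steps=3))
```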

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 173
27 Necessity for a Standardized Occupational Health and Safety Management System: An Exploratory Study from the Danish Offshore Wind Sector

Authors: Dewan Ahsan

Abstract:

Denmark is well ahead in generating electricity from renewable sources, and the offshore wind sector plays the pivotal role in achieving this target. Despite the rapid growth of the offshore wind sector in Denmark, there is still a lack of synchronization in OHS (occupational health and safety) regulation and standards. Therefore, this paper attempts to ascertain: i) what are the major challenges of company-specific OHS standards? ii) why does the offshore wind industry need a standardized OHS management system? and iii) who can play the key role in this process? To achieve these objectives, this research applies interview and survey techniques. The study has identified several key challenges in the OHS management system: gaps in coordination and communication among the stakeholders, gaps in incident reporting systems, the absence of a harmonized OHS standard, and a blame culture. Furthermore, the research has identified eleven key stakeholders who are actively involved in the offshore wind business in Denmark. The relationships among these stakeholders are very complex, especially between operators and sub-contractors. The responding technicians are concerned with compliance with various third-party OHS standards (e.g. ISO 31000, ISO 29400, the Good Practice Guidelines by G+) which are applied by different offshore companies; on top of these standards, operators also impose their own OHS standards. From the technicians' point of view, many of these standards are not even specific to the offshore wind sector, so it is a big challenge for technicians and sub-contractors to comply with different company-specific standards, which also raises the price of the services they offer to the operators. For instance, when a sub-contractor competes in a bidding process, it must fulfil a number of OHS requirements (which demand much extra documentation) set by the individual operator and/or the turbine supplier. From the sub-contractors' point of view, this extra documentation consumes too much time when preparing the bidding documents, and they also need to train their employees to pass the specific OHS certification courses demanded by individual clients and individual projects. The sub-contractors argued that in many cases this extra documentation and these OHS certificates are inessential for ensuring quality service, so a standardized OHS management procedure (applicable to all clients) could easily solve this problem. In conclusion, this study highlights that i) development of a harmonized OHS standard applicable to all operators and turbine suppliers, ii) encouragement of technicians' active participation in OHS management, iii) development of good safety leadership, and iv) sharing of experiences among the stakeholders (especially operators-operators-subcontractors) are the most vital strategies for overcoming the existing challenges and achieving the goal of 'zero accident/harm' in the offshore wind industry.

Keywords: green energy, offshore, safety, Denmark

Procedia PDF Downloads 215
26 Open Joint Surgery for Temporomandibular Joint Internal Derangement: Wilkes Stages III-V

Authors: T. N. Goh, M. Hashmi, O. Hussain

Abstract:

Temporomandibular joint (TMJ) dysfunction (TMD) is a condition that may affect patients via restricted mouth opening, significant pain during normal function, and/or reproducible joint noise. TMD includes myofascial pain, TMJ functional derangements (internal derangement, dislocation), and TMJ degenerative/inflammatory joint disease. Internal derangement (ID) is the most common cause of TMD-related clicking and locking. These patients are managed in a stepwise approach, from patient education (homecare advice and analgesia), splint therapy, physiotherapy and botulinum toxin treatment, to arthrocentesis. Arthrotomy is offered when the aforementioned treatment options fail to alleviate symptoms and improve quality of life. The aim of this prospective study was to review the outcomes of open jaw joint surgery in TMD patients. Patients who presented from 2015-2022 at the Oral and Maxillofacial Surgery Department in the Doncaster NHS Foundation Trust, UK, with a Wilkes classification of III-V were included. These patients underwent either i) discopexy with bone-anchoring suture (9); ii) intrapositional temporalis flap (ITF) with bone-anchoring suture (3); iii) eminoplasty and discopexy with suturing to the capsule (3); iv) discectomy + ITF with bone-anchoring suture (1); v) discoplasty + bone-anchoring suture (1); or vi) ITF alone (1). Maximum incisal opening (MIO) was assessed pre-operatively and at each follow-up. The pain score, determined via the visual analogue scale (VAS, with 0 being no pain and 10 the worst pain), was also recorded. A total of 18 eligible patients were identified, with a mean age of 45 (range 22-79), of whom 16 were female. The patients were scored by Wilkes classification as III (14), IV (1), or V (4). Twelve patients had anterior disc displacement without reduction (66%) and six had degenerative/arthritic changes (33%) of the TMJ. The open joint procedure resulted in an increase in MIO and a reduction in pain VAS for the majority of patients, across all Wilkes classifications. Pre-procedural MIO was 22.9 ± 7.4 mm and VAS was 7.8 ± 1.5. At three months post-procedure there was an increase in MIO to 34.4 ± 10.4 mm (p < 0.01) and a decrease in VAS to 1.5 ± 2.9 (p < 0.01). Three patients were lost to follow-up before six months. Six were discharged at the six-month review and five patients were discharged at the 12-month review, as they were asymptomatic with good mouth opening. Four patients are still attending for annual botulinum toxin treatment. Two patients (Wilkes III and V) subsequently underwent TMJ replacement (11%). One of these patients (Wilkes III) initially improved to an MIO of 40 mm but subsequently relapsed to less than 20 mm due to lack of compliance with the jaw rehabilitation device post-operatively. Clinical improvements were found in 89% of patients within the study group, with a return to a near-normal MIO range and a reduced pain score. Intraoperatively, the operator found the bone-anchoring suture used for discopexy/discoplasty more secure than the soft-tissue anchoring suturing technique.

Keywords: bone anchoring suture, open temporomandibular joint surgery, temporomandibular joint, temporomandibular joint dysfunction

Procedia PDF Downloads 106
25 Electric Vehicle Fleet Operators in the Energy Market - Feasibility and Effects on the Electricity Grid

Authors: Benjamin Blat Belmonte, Stephan Rinderknecht

Abstract:

The transition to electric vehicles (EVs) stands at the forefront of innovative strategies designed to address environmental concerns and reduce fossil fuel dependency. As the number of EVs on the roads increases, so too does the potential for their integration into energy markets. This research dives deep into the transformative possibilities of using electric vehicle fleets, specifically electric bus fleets, not just as consumers but as active participants in the energy market. This paper investigates the feasibility and grid effects of electric vehicle fleet operators in the energy market. Our objective centers around a comprehensive exploration of the sector coupling domain, with an emphasis on the economic potential in both electricity and balancing markets. Methodologically, our approach combines data mining techniques with thorough pre-processing, pulling from a rich repository of electricity and balancing market data. Our findings are grounded in the actual operational realities of the bus fleet operator in Darmstadt, Germany. We employ a Mixed Integer Linear Programming (MILP) approach, with the bulk of the computations being processed on the High-Performance Computing (HPC) platform ‘Lichtenbergcluster’. Our findings underscore the compelling economic potential of EV fleets in the energy market. With electric buses becoming more prevalent, the considerable size of these fleets, paired with their substantial battery capacity, opens up new horizons for energy market participation. Notably, our research reveals that economic viability is not the sole advantage. Participating actively in the energy market also translates into pronounced positive effects on grid stabilization. Essentially, EV fleet operators can serve a dual purpose: facilitating transport while simultaneously playing an instrumental role in enhancing grid reliability and resilience. This research highlights the symbiotic relationship between the growth of EV fleets and the stabilization of the energy grid. Such systems could lead to both commercial and ecological advantages, reinforcing the value of electric bus fleets in the broader landscape of sustainable energy solutions. In conclusion, the electrification of transport offers more than just a means to reduce local greenhouse gas emissions. By positioning electric vehicle fleet operators as active participants in the energy market, there lies a powerful opportunity to drive forward the energy transition. This study serves as a testament to the synergistic potential of EV fleets in bolstering both economic viability and grid stabilization, signaling a promising trajectory for future sector coupling endeavors.
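The scheduling problem at the heart of such a study can be illustrated with a deliberately small MILP: one bus battery buying and selling energy against hourly prices while being sufficiently charged before service. The prices, capacities, efficiency and horizon below are placeholders, and the sketch omits the balancing market and fleet dimension of the real model.

```python
# Toy MILP sketch (PuLP) of price-driven charging/discharging for one bus battery,
# assuming 1-hour slots so kW and kWh coincide numerically.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

prices = [0.30, 0.25, 0.10, 0.08, 0.12, 0.35]   # EUR/kWh for 6 hourly slots (placeholder)
cap, p_max, eta = 300.0, 150.0, 0.95            # kWh, kW, charger efficiency (placeholder)
soc0, soc_required = 100.0, 280.0               # kWh at start / required before service

prob = LpProblem("bus_charging", LpMinimize)
charge = [LpVariable(f"c{t}", 0, p_max) for t in range(len(prices))]
discharge = [LpVariable(f"d{t}", 0, p_max) for t in range(len(prices))]
mode = [LpVariable(f"m{t}", cat="Binary") for t in range(len(prices))]  # 1 = charging slot

soc = soc0
for t in range(len(prices)):
    prob += charge[t] <= p_max * mode[t]            # no simultaneous charging
    prob += discharge[t] <= p_max * (1 - mode[t])   # and discharging in one slot
    soc = soc + eta * charge[t] - discharge[t] * (1 / eta)
    prob += soc <= cap
    prob += soc >= 0
prob += soc >= soc_required                         # full enough before service starts
prob += lpSum(prices[t] * (charge[t] - discharge[t]) for t in range(len(prices)))  # net cost

prob.solve()
print([(value(c), value(d)) for c, d in zip(charge, discharge)], value(prob.objective))
```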

Keywords: electric vehicle fleet, sector coupling, optimization, electricity market, balancing market

Procedia PDF Downloads 76
24 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs

Authors: Michela Quadrini

Abstract:

Chord diagrams occur throughout mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology one important motivation for studying chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one interaction, a Watson-Crick base pair, between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which a number of chords with distinct endpoints are attached. There is a natural fattening of any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane, and each linear chord diagram has a natural genus given by its associated surface. To each chord diagram and linear chord diagram it is possible to associate an intersection graph: a graph whose vertices correspond to the chords of the diagram and whose edges represent the chord intersections. Such an intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modelling an LCD in terms of the relations among chords. This set is composed of crossing, nesting, and concatenation. The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it associates a unique algebraic term to each linear chord diagram, while the remaining operators allow the term to be rewritten through a set of appropriate rewriting rules. Such rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and the linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than existing ones. The LCD equivalence class could also be useful for obtaining a more accurate estimate of the link between the crossing number and the topological genus, and for studying the relations among other invariants.
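The intersection-graph construction itself is elementary: two chords of a linear chord diagram cross exactly when their endpoints interleave along the backbone. The sketch below builds this graph for a small, illustrative diagram (the chords chosen are not from the paper).

```python
# Sketch: intersection graph of a linear chord diagram.
import networkx as nx
from itertools import combinations

def intersection_graph(chords):
    """chords: list of (i, j) endpoint positions along the backbone."""
    g = nx.Graph()
    g.add_nodes_from(range(len(chords)))
    for (a, c1), (b, c2) in combinations(enumerate(chords), 2):
        i, j = sorted(c1)
        k, l = sorted(c2)
        crossing = (i < k < j < l) or (k < i < l < j)   # endpoints interleave
        if crossing:
            g.add_edge(a, b)
    return g

g = intersection_graph([(1, 4), (2, 6), (3, 5)])
print(sorted(g.edges()))   # [(0, 1), (0, 2)]: chord (1,4) crosses both; (3,5) nests inside (2,6)
```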

Keywords: chord diagrams, linear chord diagram, equivalence class, topological language

Procedia PDF Downloads 203
23 Design, Construction, Validation And Use Of A Novel Portable Fire Effluent Sampling Analyser

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

Current large-scale fire tests focus on flammability and heat release measurements; smoke toxicity is not considered, despite it being a leading cause of death and injury in unwanted fires. A key reason could be that the practical difficulties associated with quantifying the individual toxic components present in a fire effluent often require specialist equipment and expertise. Fire effluent contains a mixture of unreactive and reactive gases, water, organic vapours and particulate matter, which interact with each other. This interferes with the operation of the analytical instrumentation and must be removed without changing the concentration of the target analyte. To mitigate the need for expensive equipment and time-consuming analysis, a portable gas analysis system was designed, constructed and tested for use in large-scale fire tests as a simpler and more robust alternative to online FTIR measurements. The novel equipment aimed to be easily portable and able to run on battery or mains electricity; to be calibrated at the test site; to quantify CO, CO2, O2, HCN, HBr, HCl, NOx and SO2 accurately and reliably; to be capable of independent data logging; to allow automated switchover of 7 bubblers; to withstand fire effluents; to be simple to operate; to allow individual bubbler times to be pre-set; and to be controllable remotely. To test the analyser's functionality, it was used alongside the ISO/TS 19700 Steady State Tube Furnace (SSTF). A series of tests was conducted to assess the validity of the box analyser measurements and the data-logging abilities of the apparatus. PMMA and PA 6.6 were used to assess the validity of the box analyser measurements, and the data obtained from the bench-scale assessments showed excellent agreement. Following this, the portable analyser was used to monitor gas concentrations during large-scale testing using the ISO 9705 room corner test. The analyser was set up, calibrated and set to record smoke toxicity measurements in the doorway of the test room. The analyser operated without manual interference and successfully recorded data for all 12 tests conducted in the ISO room tests. At the end of each test, the analyser created a data file (formatted as .csv) containing the measured gas concentrations throughout the test, which does not require specialist knowledge to interpret. This validated the portable analyser's ability to monitor fire effluent without operator intervention at both bench and large scale. The portable analyser is a validated and significantly more practical alternative to FTIR, proven to work in large-scale fire testing for the quantification of smoke toxicity. It is a cheaper, more accessible option for assessing smoke toxicity, mitigating the need for expensive equipment and specialist operators.
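The automated behaviour described (pre-set bubbler dwell times, timed switchover, .csv logging of each gas channel) can be pictured with the conceptual control loop below. This is not the instrument's actual firmware: read_gas_concentrations() and select_bubbler() are hypothetical stand-ins for whatever hardware interface the analyser uses, and the sampling period is a placeholder.

```python
# Conceptual sketch of a timed bubbler-switchover loop with CSV data logging.
import csv
import time

GASES = ["CO", "CO2", "O2", "HCN", "HBr", "HCl", "NOx", "SO2"]

def read_gas_concentrations():
    return {g: 0.0 for g in GASES}        # placeholder for the real sensor read-out

def select_bubbler(index):
    pass                                  # placeholder for the valve/relay switching

def run_test(bubbler_times, logfile="run.csv", sample_period=1.0):
    """bubbler_times: list of dwell times in seconds, one per bubbler (up to 7)."""
    with open(logfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s", "bubbler"] + GASES)
        start = time.time()
        for i, dwell in enumerate(bubbler_times, start=1):
            select_bubbler(i)
            t0 = time.time()
            while time.time() - t0 < dwell:
                row = read_gas_concentrations()
                writer.writerow([round(time.time() - start, 1), i] + [row[g] for g in GASES])
                time.sleep(sample_period)

# run_test([300] * 7)   # e.g. seven bubblers, five minutes each
```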

Keywords: smoke toxicity, large-scale tests, iso 9705, analyser, novel equipment

Procedia PDF Downloads 78
22 Skull Extraction for Quantification of Brain Volume in Magnetic Resonance Imaging of Multiple Sclerosis Patients

Authors: Marcela De Oliveira, Marina P. Da Silva, Fernando C. G. Da Rocha, Jorge M. Santos, Jaime S. Cardoso, Paulo N. Lisboa-Filho

Abstract:

Multiple sclerosis (MS) is an immune-mediated disease of the central nervous system characterized by neurodegeneration, inflammation, demyelination, and axonal loss. Magnetic resonance imaging (MRI), due to the richness of the information it provides, is the gold-standard exam for diagnosis and follow-up of neurodegenerative diseases such as MS. Brain atrophy, the gradual loss of brain volume, is quite extensive in multiple sclerosis, nearly 0.5-1.35% per year, far beyond the limits of normal aging. Thus, brain volume quantification becomes an essential task for subsequent analysis of the occurrence of atrophy. The analysis of MRI has become a tedious and complex task for clinicians, who have to manually extract important information; this manual analysis is prone to errors and is time-consuming due to intra- and inter-operator variability. Nowadays, computerized methods for MRI segmentation are extensively used to assist doctors in quantitative analyses for disease diagnosis and monitoring. Thus, the purpose of this work was to evaluate the brain volume in MRI scans of MS patients. We used MRI scans with 30 slices from five patients diagnosed with multiple sclerosis according to the McDonald criteria. The computational analysis of the images was carried out in two steps: segmentation of the brain and brain volume quantification. The first image processing step was brain extraction by skull stripping from the original image. In the skull stripper for brain MRI, the algorithm registers a grayscale atlas image to the grayscale patient image; the associated brain mask is propagated using the registration transformation, and this mask is then eroded and used for a refined brain extraction based on level sets (edge of the brain-skull border with dedicated expansion, curvature, and advection terms). In the second step, brain volume quantification was performed by counting the voxels belonging to the segmentation mask and converting the count to cubic centimetres (cc). We observed an average brain volume of 1469.5 cc. We conclude that the automatic method applied in this work can be used for brain extraction and brain volume quantification in MRI. The development and use of computer programs can contribute to assisting health professionals in the diagnosis and monitoring of patients with neurodegenerative diseases. In future work, we expect to implement more automated methods for the assessment of cerebral atrophy and the quantification of brain lesions, including machine-learning approaches. Acknowledgements: This work was supported by a grant from the Brazilian agency Fundação de Amparo à Pesquisa do Estado de São Paulo (grant number 2019/16362-5).
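The final quantification step (counting mask voxels and converting to cc) can be sketched as follows; the file names are placeholders, and the authors' exact tooling is not specified in the abstract.

```python
# Sketch: brain volume in cc from a binary brain mask and the voxel dimensions
# stored in the NIfTI header.
import nibabel as nib
import numpy as np

def brain_volume_cc(mask_path):
    mask_img = nib.load(mask_path)
    mask = mask_img.get_fdata() > 0                        # binary mask from skull stripping
    voxel_mm3 = np.prod(mask_img.header.get_zooms()[:3])   # voxel volume in mm^3
    return mask.sum() * voxel_mm3 / 1000.0                 # 1 cc = 1000 mm^3

# volumes = [brain_volume_cc(f"patient_{i}_brain_mask.nii.gz") for i in range(1, 6)]
# print(np.mean(volumes))   # e.g. the ~1469.5 cc average reported above
```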

Keywords: brain volume, magnetic resonance imaging, multiple sclerosis, skull stripper

Procedia PDF Downloads 147
21 Direct Current Grids in Urban Planning for More Sustainable Urban Energy and Mobility

Authors: B. Casper

Abstract:

The energy transition towards renewable energies and drastically reduced carbon dioxide emissions in Germany is driving multiple sectors into a transformation process. Photovoltaic and onshore wind power feed predominantly into the low- and medium-voltage grids, which are not laid out to accommodate an increasing feed-in of power. Electric mobility is currently in its run-up phase in Germany and still lacks a significant number of charging stations, and in most cases the additional power demand from e-mobility cannot be supplied by the existing electric grids. The future heating and cooling demand of commercial and residential buildings will increasingly be met by heat pumps. Yet the most important part of the energy transition is the storage of surplus energy generated by photovoltaic and wind power sources. Water electrolysis is one way to store surplus energy, known as power-to-gas. With vehicle-to-grid technology, the upcoming fleet of electric cars could be used as energy storage to stabilize the grid. All these processes use direct current (DC). The demand for bi-directional flow and higher efficiency in future grids can be met by using DC. The Flexible Electrical Networks (FEN) research campus at RWTH Aachen investigates, in an interdisciplinary manner, the advantages, opportunities, and limitations of DC grids. This paper investigates the impact of DC grids as a technological innovation on urban form and urban life. Through explorative scenario development, analysis of mapped open data sources on grid networks, and research-by-design as a conceptual design method, possible starting points for a transformation to DC medium-voltage grids could be identified. Several fields of action have emerged in which DC technology could become a catalyst for future urban development: the energy transition in urban areas, e-mobility, and the transformation of the network infrastructure. The investigation shows a significant potential to increase renewable energy production within cities with DC grids. The charging infrastructure for electric vehicles will predominantly use DC in the future because fast and ultra-fast charging can only be achieved with DC. Our research shows that e-mobility, combined with autonomous driving, has the potential to change urban space and urban logistics fundamentally. Furthermore, there are possible win-win-win solutions for the municipality, the grid operator, and the inhabitants: replacing overhead transmission lines with underground DC cables to open up spaces in contested urban areas is a positive example of how the energy transition can contribute to a more sustainable urban structure. The outlook makes clear that target grid planning and urban planning will increasingly need to be synchronized.

Keywords: direct current, e-mobility, energy transition, grid planning, renewable energy, urban planning

Procedia PDF Downloads 129
20 Modeling and Analysis of Drilling Operation in Shale Reservoirs with Introduction of an Optimization Approach

Authors: Sina Kazemi, Farshid Torabi, Todd Peterson

Abstract:

Drilling in shale formations is frequently time-consuming, challenging, and fraught with mechanical failures such as stuck pipe or the hole packing off when the cuttings removal rate is not sufficient to clean the bottom hole. Crossing heavy oil shale and sand reservoirs with active shale and microfractures is generally associated with severe fluid losses, causing a reduction in the rate of cuttings removal. These circumstances compromise a well's integrity and result in a lower rate of penetration (ROP). This study presents the collective results of field studies and theoretical analysis conducted on data from the South Pars and North Dome Iran-Qatar offshore field. Solutions to complications related to drilling in shale formations are proposed by systematically analyzing and applying modeling techniques to selected field mud-logging data. Field measurements during actual drilling operations indicate that, in a shale formation where the return flow of polymer mud was almost lost in the upper dolomite layer, hole-cleaning performance and ROP progressively improved when higher string rotation speeds were initiated. Likewise, it was observed that this effect reduced rotational torque and improved well integrity during the subsequent casing run. Given similar geological conditions and drilling operations in reservoirs targeting shale as the producing zone, such as the Bakken formation within the Williston Basin and Lloydminster, Saskatchewan, a drill-bench dynamic modeling simulation was used to simulate borehole-cleaning efficiency and mud optimization. The results obtained by altering the RPM (string revolutions per minute) at the same pump rate and with optimized mud properties exhibit a positive correlation with the field measurements. The field investigation and the model developed in this study show that increasing the string rotation speed, as far as geomechanics and drill bit conditions permit, can minimize the risk of mechanically stuck pipe while reaching a higher-than-expected ROP in shale formations. Based on the modeling and field data analysis, optimized drilling parameters and hole-cleaning procedures are suggested for minimizing the risk of the hole packing off and enhancing well integrity in shale reservoirs. Whereas optimization of ROP at a lower pump rate maintains wellbore stability, it also saves time for the operator while reducing carbon emissions and the fatigue of mud motors and power supply engines.
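To illustrate how pump rate and annular geometry enter a hole-cleaning assessment of this kind, the sketch below computes a generic annular velocity and cuttings transport ratio. This is not the drill-bench dynamic model used in the study and it does not capture the string-rotation effect discussed above; the slip velocity and geometry values are placeholders.

```python
# Minimal sketch of a generic hole-cleaning indicator (annular velocity and
# cuttings transport ratio). All numbers are illustrative placeholders.
import math

def annular_velocity_m_s(flow_rate_lpm, hole_d_mm, pipe_d_mm):
    """Average mud velocity in the annulus for a given pump rate."""
    area_m2 = math.pi / 4 * ((hole_d_mm / 1000) ** 2 - (pipe_d_mm / 1000) ** 2)
    return (flow_rate_lpm / 1000 / 60) / area_m2

def transport_ratio(annular_velocity, cutting_slip_velocity):
    """Fraction of the annular velocity effectively carrying cuttings upward."""
    return 1.0 - cutting_slip_velocity / annular_velocity

v_ann = annular_velocity_m_s(flow_rate_lpm=2500, hole_d_mm=215.9, pipe_d_mm=127.0)
print(f"annular velocity = {v_ann:.2f} m/s, "
      f"transport ratio = {transport_ratio(v_ann, 0.3):.2f}")
```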

Keywords: ROP, circulating density, drilling parameters, return flow, shale reservoir, well integrity

Procedia PDF Downloads 87
19 Detection of Egg Proteins in Food Matrices (2011-2021)

Authors: Daniela Manila Bianchi, Samantha Lupi, Elisa Barcucci, Sandra Fragassi, Clara Tramuta, Lucia Decastelli

Abstract:

Introduction: The detection of undeclared allergens in food products plays a fundamental role in the safety of allergic consumers. In Europe, the protection of allergic consumers is guaranteed by Regulation (EU) No 1169/2011 of the European Parliament, which governs the consumer's right to information and identifies 14 food allergens that must be indicated on food labels; egg is among them. Egg can be present as an ingredient or as a contaminant in raw and cooked products. The main allergenic egg proteins are ovomucoid, ovalbumin, lysozyme, and ovotransferrin. This study presents the results of a survey conducted in Northern Italy aimed at detecting the presence of undeclared egg proteins in food matrices over the last ten years (2011-2021). Method: In the period January 2011 - October 2021, a total of 1205 different food matrices (ready-to-eat foods, meats and meat products, bakery and pastry products, baby foods, food supplements, pasta, fish and fish products, and preparations for soups and broths) were delivered to the Food Control Laboratory of Istituto Zooprofilattico Sperimentale of Piemonte, Liguria and Valle d'Aosta to be analyzed as official samples within the Regional Monitoring Plan of Food Safety or in the context of food poisoning investigations. The laboratory is ISO 17025 accredited and, since 2019, has been the National Reference Centre for the detection in foods of substances causing food allergies or intolerances (CreNaRiA). All samples were stored in the laboratory according to food business operator instructions and analyzed within the expiry date for the detection of undeclared egg proteins. Analyses were performed with the RIDASCREEN®FAST Ei/Egg kit (R-Biopharm® Italia srl): the method was internally validated and accredited with a limit of detection (LOD) of 2 ppm (mg/kg). It is a sandwich enzyme immunoassay for the quantitative analysis of whole egg powder in foods. Results: The results show that egg proteins were found in 2% (n = 28) of food matrices, including meats and meat products (n = 16), fish and fish products (n = 4), bakery and pastry products (n = 4), pasta (n = 2), preparations for soups and broths (n = 1) and ready-to-eat foods (n = 1). In particular, egg proteins were detected in 5% of samples in 2011, 4% in 2012, 2% in 2013, 2016 and 2018, and 3% in 2014, 2015 and 2019. No egg protein traces were detected in 2017, 2020, and 2021. Discussion: Food allergies occur in the Western world in 2% of adults and up to 8% of children, and egg allergy is one of the most common food allergies in children. The percentage of positive samples obtained in this study is, however, low. The trend over the ten years has been slightly variable, with comparable data.

Keywords: allergens, food, egg proteins, immunoassay

Procedia PDF Downloads 138
18 Effects of Soil Neutron Irradiation in Soil Carbon Neutron Gamma Analysis

Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert

Abstract:

The carbon sequestration question of modern times requires the development of an in-situ method for measuring soil carbon over large landmasses. Traditional chemical analytical methods used to evaluate large land areas require extensive soil sampling prior to laboratory analysis; collectively, this is labor-intensive and time-consuming. An alternative is to apply nuclear physics analysis, primarily in the form of pulsed fast-thermal neutron-gamma soil carbon analysis. This method is based on measuring the gamma-ray response that appears upon neutron irradiation of soil. The specific gamma line with an energy of 4.438 MeV that appears under neutron irradiation can be attributed to soil carbon nuclei, and assessments of soil carbon concentration can be made from its intensity. The measurement can be done directly in the field using a specially developed pulsed fast-thermal neutron-gamma system (PFTNA system). This system conducts in-situ analysis in a scanning mode coupled with GPS, which provides soil carbon concentration and its distribution over large fields. The system has radiation shielding to keep the dose rate within radiation safety guidelines for safe operator usage. This study addresses questions concerning the effect of neutron irradiation on soil health. Information regarding the absorbed neutron and gamma dose received by the soil and its distribution with depth is discussed; it was generated from Monte Carlo simulations (MCNP6.2 code) of neutron and gamma propagation in soil. The resulting data were used to analyze possible irradiation-induced effects. The physical, chemical, and biological effects of neutron soil irradiation were considered. From a physical standpoint, we considered the induction of new isotopes by the neutrons produced by the PFTNA system and estimated the possibility of an increased post-irradiation gamma background by comparison with the natural background. An insignificant increase in gamma background appeared immediately after irradiation but returned to original values after several minutes due to the decay of short-lived new isotopes. From a chemical standpoint, possible radiolysis of water present in the soil was considered. Based on simulations of water radiolysis, we concluded that the gamma dose rates involved cannot drive radiolysis at a notable rate. Possible effects of neutron irradiation by the PFTNA system on soil biota were also assessed experimentally. No notable changes were observed at the taxonomic level, nor was functional soil diversity affected. Our assessment suggests that the use of a PFTNA system with a neutron flux of 1e7 n/s for soil carbon analysis does not notably affect soil properties or soil health.
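To make the measurement principle concrete, the sketch below shows one plausible way a background-subtracted 4.438 MeV peak count rate could be mapped to a soil carbon concentration through a linear calibration. The calibration coefficients and the spectrum numbers are placeholders, not values from the study; in practice such coefficients are obtained from reference measurements on soils of known carbon content.

```python
# Minimal sketch: net carbon-peak count rate -> soil carbon, via a hypothetical
# linear calibration. All numerical values are illustrative placeholders.
def net_peak_cps(gross_counts, background_counts, live_time_s):
    """Background-subtracted count rate of the 4.438 MeV carbon line (counts/s)."""
    return (gross_counts - background_counts) / live_time_s

def carbon_weight_percent(net_cps, slope=0.0042, intercept=0.15):
    """Convert net peak count rate to carbon weight percent (placeholder calibration)."""
    return slope * net_cps + intercept

if __name__ == "__main__":
    rate = net_peak_cps(gross_counts=185_000, background_counts=62_000, live_time_s=600)
    print(f"Estimated soil carbon: {carbon_weight_percent(rate):.2f} wt%")
```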

Keywords: carbon sequestration, neutron gamma analysis, radiation effect on soil, Monte-Carlo simulation

Procedia PDF Downloads 145
17 Progress Towards Optimizing and Standardizing Fiducial Placement Geometry in Prostate, Renal, and Pancreatic Cancer

Authors: Shiva Naidoo, Kristena Yossef, Grimm Jimm, Mirza Wasique, Eric Kemmerer, Joshua Obuch, Anand Mahadevan

Abstract:

Background: Fiducial markers effectively enhance tumor target visibility for stereotactic body radiation therapy and proton therapy. To streamline clinical practice, fiducial placement guidelines from a robotic radiosurgery vendor were examined with the goal of optimizing and standardizing feasible geometries for each treatment indication. Clinical examples of prostate, renal, and pancreatic cases are presented. Methods: Vendor guidelines (Accuray, Sunnyvale, CA) suggest implantation of 4-6 fiducials at least 20 mm apart, with at least a 15-degree angular difference between fiducials, within 50 mm or less of the target centroid, to ensure that any potential fiducial motion (e.g., from respiration or abdominal/pelvic pressures) will mimic target motion. It is also recommended that all fiducials be visible in 45-degree oblique views with no overlap, to coincide with the robotic radiosurgery imaging planes. For the prostate, a standardized geometry that meets all these objectives is a 2 cm-by-2 cm square in the coronal plane. Transperineal implantation of two pairs of preloaded tandem fiducials makes this 2 cm-by-2 cm square geometry clinically feasible. The same technique may be applied to renal cancer, with the square repositioned in a sagittal plane and the fiducials placed retroperitoneally into the tumor. Pancreatic fiducial placement via endoscopic ultrasound (EUS) is technically more challenging, as fiducial placement is operator-dependent and lesion access may be limited by adjacent vasculature, tumor location, or restricted mobility of the EUS probe in the duodenum. Fluoroscopically assisted fiducial placement during EUS can help ensure that fiducial markers are deployed with optimal geometry and visualization. Results: Among the first 22 fiducial cases on a newly installed robotic radiosurgery system, live x-ray images for all nine prostate cases showed excellent fiducial visualization at the treatment console. Renal and pancreatic fiducials were not as clearly visible due to difficult target access and the use of smaller-caliber insertion needles and fiducials. The geometry of the first prostate case was used to ensure accurate geometric marker placement in the remaining eight cases. Initially, some of the renal and pancreatic fiducials were placed closer together than the 20 mm recommendation, and interactive feedback with the proceduralists led to subsequent fiducials being placed too far out, toward the edge of the tumor. Further feedback and discussion of all cases are being used to guide standardized geometries and achieve ideal fiducial placement. Conclusion: The ideal tradeoff between fiducial visibility and the thinnest possible needle gauge to avoid complications needs to be systematically optimized across all patients, particularly with regard to body habitus. Multidisciplinary collaboration among proceduralists and radiation oncologists can lead to improved outcomes.
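As an illustration of the vendor criteria quoted above, the following Python sketch checks a candidate fiducial set against the 20 mm spacing, 15-degree angular separation, and 50 mm centroid-distance rules. Measuring the angular separation about the target centroid is our interpretation of the guideline, and the coordinates are hypothetical; this is a geometry check, not clinical guidance.

```python
# Minimal sketch: verify a fiducial arrangement against the quoted placement rules.
# Coordinates are in mm and purely illustrative.
import itertools
import numpy as np

def check_fiducials(points_mm, centroid_mm):
    pts, c = np.asarray(points_mm, float), np.asarray(centroid_mm, float)
    report = {"within_50mm": bool(np.all(np.linalg.norm(pts - c, axis=1) <= 50.0))}
    spacing_ok, angle_ok = True, True
    for i, j in itertools.combinations(range(len(pts)), 2):
        if np.linalg.norm(pts[i] - pts[j]) < 20.0:       # pairwise spacing >= 20 mm
            spacing_ok = False
        u, v = pts[i] - c, pts[j] - c                     # directions from centroid
        cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        if np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) < 15.0:
            angle_ok = False
    report.update(spacing_ok=spacing_ok, angular_separation_ok=angle_ok)
    return report

# Example: the 2 cm-by-2 cm coronal square used for prostate cases.
square = [(-10, 0, -10), (10, 0, -10), (-10, 0, 10), (10, 0, 10)]
print(check_fiducials(square, centroid_mm=(0, 0, 0)))
```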

Keywords: fiducial, prostate cancer, renal cancer, pancreatic cancer, radiotherapy

Procedia PDF Downloads 93
16 The Asymptotic Hole Shape in Long Pulse Laser Drilling: The Influence of Multiple Reflections

Authors: Torsten Hermanns, You Wang, Stefan Janssen, Markus Niessen, Christoph Schoeler, Ulrich Thombansen, Wolfgang Schulz

Abstract:

In long pulse laser drilling of metals, it can be demonstrated that the ablation shape approaches a so-called asymptotic shape, which changes only slightly, or not at all, with further irradiation. Such findings are already known from ultrashort pulse (USP) ablation of dielectric and semiconducting materials. Here, the explanation for the occurrence of an asymptotic shape in long pulse drilling of metals is identified, and a model for the description of the asymptotic hole shape is numerically implemented, tested, and clearly confirmed by comparison with experimental data. The model assumes a robust process, in the sense that the characteristics of the melt flow inside the arising melt film do not change qualitatively when the laser or processing parameters are changed. Only robust processes are technically controllable and thus of industrial interest. The condition for a robust process is identified as a threshold for the mass flow density of the assist gas at the hole entrance, which has to be exceeded. Within a robust process regime, the melt flow characteristics can be captured by a single model parameter, namely the intensity threshold. In analogy to USP ablation (where it has long been known that the resulting hole shape follows from a threshold for the absorbed laser fluence), it is demonstrated that in robust long pulse ablation the asymptotic shape forms such that, along the whole contour, the absorbed heat flux density is equal to the intensity threshold. The intensity threshold depends on the specific material and radiation properties and has to be calibrated by one reference experiment. The model is implemented in a numerical simulation called AsymptoticDrill, which requires so few resources that it can run on common desktop PCs, laptops, or even smart devices. Resulting hole shapes can be calculated within seconds, which is a clear advantage over other simulations presented in the literature in the context of everyday industrial usage. Against this background, the software is additionally equipped with a user-friendly GUI that allows intuitive usage: individual parameters can be adjusted using sliders while the simulation result appears immediately in an adjacent window. Platform-independent development allows flexible usage: an operator can use the tool on a tablet to adjust the process in a very convenient manner, while a developer can execute the tool in the office to design new processes. Furthermore, to the best knowledge of the authors, AsymptoticDrill is the first simulation that allows the import of measured real beam distributions and thus calculates the asymptotic hole shape on the basis of the real state of the specific manufacturing system. In this paper, the emphasis is placed on investigating the effect of multiple reflections on the asymptotic hole shape, which gains in importance when drilling holes with large aspect ratios.
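The asymptotic condition stated above can be written compactly as follows. The notation (local absorptivity A, incident intensity I including multiple reflections, threshold I_th, asymptotic contour) is introduced here for clarity and is not quoted from the paper.

```latex
% Sketch of the asymptotic-shape condition; the notation is ours, not the authors'.
\begin{equation}
  q_{\mathrm{abs}}(\mathbf{x}) \;=\; A(\mathbf{x})\, I(\mathbf{x}) \;=\; I_{\mathrm{th}}
  \qquad \forall\, \mathbf{x} \in \Gamma_{\infty},
\end{equation}
% where \Gamma_{\infty} denotes the asymptotic hole contour: further irradiation no
% longer changes the shape because no point on the wall receives an absorbed heat
% flux density above the calibrated intensity threshold.
```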

Keywords: asymptotic hole shape, intensity threshold, long pulse laser drilling, robust process

Procedia PDF Downloads 214
15 Solar Power Forecasting for the Bidding Zones of the Italian Electricity Market with an Analog Ensemble Approach

Authors: Elena Collino, Dario A. Ronzio, Goffredo Decimi, Maurizio Riva

Abstract:

The rapid increase of renewable energy in Italy is led by wind and solar installations, and the 2017 Italian energy strategy foresees a further development of these sustainable technologies, especially solar. This has resulted in new opportunities as well as new challenges and problems to deal with. The growth of renewables allows the European requirements on energy and environmental policy to be met, but these sources are difficult to manage because they are intermittent and non-programmable. Operationally, these characteristics can lead to instability in the voltage profile and increasing uncertainty in energy reserve scheduling. The increasing renewable production must therefore be considered with more and more attention, especially by the Transmission System Operator (TSO). Every day, once the market outcome has been determined, the TSO issues dispatch orders over extended areas defined mainly on the basis of power transmission limitations. In Italy, six market zones are defined: Northern Italy, Central-Northern Italy, Central-Southern Italy, Southern Italy, Sardinia, and Sicily. An accurate hourly day-ahead forecast of renewable power over these extended areas brings improvements in both dispatching and reserve management. In this study, an operational forecasting tool for the hourly solar output of the six Italian market zones is presented and its performance analysed. The implementation is based on a numerical weather prediction model coupled with statistical post-processing that derives the power forecast from the meteorological projection. The weather forecast is obtained from the limited-area model RAMS over the Italian territory, initialized with IFS-ECMWF boundary conditions. The post-processing calculates the solar power production with the Analog Ensemble technique (AN). This statistical approach forecasts the production using a probability distribution of the production measured in the past whenever the weather scenario looked very similar to the forecasted one. The similarity is evaluated for the components of the solar radiation: global (GHI), diffuse (DIF), and direct normal (DNI) irradiance, together with the corresponding solar azimuth and zenith angles; these are the main factors that affect solar production. Considering that the AN performance is strictly related to the length and quality of the historical data, a training period of more than one year has been used. The training set consists of historical numerical weather prediction (NWP) forecasts at 12 UTC for the GHI, DIF, and DNI variables over the Italian territory, together with the corresponding hourly measured production for each of the six zones. The AN technique makes it possible to estimate the aggregate solar production in an area without information about the technical characteristics of all the solar parks present in that area; besides, this information is often only partially available. Every day, the hourly solar power forecast for the six Italian market zones is made publicly available through a website.
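The analog ensemble step can be sketched as follows: for a target forecast hour, the historical NWP forecasts most similar in the predictor variables (GHI, DIF, DNI, solar zenith and azimuth) are selected, and the corresponding measured zonal productions form the ensemble. The distance metric, variable names, and toy data below are illustrative assumptions, not the operational implementation.

```python
# Minimal sketch of an analog ensemble (AnEn) lookup for one hour and one zone.
import numpy as np

def analog_ensemble(train_predictors, train_production, target_predictors,
                    n_analogs=20, weights=None):
    """Return summary statistics of past productions whose forecasts best match the target."""
    X = np.asarray(train_predictors, float)      # shape (n_history, n_vars)
    y = np.asarray(train_production, float)      # shape (n_history,)
    t = np.asarray(target_predictors, float)     # shape (n_vars,)
    w = np.ones(X.shape[1]) if weights is None else np.asarray(weights, float)
    scale = X.std(axis=0) + 1e-9                 # normalise predictor units
    dist = np.sqrt((((X - t) / scale) ** 2 * w).sum(axis=1))
    analogs = y[np.argsort(dist)[:n_analogs]]    # productions of the closest analogs
    return {"mean": analogs.mean(),
            "p10": np.percentile(analogs, 10),
            "p90": np.percentile(analogs, 90)}

# Hypothetical usage with synthetic history (GHI, DIF, DNI, zenith, azimuth):
rng = np.random.default_rng(0)
hist_X = rng.uniform([0, 0, 0, 0, 90], [900, 400, 800, 80, 270], size=(5000, 5))
hist_y = 2.5 * hist_X[:, 0] + rng.normal(0, 50, 5000)   # toy zonal production in MW
print(analog_ensemble(hist_X, hist_y, [650, 120, 500, 35, 180]))
```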

Keywords: analog ensemble, electricity market, PV forecast, solar energy

Procedia PDF Downloads 159
14 Development of a Systematic Design for Evaluating Force-on-Force Security Exercises at Nuclear Power Plants

Authors: Seungsik Yu, Minho Kang

Abstract:

As the threat of terrorism against nuclear facilities has increased globally since the attacks of September 11, efforts are being made to reassess physical protection systems and strengthen emergency response systems. Since 2015, Korea has implemented physical protection security exercises for nuclear facilities. The exercises should be carried out with full cooperation between the operator and the response forces. Performance testing of the physical protection system should include appropriate exercises, for example force-on-force exercises, to determine whether the response forces can provide an effective and timely response to prevent sabotage. Significant deficiencies and the actions taken should be reported as stipulated by the competent authority. The IAEA (International Atomic Energy Agency) is also preparing force-on-force exercise program documents to support exercises in member states. Currently, the ROK (Republic of Korea) conducts exercises using a force-on-force exercise evaluation system developed in-house for nuclear power plants, and it is necessary to establish exercise procedures that take the use of this evaluation system into account. The purpose of this study is to establish the working procedures of the three major organizations involved in force-on-force exercises at nuclear power plants in the ROK that conduct exercises using the force-on-force exercise evaluation system. The three major organizations are the licensee, KINAC (Korea Institute of Nuclear Nonproliferation and Control), and the NSSC (Nuclear Safety and Security Commission). The major activities are as follows. First, the licensee establishes and conducts an exercise plan and, when recommendations are derived from the results of the exercise, prepares and carries out a force-on-force result report including a plan for implementing the recommendations. Other detailed tasks include consultation with surrounding units acting as the adversary force, interviews with exercise participants, support for document evaluation, and self-training to improve familiarity with MILES (Multiple Integrated Laser Engagement System). Second, KINAC establishes a review report on the force-on-force exercise plan and reviews the exercise plan report established by the licensee. KINAC evaluates the force-on-force exercise using the exercise evaluation system and prepares an exercise evaluation report. Other detailed tasks include MILES training, adversary consultation, management of the exercise evaluation systems, and analysis of the exercise evaluation results. Finally, the NSSC decides whether or not to approve the force-on-force exercise and issues correction requests to the nuclear facility based on the exercise results. The most important part of the ROK's force-on-force exercise system is the analysis performed by KINAC after the exercise through the exercise evaluation system. The analytical method proceeds by collecting data from the exercise evaluation system and then analyzing the collected data. The application process of the exercise evaluation system, introduced in the ROK in 2016, will be set up concretely, and a system will be established to provide objective and consistent conclusions across exercise sessions. Based on the conclusions drawn, the ultimate goal is to complement the licensee's physical protection system so that the licensee can respond effectively and in a timely manner to sabotage or the unauthorized removal of nuclear materials.

Keywords: Force-on-Force exercise, nuclear power plant, physical protection, sabotage, unauthorized removal

Procedia PDF Downloads 142
13 Numerical Model of Crude Glycerol Autothermal Reforming to Hydrogen-Rich Syngas

Authors: A. Odoom, A. Salama, H. Ibrahim

Abstract:

Hydrogen is a clean source of energy for power production and transportation. The main source of hydrogen in this research is biodiesel. Glycerol, also called glycerine, is a by-product of biodiesel production by transesterification of vegetable oils and methanol; it is a more reliable and environmentally friendly source for hydrogen production than fossil fuels. A typical crude glycerol comprises glycerol, water, organic and inorganic salts, soap, methanol, and small amounts of glycerides. Crude glycerol has limited industrial application due to its low purity; thus, its use can significantly enhance the sustainability of biodiesel production. Reforming is the main approach for hydrogen production, chiefly steam reforming (SR), autothermal reforming (ATR), and partial oxidation reforming (POR). SR produces high hydrogen conversion and yield but is highly endothermic, whereas POR is exothermic; on the downside, POR yields less hydrogen and promotes a large number of side reactions. ATR, which is a fusion of partial oxidation reforming and steam reforming, is thermally neutral because the net reactor heat duty is zero. It offers relatively high hydrogen yield and selectivity while limiting coke formation. The complex chemical processes that take place during the production phases make it relatively difficult to construct a reliable and robust numerical model. A numerical model is a tool to mimic reality and provide insight into the influence of the parameters. In this work, we introduce a finite volume numerical study of an 'in-house' lab-scale ATR experiment. Previous numerical studies of this process have used either Comsol or nodal finite difference analysis. Comsol is a commercial package that is not readily available everywhere, and the lab-scale experiment can be considered well mixed in the radial direction, so one spatial dimension suffices to capture the essential features of ATR; in this work, we therefore develop our own numerical approach using MATLAB. A continuum fixed bed reactor is modelled in MATLAB with both pseudo-homogeneous and heterogeneous models. The drawback of the nodal finite difference formulation is that it is not locally conservative, which means that materials and momenta can be generated inside the domain as an artifact of the discretization. The control volume approach, on the other hand, is locally conservative and suits very well problems where materials are generated and consumed inside the domain. In this work, the species mass balances, Darcy's equation, and the energy equations are solved using an operator splitting technique: diffusion-like terms are discretized implicitly, while advection-like terms are discretized explicitly. An upwind scheme is adopted for the advection term to ensure accuracy and positivity. Comparisons with the experimental data show very good agreement, which builds confidence in our modeling approach. The models obtained were validated and optimized for better results.
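The operator-splitting idea described above can be illustrated on a single transported species over a 1-D grid: advection is advanced explicitly with an upwind scheme, diffusion implicitly. The Python sketch below is a generic illustration under those assumptions, not the authors' MATLAB reactor model; it omits reaction source terms, Darcy coupling, and the energy balance.

```python
# Minimal sketch: one split time step of 1-D advection-diffusion on a uniform grid.
# Explicit upwind advection (u > 0), implicit diffusion with zero-flux ends.
import numpy as np

def split_step(c, u, D, dx, dt):
    """Advance concentration c by one time step using operator splitting."""
    n = c.size
    # --- explicit upwind advection (inflow value held at c[0]) ---
    c_adv = c.copy()
    c_adv[1:] = c[1:] - u * dt / dx * (c[1:] - c[:-1])
    # --- implicit diffusion: solve (I - dt*D*L) c_new = c_adv ---
    r = D * dt / dx**2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1 + 2 * r
        if i > 0:
            A[i, i - 1] = -r
        if i < n - 1:
            A[i, i + 1] = -r
    A[0, 0] = A[-1, -1] = 1 + r          # Neumann (zero-flux) boundary cells
    return np.linalg.solve(A, c_adv)

# Hypothetical demo: a concentration pulse advected and smeared along the bed.
x = np.linspace(0.0, 1.0, 101)
c = np.exp(-((x - 0.2) / 0.05) ** 2)
for _ in range(200):
    c = split_step(c, u=0.5, D=1e-3, dx=x[1] - x[0], dt=0.002)
```

The explicit advection step is stable here because the CFL number u*dt/dx is 0.1, while the implicit treatment of diffusion avoids the much stricter explicit diffusion limit.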

Keywords: autothermal reforming, crude glycerol, hydrogen, numerical model

Procedia PDF Downloads 144
12 Maintaining Energy Security in Natural Gas Pipeline Operations by Empowering Process Safety Principles Through Alarm Management Applications

Authors: Huseyin Sinan Gunesli

Abstract:

Process safety management is a disciplined framework for managing the integrity of systems and processes that handle hazardous substances. It relies on good design principles, well-implemented automation systems, and sound operating and maintenance practices. Alarm management systems play a critically important role in the safe and efficient operation of modern industrial plants; effective alarm management is one of the critical factors supporting safe plant operations through the application of process safety principles. The Trans Anatolian Natural Gas Pipeline (TANAP) is part of the Southern Gas Corridor, which extends from the Caspian Sea to Italy. TANAP transports natural gas from the Shah Deniz gas field of Azerbaijan, and possibly from other neighboring countries, to Turkey and, through the Trans Adriatic Pipeline (TAP), to Europe. TANAP plays a crucial role in maintaining energy security for the region and for Europe. In that respect, the application of process safety principles is vital to delivering safe, reliable, and efficient natural gas to shippers both in the region and in Europe, and effective alarm management is one of the process safety principles that underpins safe operation of the TANAP pipeline. An alarm philosophy was designed and implemented for the TANAP pipeline according to the relevant standards; however, the alarms received in the control room must also be managed effectively to maintain safe operations. TANAP therefore commenced an alarm management and rationalization program in February 2022, after transitioning to the plateau regime and reaching its design parameters. When alarm rationalization started, more than approximately 2,300 alarms per hour were being received from one of the compressor stations. After applying alarm management principles, such as reviewing and removing bad actors and standing, stale, chattering, and fleeting alarms, comprehensively reviewing and revising alarm set points under a management-of-change process, and conducting alarm audits and design verification, the rate was reduced to approximately 40 alarms per hour. This brought the number of alarms down to industry standards and significantly improved operator vigilance, allowing operators to focus on the important and critical alarms and avoid excursions beyond safe operating limits that could lead to process safety events. Following the "what gets measured gets managed" principle, TANAP has identified key performance indicators (KPIs) to manage process safety principles effectively, and alarm management forms one of the key parameters of those KPIs. However, review and analysis of the alarms were initially performed manually, and without alarm management software, achieving full compliance with international standards is almost infeasible. TANAP has therefore started using one of the industry-wide known alarm management applications to maintain full review and analysis of alarms and to define actions as required, which has significantly strengthened TANAP's application of process safety principles in terms of alarm management.
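Two of the alarm KPIs described above, the average alarm rate per hour and the identification of chattering tags, can be sketched as follows. The log format (tag, timestamp pairs) and the thresholds are illustrative assumptions, not the rules of the vendor application used by TANAP.

```python
# Minimal sketch of two alarm KPIs: average alarms per hour and chattering tags
# (a tag that annunciates repeatedly within a short window). Illustrative only.
from collections import defaultdict
from datetime import datetime, timedelta

def alarm_rate_per_hour(events):
    """events: list of (tag, datetime) tuples for one station."""
    if not events:
        return 0.0
    times = sorted(t for _, t in events)
    span_h = max((times[-1] - times[0]).total_seconds() / 3600.0, 1e-9)
    return len(events) / span_h

def chattering_tags(events, repeats=3, window=timedelta(minutes=1)):
    """Flag tags that annunciate `repeats` or more times inside `window`."""
    by_tag = defaultdict(list)
    for tag, t in events:
        by_tag[tag].append(t)
    flagged = set()
    for tag, times in by_tag.items():
        times.sort()
        for i in range(len(times) - repeats + 1):
            if times[i + repeats - 1] - times[i] <= window:
                flagged.add(tag)
                break
    return flagged

# Hypothetical usage with a tiny synthetic log:
log = [("PT-1001_HI", datetime(2022, 3, 1, 10, 0, s)) for s in (0, 10, 25)]
log += [("TT-2002_LO", datetime(2022, 3, 1, 10, 30, 0))]
print(alarm_rate_per_hour(log), chattering_tags(log))
```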

Keywords: process safety principles, energy security, natural gas pipeline operations, alarm rationalization, alarm management, alarm management application

Procedia PDF Downloads 104
11 Rapid, Automated Characterization of Microplastics Using Laser Direct Infrared Imaging and Spectroscopy

Authors: Andreas Kerstan, Darren Robey, Wesam Alvan, David Troiani

Abstract:

Over the last 3.5 years, quantum cascade laser (QCL) technology has become increasingly important in infrared (IR) microscopy. Its advantages over Fourier transform infrared (FTIR) microscopy are that large areas of a few square centimeters can be measured in minutes and that the high-intensity QCL makes it possible to obtain spectra with excellent S/N, even with just one scan. A firmly established application of laser direct infrared (LDIR) imaging on the 8700 LDIR system is the analysis of microplastics. The presence of microplastics in the environment, drinking water, and food chains is gaining significant public interest. To study their presence, rapid and reliable characterization of microplastic particles is essential. Significant technical hurdles in microplastic analysis stem from the sheer number of particles to be analyzed in each sample: total particle counts of several thousand are common in environmental samples, while well-treated bottled drinking water may contain relatively few. While visual microscopy has been used extensively, it is prone to operator error and bias and is limited to particles larger than 300 µm. As a result, vibrational spectroscopic techniques such as Raman and FTIR microscopy have become more popular; however, they are time-consuming. There is a demand for rapid and highly automated techniques that measure particle count and size and provide high-quality polymer identification. Analysis directly on the filter that often forms the last stage in sample preparation is highly desirable: by removing a sample preparation step, it can both improve laboratory efficiency and decrease opportunities for error. Recent advances in infrared micro-spectroscopy combining a QCL with scanning optics have created a new paradigm, LDIR. It offers improved speed of analysis as well as high levels of automation. Its mode of operation, however, requires an IR-reflective background, and this has, to date, limited the ability to perform direct 'on-filter' analysis. This study explores the potential of combining the filter with an IR-reflective surface. By combining an IR-reflective material or coating on a filter membrane with advanced image analysis and detection algorithms, it is demonstrated that such filters can indeed be used in this way. Vibrational spectroscopic techniques play a vital role in the investigation and understanding of microplastics in the environment and the food chain. While vibrational spectroscopy is widely deployed, improvements and innovations that increase the speed of analysis and ease of use provide pathways to higher testing rates and, hence, an improved understanding of the impacts of microplastics in the environment. Due to its capability to measure large areas in minutes, its speed, degree of automation, and excellent S/N, LDIR could also be applied to various other samples, such as food adulteration, coatings, laminates, fabrics, textiles, and tissues. This presentation will highlight a few of them and focus on the benefits of LDIR compared with classical techniques.
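Because the analytical burden described above comes largely from the number of particles per sample, a typical downstream step is tallying counts per identified polymer and per size class from an exported particle table. The sketch below assumes a simple CSV export with "polymer" and "size_um" columns; the file name, columns, and size bins are illustrative assumptions, not the instrument software's actual export format.

```python
# Minimal sketch: summarise an exported particle table by polymer and size class.
import csv
from collections import Counter

def summarise_particles(path, size_bins=(20, 50, 100, 300, 500)):
    by_polymer, by_size = Counter(), Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            by_polymer[row["polymer"]] += 1
            size = float(row["size_um"])
            label = next((f"<{b} um" for b in size_bins if size < b),
                         f">={size_bins[-1]} um")
            by_size[label] += 1
    return by_polymer, by_size

if __name__ == "__main__":
    polymers, sizes = summarise_particles("ldir_particle_export.csv")  # hypothetical file
    print(polymers.most_common(5))
    print(dict(sizes))
```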

Keywords: QCL, automation, microplastics, tissues, infrared, speed

Procedia PDF Downloads 67