Search results for: Grid computing
89 Development of a Psychometric Testing Instrument Using Algorithms and Combinatorics to Yield Coupled Parameters and Multiple Geometric Arrays in Large Information Grids
Authors: Laith F. Gulli, Nicole M. Mallory
Abstract:
The undertaking to develop a psychometric instrument is monumental. Understanding the relationship between variables and events is important in structural and exploratory design of psychometric instruments. Considering this, we describe a method used to group, pair and combine multiple Philosophical Assumption statements that assisted in development of a 13 item psychometric screening instrument. We abbreviated our Philosophical Assumptions (PA)s and added parameters, which were then condensed and mathematically modeled in a specific process. This model produced clusters of combinatorics which was utilized in design and development for 1) information retrieval and categorization 2) item development and 3) estimation of interactions among variables and likelihood of events. The psychometric screening instrument measured Knowledge, Assessment (education) and Beliefs (KAB) of New Addictions Research (NAR), which we called KABNAR. We obtained an overall internal consistency for the seven Likert belief items as measured by Cronbach’s α of .81 in the final study of 40 Clinicians, calculated by SPSS 14.0.1 for Windows. We constructed the instrument to begin with demographic items (degree/addictions certifications) for identification of target populations that practiced within Outpatient Substance Abuse Counseling (OSAC) settings. We then devised education items, beliefs items (seven items) and a modifiable “barrier from learning” item that consisted of six “choose any” choices. We also conceptualized a close relationship between identifying various degrees and certifications held by Outpatient Substance Abuse Therapists (OSAT) (the demographics domain) and all aspects of their education related to EB-NAR (past and present education and desired future training). We placed a descriptive (PA)1tx in both demographic and education domains to trace relationships of therapist education within these two domains. The two perceptions domains B1/b1 and B2/b2 represented different but interrelated perceptions from the therapist perspective. The belief items measured therapist perceptions concerning EB-NAR and therapist perceptions using EB-NAR during the beginning of outpatient addictions counseling. The (PA)s were written in simple words and descriptively accurate and concise. We then devised a list of parameters and appropriately matched them to each PA and devised descriptive parametric (PA)s in a domain categorized information grid. Descriptive parametric (PA)s were reduced to simple mathematical symbols. This made it easy to utilize parametric (PA)s into algorithms, combinatorics and clusters to develop larger information grids. By using matching combinatorics we took paired demographic and education domains with a subscript of 1 and matched them to the column with each B domain with subscript 1. Our algorithmic matching formed larger information grids with organized clusters in columns and rows. We repeated the process using different demographic, education and belief domains and devised multiple information grids with different parametric clusters and geometric arrays. We found benefit combining clusters by different geometric arrays, which enabled us to trace parametric variables and concepts. We were able to understand potential differences between dependent and independent variables and trace relationships of maximum likelihoods.Keywords: psychometric, parametric, domains, grids, therapists
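As a quick illustration of the internal-consistency statistic reported above, the following Python sketch computes Cronbach's α for a respondents-by-items score matrix; the data here are randomly generated stand-ins, not the KABNAR responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores."""
    k = items.shape[1]                         # number of items (seven belief items here)
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from 40 clinicians on 7 five-point Likert items
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(40, 7))
print(f"alpha = {cronbach_alpha(responses):.2f}")
```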
Procedia PDF Downloads 279
88 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses
Authors: Neil Bar, Andrew Heweston
Abstract:
Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance relating to how PF should be calculated for homogeneous and heterogeneous rock masses, or what qualifies a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offer statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, shear strength of geologic structure and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation is usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permits. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and obtain markedly different results. Ultimately sound engineering judgement and logic is often required to decipher the true meaning and significance (if any) of some PF results.Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability
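To illustrate how a Monte-Carlo PF calculation of the kind discussed above works in principle, here is a minimal Python sketch; the infinite-slope limit state and the input distributions are assumed for illustration only and are not the case-study model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Assumed input distributions (illustrative only)
cohesion = rng.normal(30.0, 6.0, n)          # kPa
phi = np.radians(rng.normal(35.0, 3.0, n))   # friction angle
gamma, depth = 26.0, 20.0                    # kN/m3, m (treated deterministically)
beta = np.radians(30.0)                      # slope angle

# Infinite-slope factor of safety (dry case), used only as a stand-in limit state
fs = (cohesion + gamma * depth * np.cos(beta)**2 * np.tan(phi)) / (
     gamma * depth * np.sin(beta) * np.cos(beta))

pf = np.mean(fs < 1.0)                       # probability of failure
print(f"mean FS = {fs.mean():.2f}, PF = {pf:.3%}")
```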
Procedia PDF Downloads 208
87 Hydrodynamic Characterisation of a Hydraulic Flume with Sheared Flow
Authors: Daniel Rowe, Christopher R. Vogel, Richard H. J. Willden
Abstract:
The University of Oxford’s recirculating water flume is a combined wave and current test tank with a 1 m depth, 1.1 m width, and 10 m long working section, and is capable of flow speeds up to 1 ms−1 . This study documents the hydrodynamic characteristics of the facility in preparation for experimental testing of horizontal axis tidal stream turbine models. The turbine to be tested has a rotor diameter of 0.6 m and is a modified version of one of two model-scale turbines tested in previous experimental campaigns. An Acoustic Doppler Velocimeter (ADV) was used to measure the flow at high temporal resolution at various locations throughout the flume, enabling the spatial uniformity and turbulence flow parameters to be investigated. The mean velocity profiles exhibited high levels of spatial uniformity at the design speed of the flume, 0.6 ms−1 , with variations in the three-dimensional velocity components on the order of ±1% at the 95% confidence level, along with a modest streamwise acceleration through the measurement domain, a target 5 m working section of the flume. A high degree of uniformity was also apparent for the turbulence intensity, with values ranging between 1-2% across the intended swept area of the turbine rotor. The integral scales of turbulence exhibited a far higher degree of variation throughout the water column, particularly in the streamwise and vertical scales. This behaviour is believed to be due to the high signal noise content leading to decorrelation in the sampling records. To achieve more realistic levels of vertical velocity shear in the flume, a simple procedure to practically generate target vertical shear profiles in open-channel flows is described. Here, the authors arranged a series of non-uniformly spaced parallel bars placed across the width of the flume and normal to the onset flow. By adjusting the resistance grading across the height of the working section, the downstream profiles could be modified accordingly, characterised by changes in the velocity profile power law exponent, 1/n. Considering the significant temporal variation in a tidal channel, the choice of the exponent denominator, n = 6 and n = 9, effectively provides an achievable range around the much-cited value of n = 7 observed at many tidal sites. The resulting flow profiles, which we intend to use in future turbine tests, have been characterised in detail. The results indicate non-uniform vertical shear across the survey area and reveal substantial corner flows, arising from the differential shear between the target vertical and cross-stream shear profiles throughout the measurement domain. In vertically sheared flow, the rotor-equivalent turbulence intensity ranges between 3.0-3.8% throughout the measurement domain for both bar arrangements, while the streamwise integral length scale grows from a characteristic dimension on the order of the bar width, similar to the flow downstream of a turbulence-generating grid. The experimental tests are well-defined and repeatable and serve as a reference for other researchers who wish to undertake similar investigations.Keywords: acoustic doppler Velocimeter, experimental hydrodynamics, open-channel flow, shear profiles, tidal stream turbines
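The target shear profiles above are characterised by the power-law exponent 1/n; the short Python sketch below evaluates such profiles for n = 6, 7 and 9 and shows a basic turbulence-intensity estimate from an ADV-style velocity record. The 0.6 m/s reference speed follows the abstract; everything else is illustrative.

```python
import numpy as np

def power_law_profile(z, u_ref=0.6, z_ref=1.0, n=7):
    """Vertical shear profile u(z) = u_ref * (z / z_ref)**(1/n)."""
    return u_ref * (np.asarray(z, dtype=float) / z_ref) ** (1.0 / n)

def turbulence_intensity(u_samples):
    """TI = sigma_u / U_mean from a streamwise velocity record (e.g. an ADV time series)."""
    u = np.asarray(u_samples, dtype=float)
    return u.std(ddof=1) / u.mean()

z = np.linspace(0.05, 1.0, 5)       # heights above the bed in the 1 m deep flume
for n in (6, 7, 9):
    print(f"n = {n}:", np.round(power_law_profile(z, n=n), 3))
```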
Procedia PDF Downloads 88
86 Magnetic Navigation in Underwater Networks
Authors: Kumar Divyendra
Abstract:
Underwater Sensor Networks (UWSNs) have wide applications in areas such as water quality monitoring and marine wildlife management. A typical UWSN system consists of a set of sensors deployed randomly underwater which communicate with each other using acoustic links. RF communication does not work underwater, and GPS is likewise unavailable. Additionally, Autonomous Underwater Vehicles (AUVs) are deployed to collect data from some special nodes called Cluster Heads (CHs). These CHs aggregate data from their neighboring nodes and forward them to the AUVs using optical links when an AUV is in range. This helps reduce the number of hops covered by data packets and helps conserve energy. We consider the three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater. They attach themselves to the surface using a rod and can only move upwards or downwards using a pump and bladder mechanism. We use graph theory concepts to maximize the coverage volume while ensuring that every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node finds out its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation. An AUV intending to move closer to a node with given coordinates moves hop by hop through nodes that are closest to it in terms of these coordinates. In the absence of GPS, multiple different approaches like Inertial Navigation Systems (INS), Doppler Velocity Logs (DVL), computer vision-based navigation, etc., have been proposed. These systems have their own drawbacks: INS accumulates error with time, and vision techniques require prior information about the environment. We propose a method that makes use of the earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume of the UWSN. The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal inclination (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the earth's magnetic field, which provides the field values for geographical coordinates on earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates, and we make use of this model within our work. We combine it with the hop-by-hop movement described earlier so that the AUVs move in such a sequence that the deep learning predictor gets trained as quickly and precisely as possible. We run simulations in MATLAB to demonstrate the effectiveness of our model with respect to other methods described in the literature.
Keywords: clustering, deep learning, network backbone, parallel computing
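A minimal Python sketch of the hop-distance coordinate idea described above: BFS hop counts to each surface node act as coordinates, and an AUV-style agent moves greedily toward a target coordinate. The toy topology and node names are assumptions for illustration.

```python
from collections import deque

def hop_distances(adj, landmark):
    """BFS hop count from a surface (landmark) node to every node in the UWSN graph."""
    dist, queue = {landmark: 0}, deque([landmark])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def coordinates(adj, landmarks):
    """Each node's 'coordinate' is its vector of hop distances to every landmark."""
    tables = [hop_distances(adj, lm) for lm in landmarks]
    return {node: tuple(t[node] for t in tables) for node in adj}

# Toy network: S1, S2 are surface nodes; A-D are submerged sensors
adj = {"S1": ["A"], "S2": ["B"], "A": ["S1", "B", "C"],
       "B": ["S2", "A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
coords = coordinates(adj, ["S1", "S2"])

def next_hop(current, target_coord):
    """Greedy hop-by-hop step: move to the neighbour closest to the target coordinate."""
    return min(adj[current],
               key=lambda v: sum(abs(a - b) for a, b in zip(coords[v], target_coord)))

print(coords)
print(next_hop("S1", coords["D"]))
```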
Procedia PDF Downloads 99
85 Pareto Optimal Material Allocation Mechanism
Authors: Peter Egri, Tamas Kis
Abstract:
Scheduling problems have been studied in algorithmic mechanism design research from the beginning. This paper focuses on a practically important, but theoretically rather neglected field: the project scheduling problem where jobs connected by precedence constraints compete for various nonrenewable resources, such as materials. Although the centralized problem can be solved in polynomial time by applying the algorithm of Carlier and Rinnooy Kan from the Eighties, obtaining materials in a decentralized environment is usually far from optimal. It can be observed in practical production scheduling situations that project managers tend to cache the required materials as soon as possible in order to avoid later delays due to material shortages. This greedy practice usually leads both to excess stocks for some projects and materials and, simultaneously, to shortages for others. The aim of this study is to develop a model for the material allocation problem of a production plant, where a central decision maker (the inventory) should assign the resources arriving at different points in time to the jobs. Since the actual due dates are not known by the inventory, the mechanism design approach is applied, with the projects as the self-interested agents. The goal of the mechanism is to elicit the required information and allocate the available materials such that the maximal tardiness among the projects is minimized. It is assumed that, except for the due dates, the inventory is familiar with all other parameters of the problem. A further requirement is that, due to practical considerations, monetary transfers are not allowed. Therefore, a mechanism without money is sought, which excludes some widely applied solutions such as the Vickrey–Clarke–Groves scheme. In this work, a type of Serial Dictatorship Mechanism (SDM) is presented for the studied problem, including a polynomial-time algorithm for computing the material allocation. The resulting mechanism is both truthful and Pareto optimal. Thus, randomization over the possible priority orderings of the projects results in a universally truthful and Pareto optimal randomized mechanism. However, it is shown that, in contrast to problems like the many-to-many matching market, not every Pareto optimal solution can be generated with an SDM. In addition, no performance guarantee can be given compared to the optimal solution; therefore, this approximation characteristic is investigated in an experimental study. All in all, the current work studies a practically relevant scheduling problem and presents a novel truthful material allocation mechanism which eliminates the potential benefit of the greedy behavior that negatively influences the outcome. The resulting allocation is also shown to be Pareto optimal, the most widely used criterion describing a necessary condition for a reasonable solution.
Keywords: material allocation, mechanism without money, polynomial-time mechanism, project scheduling
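The following Python sketch illustrates the general serial-dictatorship idea in a heavily simplified form (each project claims the earliest remaining material arrivals in priority order); it is not the authors' polynomial-time allocation algorithm, and all demands and arrival times are hypothetical.

```python
def serial_dictatorship(projects, arrivals, priority):
    """
    Simplified serial-dictatorship allocation sketch.
    projects : {name: units of material required}
    arrivals : sorted list of material arrival times (one unit per arrival)
    priority : ordering of project names (the 'dictatorship' order)
    Each project, in priority order, claims the earliest remaining arrivals.
    """
    remaining, allocation = list(arrivals), {}
    for p in priority:
        need = projects[p]
        allocation[p] = remaining[:need]
        remaining = remaining[need:]
    return allocation

projects = {"P1": 2, "P2": 1, "P3": 2}    # hypothetical material demands
arrivals = [1, 3, 4, 6, 9]                # hypothetical arrival times
print(serial_dictatorship(projects, arrivals, priority=["P2", "P1", "P3"]))
```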
Procedia PDF Downloads 333
84 Ethical Decision-Making in AI and Robotics Research: A Proposed Model
Authors: Sylvie Michel, Emmanuelle Gagnou, Joanne Hamet
Abstract:
Researchers in the fields of AI and Robotics frequently encounter ethical dilemmas throughout their research endeavors. Various ethical challenges have been pinpointed in the existing literature, including biases and discriminatory outcomes, diffusion of responsibility, and a deficit in transparency within AI operations. This research aims to pinpoint these ethical quandaries faced by researchers and shed light on the mechanisms behind ethical decision-making in the research process. By synthesizing insights from existing literature and acknowledging prevalent shortcomings, such as overlooking the heterogeneous nature of decision-making, non-accumulative results, and a lack of consensus on numerous factors due to limited empirical research, the objective is to conceptualize and validate a model. This model will incorporate influences from individual perspectives and situational contexts, considering potential moderating factors in the ethical decision-making process. Qualitative analyses were conducted based on direct observation of an AI/Robotics research team focusing on collaborative robotics for several months. Subsequently, semi-structured interviews with 16 team members were conducted. The entire process took place during the first semester of 2023. Observations were analyzed using an analysis grid, and the interviews underwent thematic analysis using Nvivo software. An initial finding involves identifying the ethical challenges that AI/robotics researchers confront, underlining a disparity between practical applications and theoretical considerations regarding ethical dilemmas in the realm of AI. Notably, researchers in AI prioritize the publication and recognition of their work, sparking the genesis of these ethical inquiries. Furthermore, this article illustrated that researchers tend to embrace a consequentialist ethical framework concerning safety (for humans engaging with robots/AI), worker autonomy in relation to robots, and the societal implications of labor (can robots displace jobs?). A second significant contribution entails proposing a model for ethical decision-making within the AI/Robotics research sphere. The model proposed adopts a process-oriented approach, delineating various research stages (topic proposal, hypothesis formulation, experimentation, conclusion, and valorization). Across these stages and the ethical queries, they entail, a comprehensive four-point comprehension of ethical decision-making is presented: recognition of the moral quandary; moral judgment, signifying the decision-maker's aptitude to discern the morally righteous course of action; moral intention, reflecting the ability to prioritize moral values above others; and moral behavior, denoting the application of moral intention to the situation. Variables such as political inclinations ((anti)-capitalism, environmentalism, veganism) seem to wield significant influence. Moreover, age emerges as a noteworthy moderating factor. AI and robotics researchers are continually confronted with ethical dilemmas during their research endeavors, necessitating thoughtful decision-making. The contribution involves introducing a contextually tailored model, derived from meticulous observations and insightful interviews, enabling the identification of factors that shape ethical decision-making at different stages of the research process.Keywords: ethical decision making, artificial intelligence, robotics, research
Procedia PDF Downloads 79
83 Technology of Electrokinetic Disintegration of Virginia Fanpetals (Sida hermaphrodita) Biomass in a Biogas Production System
Authors: Mirosław Krzemieniewski, Marcin Zieliński, Marcin Dębowski
Abstract:
Electrokinetic disintegration is one of the high-voltage electric methods. The design of systems is exceptionally simple. Biomass flows through a system of pipes with alongside mounted electrodes that generate an electric field. Discharges in the electric field deform cell walls and lead to their successive perforation, thereby making their contents easily available to bacteria. The spark-over occurs between electrode surface and pipe jacket which is the second pole and closes the circuit. The value of voltage ranges from 10 to 100kV. Electrodes are supplied by normal “power grid” monophase electric current (230V, 50Hz). Next, the electric current changes into direct current of 24V in modules serving for particular electrodes, and this current directly feeds the electrodes. The installation is completely safe because the value of generated current does not exceed 250mA and because conductors are grounded. Therefore, there is no risk of electric shock posed to the personnel, even in the case of failure or incorrect connection. Low values of the electric current mean small energy consumption by the electrode which is extremely low – only 35W per electrode – compared to other methods of disintegration. Pipes with electrodes with diameter of DN150 are made of acid-proof steel and connected from both sides with 90º elbows ended with flanges. The available S and U types of pipes enable very convenient fitting with system construction in the existing installations and rooms or facilitate space management in new applications. The system of pipes for electrokinetic disintegration may be installed horizontally, vertically, askew, on special stands or also directly on the wall of a room. The number of pipes and electrodes is determined by operating conditions as well as the quantity of substrate, type of biomass, content of dry matter, method of disintegration (single or circulatory), mounting site etc. The most effective method involves pre-treatment of substrate that may be pumped through the disintegration system on the way to the fermentation tank or recirculated in a buffered intermediate tank (substrate mixing tank). Biomass structure destruction in the process of electrokinetic disintegration causes shortening of substrate retention time in the tank and acceleration of biogas production. A significant intensification of the fermentation process was observed in the systems operating in the technical scale, with the greatest increase in biogas production reaching 18%. The secondary, but highly significant for the energetic balance, effect is a tangible decrease of energy input by agitators in tanks. It is due to reduced viscosity of the biomass after disintegration, and may result in energy savings reaching even 20-30% of the earlier noted consumption. Other observed phenomena include reduction in the layer of surface scum, reduced sewage capability for foaming and successive decrease in the quantity of bottom sludge banks. Considering the above, the system for electrokinetic disintegration seems a very interesting and valuable solutions meeting the offer of specialist equipment for the processing of plant biomass, including Virginia fanpetals, before the process of methane fermentation.Keywords: electrokinetic disintegration, biomass, biogas production, fermentation, Virginia fanpetals
Procedia PDF Downloads 377
82 R Statistical Software Applied in Reliability Analysis: Case Study of Diesel Generator Fans
Authors: Jelena Vucicevic
Abstract:
Reliability analysis represents a very important task in different areas of work. In any industry, it is crucial for maintenance, efficiency, safety and monetary costs. There are ways to calculate reliability, unreliability, failure density and failure rate. This paper introduces another way of calculating reliability by using the R statistical software. R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows and MacOS. The R programming environment is a widely used open source system for statistical analysis and statistical programming. It includes thousands of functions for the implementation of both standard and new statistical methods, and it does not limit the user to these built-in functions. The program has many benefits over other similar programs: it is free and, as open source software, constantly updated; it has a built-in help system; and the R language is easy to extend with user-written functions. The significance of this work lies in calculating time to failure, or reliability, in a new way, using statistics. A further advantage of this calculation is that no technical details are needed, and it can be applied to any part whose time to failure must be known in order to plan appropriate maintenance, maximize usage and minimize costs. In this case, calculations have been made on diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans. The ultimate goal was to decide whether or not to replace the working fans with a higher quality fan to prevent future failures. Seventy generators were studied. For each one, the number of hours of running time from its first being put into service until fan failure or until the end of the study (whichever came first) was recorded. The dataset consists of two variables: hours and status. Hours shows the running time of each fan, and status shows the event: 1 – failed, 0 – censored. Censored data represent cases that could not be tracked to the end of the study, so the fan could have either failed or survived afterwards. Obtaining the result in R was easy and quick. The program takes censored data into consideration and includes them in the results, which is not so easy in hand calculations. For the purpose of the paper, results from the R program have been compared to hand calculations in two different cases: censored data treated as failures and censored data treated as successes. Across all three cases, the results differ significantly. If the user decides to use R for further calculations, it will give more precise results when working with censored data than hand calculation.
Keywords: censored data, R statistical software, reliability analysis, time to failure
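As a sketch of the reliability calculation with right-censored observations described above (done in R in the paper), the following Python snippet implements a simple Kaplan-Meier-style estimate; the six (hours, status) pairs are hypothetical stand-ins for the 70-fan dataset.

```python
import numpy as np

def kaplan_meier(hours, status):
    """
    Kaplan-Meier reliability estimate R(t) with right-censored data.
    hours  : running time of each fan
    status : 1 = failed, 0 = censored (still running when the study ended)
    """
    hours, status = np.asarray(hours, float), np.asarray(status, int)
    order = np.argsort(hours, kind="stable")
    hours, status = hours[order], status[order]
    n_at_risk, survival, curve = len(hours), 1.0, []
    for t, event in zip(hours, status):
        if event == 1:                       # failures reduce the survival estimate
            survival *= (n_at_risk - 1) / n_at_risk
        n_at_risk -= 1                       # censored units simply leave the risk set
        curve.append((t, survival))
    return curve

# Hypothetical excerpt of the fan data: (hours, status)
data = [(4500, 1), (4600, 0), (11500, 1), (11500, 0), (15000, 0), (32000, 1)]
for t, r in kaplan_meier(*zip(*data)):
    print(f"t = {t:>7.0f} h, R(t) = {r:.3f}")
```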
Procedia PDF Downloads 401
81 Consumption and Diffusion Based Model of Tissue Organoid Development
Authors: Elena Petersen, Inna Kornienko, Svetlana Guryeva, Sergey Simakov
Abstract:
In vitro organoid cultivation requires the simultaneous provision of the necessary vascularization and nutrient perfusion of cells during organoid development. However, many aspects of this problem are still unsolved. The functionality of vascular network intergrowth is limited during the early stages of organoid development, since the vascular network only becomes functional in the final stages of in vitro organoid cultivation. Therefore, a microchannel network should be created in the hydrogel matrix during the early stages of organoid cultivation, aimed at conducting and maintaining the minimally required level of nutrient perfusion for all cells in the expanding organoid. The network configuration should be designed properly in order to exclude hypoxic and necrotic zones in the expanding organoid at all stages of its cultivation. In vitro vascularization is currently the main issue within the field of tissue engineering. As perfusion and oxygen transport have direct effects on cell viability and differentiation, researchers are currently limited to tissues of a few millimeters in thickness. These limitations are imposed by mass transfer and are defined by the balance between the metabolic demand of the cellular components in the system and the size of the scaffold. Current approaches include growth factor delivery, channeled scaffolds, perfusion bioreactors, microfluidics, cell co-cultures, cell functionalization, modular assembly, and in vivo systems. These approaches may improve cell viability or generate capillary-like structures within a tissue construct. Thus, there is a fundamental disconnect between defining the metabolic needs of tissue through quantitative measurements of oxygen and nutrient diffusion and the potential ease of integration into host vasculature for future in vivo implantation. A model is proposed for growth prognosis of organoid perfusion based on joint simulations of general nutrient diffusion, nutrient diffusion into the hydrogel matrix through the contact surfaces and microchannel walls, and nutrient consumption by the cells of the expanding organoid, including biomatrix contraction during tissue development, which is associated with a changing consumption rate of the growing organoid cells. The model allows computing an effective microchannel network design that provides the minimally required level of nutrient concentration in all parts of the growing organoid. It can be used for preliminary planning of microchannel network designs and for simulating the nutrient supply rate depending on the stage of organoid development.
Keywords: 3D model, consumption model, diffusion, spheroid, tissue organoid
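A minimal sketch of a diffusion-consumption balance of the kind the model addresses: a 1-D explicit finite-difference solution of nutrient diffusion with Michaelis-Menten uptake between two supplying microchannels. All parameter values are assumed for illustration only.

```python
import numpy as np

# dc/dt = D * d2c/dx2 - q(c), with q(c) a Michaelis-Menten uptake term
D     = 1e-9           # diffusion coefficient, m^2/s (assumed)
q_max = 5e-4           # maximum volumetric uptake, mol/(m^3*s) (assumed)
K_m   = 0.05           # half-saturation constant, mol/m^3 (assumed)
L, nx = 2e-3, 101      # 2 mm hydrogel slab, number of grid points
dx    = L / (nx - 1)
dt    = 0.4 * dx**2 / D                  # below the explicit-scheme stability limit
c     = np.zeros(nx)
c[0] = c[-1] = 0.2                       # nutrient supplied at both channel walls, mol/m^3

for _ in range(20_000):
    lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    uptake = q_max * c / (K_m + c)
    c[1:-1] += dt * (D * lap[1:-1] - uptake[1:-1])
    c[0] = c[-1] = 0.2                   # Dirichlet boundaries at the microchannels

print(f"minimum nutrient concentration in the slab: {c.min():.4f} mol/m^3")
```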
Procedia PDF Downloads 308
80 Bio-Hub Ecosystems: Profitability through Circularity for Sustainable Forestry, Energy, Agriculture and Aquaculture
Authors: Kimberly Samaha
Abstract:
The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding biomass as a feedstock for power plants. Yet the lack of an economically-viable business model for bioenergy facilities has resulted in the continuation of idled and decommissioned plants. This study analyzed data and submittals to the Born Global Maine Innovation Challenge. The Innovation Challenge was a global innovation challenge to identify process innovations that could address a ‘whole-tree’ approach of maximizing the products, byproducts, energy value and process slip-streams into a circular zero-waste design. Participating companies were at various stages of developing bioproducts and included biofuels, lignin-based products, carbon capture platforms and biochar used as both a filtration medium and as a soil amendment product. This case study shows the QCA (Qualitative Comparative Analysis) methodology of the prequalification process and the resulting techno-economic model that was developed for the maximizing profitability of the Bio-Hub Ecosystem through continuous expansion of system waste streams into valuable process inputs for co-hosts. A full site plan for the integration of co-hosts (biorefinery, land-based shrimp and salmon aquaculture farms, a tomato green-house and a hops farm) at an operating forestry-based biomass to energy plant in West Enfield, Maine USA. This model and process for evaluating the profitability not only proposes models for integration of forestry, aquaculture and agriculture in cradle-to-cradle linkages of what have typically been linear systems, but the proposal also allows for the early measurement of the circularity and impact of resource use and investment risk mitigation, for these systems. In this particular study, profitability is assessed at two levels CAPEX (Capital Expenditures) and in OPEX (Operating Expenditures). Given that these projects start with repurposing facilities where the industrial level infrastructure is already built, permitted and interconnected to the grid, the addition of co-hosts first realizes a dramatic reduction in permitting, development times and costs. In addition, using the biomass energy plant’s waste streams such as heat, hot water, CO₂ and fly ash as valuable inputs to their operations and a significant decrease in the OPEX costs, increasing overall profitability to each of the co-hosts bottom line. This case study utilizes a proprietary techno-economic model to demonstrate how utilizing waste streams of a biomass energy plant and/or biorefinery, results in significant reduction in OPEX for both the biomass plants and the agriculture and aquaculture co-hosts. Economically viable Bio-Hubs with favorable environmental and community impacts may prove critical in garnering local and federal government support for pilot programs and more wide-scale adoption, especially for those living in severely economically depressed rural areas where aging industrial sites have been shuttered and local economies devastated.Keywords: bio-economy, biomass energy, financing, zero-waste
Procedia PDF Downloads 134
79 Screening for Larvicidal Activity of Aqueous and Ethanolic Extracts of Fourteen Selected Plants and Formulation of a Larvicide against Aedes aegypti (Linn.) and Aedes albopictus (Skuse) Larvae
Authors: Michael Russelle S. Alvarez, Noel S. Quiming, Francisco M. Heralde
Abstract:
This study aims to: a) obtain ethanolic (95% EtOH) and aqueous extracts of Selaginella elmeri, Christella dentata, Elatostema sinnatum, Curculigo capitulata, Euphorbia hirta, Murraya koenigii, Alpinia speciosa, Cymbopogon citratus, Eucalyptus globulus, Jatropha curcas, Psidium guajava, Gliricidia sepium, Ixora coccinea and Capsicum frutescens and screen them for larvicidal activities against Aedes aegypti (Linn.) and Aedes albopictus (Skuse) larvae; b) to fractionate the most active extract and determine the most active fraction; c) to determine the larvicidal properties of the most active extract and fraction against by computing their percentage mortality, LC50, and LC90 after 24 and 48 hours of exposure; and d) to determine the nature of the components of the active extracts and fractions using phytochemical screening. Ethanolic (95% EtOH) and aqueous extracts of the selected plants will be screened for potential larvicidal activity against Ae. aegypti and Ae. albopictus using standard procedures and 1% malathion and a Piper nigrum based ovicide-larvicide by the Department of Science and Technology as positive controls. The results were analyzed using One-Way ANOVA with Tukey’s and Dunnett’s test. The most active extract will be subjected to partial fractionation using normal-phase column chromatography, and the fractions subsequently screened to determine the most active fraction. The most active extract and fraction were subjected to dose-response assay and probit analysis to determine the LC50 and LC90 after 24 and 48 hours of exposure. The active extracts and fractions will be screened for phytochemical content. The ethanolic extracts of C. citratus, E. hirta, I. coccinea, G. sepium, M. koenigii, E globulus, J. curcas and C. frutescens exhibited significant larvicidal activity, with C. frutescens being the most active. After fractionation, the ethyl acetate fraction was found to be the most active. Phytochemical screening of the extracts revealed the presence of alkaloids, tannins, indoles and steroids. A formulation using talcum powder–300 mg fraction per 1 g talcum powder–was made and again tested for larvicidal activity. At 2 g/L, the formulation proved effective in killing all of the test larvae after 24 hours.Keywords: larvicidal activity screening, partial purification, dose-response assay, capsicum frutescens
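A hedged Python sketch of the probit dose-response analysis used to derive LC50 and LC90 values like those discussed above: a probit model is fitted by maximum likelihood to hypothetical mortality counts (the concentrations and counts shown are not the study's data).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical dose-response data: concentration (ppm), larvae tested, larvae dead
conc = np.array([62.5, 125, 250, 500, 1000])
n    = np.array([25, 25, 25, 25, 25])
dead = np.array([3, 7, 13, 19, 24])
x    = np.log10(conc)

def neg_log_lik(params):
    a, b = params
    p = norm.cdf(a + b * x)                  # probit model: P(death) = Phi(a + b*log10(dose))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(dead * np.log(p) + (n - dead) * np.log(1 - p))

a, b = minimize(neg_log_lik, x0=[0.0, 1.0]).x
lc50 = 10 ** ((norm.ppf(0.5) - a) / b)       # dose where Phi(a + b*log10(d)) = 0.5
lc90 = 10 ** ((norm.ppf(0.9) - a) / b)
print(f"LC50 ~ {lc50:.0f} ppm, LC90 ~ {lc90:.0f} ppm")
```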
Procedia PDF Downloads 329
78 Monitoring Potential Temblor Localities as a Supplemental Risk Control System
Authors: Mikhail Zimin, Svetlana Zimina, Maxim Zimin
Abstract:
Without question, the basic method of preventing human and material losses is providing adequate strength of constructions. At the same time, seismic load has a stochastic character, so there is always a small danger of earthquake forces exceeding the selected design load. This risk is very low, but the consequences of such events may be extremely serious. Occasional mistakes in seismic zoning, soil conditions changing before temblors, and failure to take into account hazardous natural phenomena caused by earthquakes are also very dangerous. Besides, it is known that temblors detrimentally affect the environmental situation in regions where they occur, resulting in panic and worsening various disease courses. This may lead to mistakes by personnel of hazardous production facilities, such as gas and oil production and distribution, which may provoke severe accidents. In addition, unlike buildings, gas and oil pipelines often have long mileage and cross many perilous zones, which increases the risk of heavy accidents. In such cases, complex monitoring of potential earthquake localities would be relevant. Even though the number of successful real-time earthquake forecasts is not great, it is well in excess of what would be expected under random guessing. The experimental time-lapse study and analysis performed consist of searching for seismic, biological, meteorological, and light earthquake precursors; processing such data with the help of fuzzy sets; collecting weather information; utilizing a terrain database; and computing the risk of slope processes under a temblor in a given setting. The work was done in a real-time environment, and broadly acceptable results were obtained. Observations from already in-place seismic recording systems are used. Furthermore, a look-back study of precursors of known earthquakes is done. Situations before the Ashkhabad, Tashkent, and Haicheng seismic events are analyzed, and fair findings are obtained. Results of earthquake forecasts can be used for predicting dangerous natural phenomena caused by temblors, such as avalanches and mudslides. They may also be utilized for prophylaxis of some diseases and their complications. Relevant software has been developed as well. It should be emphasized that such control does not require serious financial expenses and can be performed by a small group of professionals. Thus, complex monitoring of potential earthquake localities, including short-term earthquake forecasts and analysis of possible hazardous consequences of temblors, may further the safety of pipeline facilities.
Keywords: risk, earthquake, monitoring, forecast, precursor
Procedia PDF Downloads 24
77 Data Quality and Associated Factors on Regular Immunization Programme at Ararso District: Somali Region- Ethiopia
Authors: Eyob Seife, Molla Alemayaehu, Tesfalem Teshome, Bereket Seyoum, Behailu Getachew
Abstract:
Globally, immunization averts between 2 and 3 million deaths yearly, but vaccine-preventable diseases still account for a large share of deaths in Sub-Saharan African countries and for the majority of under-five deaths yearly, which indicates the need for consistent and timely information for evidence-based decisions to save the lives of these vulnerable groups. However, ensuring data of sufficient quality and promoting an information-use culture at the point of collection remain critical and challenging, especially in remote areas. The Ararso district was selected based on the hypothesis that there is a difference between reported and recounted immunization data. Data quality depends on different factors, among which organizational, behavioral, technical and contextual factors are the most frequently mentioned. A cross-sectional quantitative study was conducted in September 2022 in the Ararso district. The study used the World Health Organization (WHO) recommended data quality self-assessment (DQS) tools. Immunization tally sheets, registers and reporting documents were reviewed at 4 health facilities (1 health center and 3 health posts) of primary health care units for one fiscal year (12 months) to determine the accuracy ratio, availability and timeliness of reports. The data was collected by trained DQS assessors to explore the quality of monitoring systems at health posts, health centers, and at the district health office. A quality index (QI) and the availability and timeliness of reports were assessed. Accuracy ratios were formulated for the first and third doses of pentavalent vaccines, fully immunized (FI), TT2+ and the first dose of measles-containing vaccine (MCV). In this study, facility-level results showed poor timeliness at all levels, and both over-reporting and under-reporting were observed when computing the accuracy ratio of registrations to the health post reports found at health centers for almost all antigens verified. The quality index (QI) of all facilities also showed poor results. Most of the verified immunization data accuracy ratios were found to be relatively better than the quality index and the timeliness of reports. So attention should be given to improving the capacity of staff, the timeliness of reports and the quality of monitoring system components, namely recording, reporting, archiving, data analysis and using information for decisions at all levels, especially in remote areas.
Keywords: accuracy ratio, Ararso district, quality of monitoring system, regular immunization program, timeliness of reports, Somali region-Ethiopia
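A small sketch of the DQS accuracy (verification) ratio computation referred to above; the recounted and reported totals are hypothetical.

```python
def accuracy_ratio(recounted: int, reported: int) -> float:
    """DQS accuracy (verification) ratio: doses recounted in registers / doses reported."""
    return recounted / reported if reported else float("nan")

# Hypothetical yearly Penta-1 figures for one health post
recounted, reported = 418, 465
ratio = accuracy_ratio(recounted, reported)
print(f"accuracy ratio = {ratio:.2f}",
      "(below 1.0 suggests over-reporting; above 1.0 suggests under-reporting)")
```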
Procedia PDF Downloads 73
76 Decision Making on Smart Energy Grid Development for Availability and Security of Supply Achievement Using Reliability Merits
Authors: F. Iberraken, R. Medjoudj, D. Aissani
Abstract:
The development of the smart grids concept is built around two separate definitions, namely the European one, oriented towards sustainable development, and the American one, oriented towards reliability and security of supply. In this paper, we have investigated reliability merits enabling decision-makers to provide a high quality of service. It is based on system behavior, using interruption and failure modeling and forecasting on the one hand, and on the contribution of information and communication technologies (ICT) to mitigating catastrophic events such as blackouts on the other hand. It was found that this concept has been adopted by developing and emerging countries for short- and medium-term planning, followed by the sustainability concept in long-term planning. This work highlights the reliability merits, namely benefits, opportunities, costs and risks, considered as consistent units for measuring power customer satisfaction. From the decision-making point of view, we have used the analytic hierarchy process (AHP) to achieve customer satisfaction, based on the reliability merits and the contribution of such energy resources. Nowadays, fossil and nuclear resources dominate energy production, but great advances have already been made towards cleaner ones. It was demonstrated that these resources are not only environmentally but also economically and socially sustainable. The paper is organized as follows: Section one is devoted to the introduction, where an implicit review of smart grids development is given for the two main concepts (for the USA and European countries). The AHP method and the BOCR developments of reliability merits against power customer satisfaction are developed in section two. The benefits were expressed by the high level of availability, the applicability of maintenance actions and power quality. Opportunities were highlighted by the implementation of ICT in data transfer and processing, the mastering of peak demand control, the decentralization of production and power system management in default conditions. Costs were evaluated using cost-benefit analysis, including the investment expenditures in network security, as the network becomes a target for hackers and terrorists, and the profits of operating as decentralized systems with reduced energy not supplied, thanks to the availability of storage units fed from renewable resources and to power line carrier (CPL) communication enabling the power dispatcher to manage load shedding optimally. For risks, we have raised the questions of citizens' willingness to contribute financially to the system and of utility restructuring. What is their degree of agreement with the guarantees proposed by the managers regarding information integrity? From a technical point of view, do they have sufficient information and knowledge to handle a smart home and a smart system? In section three, the AHP method is applied to achieve power customer satisfaction based on the main energy resources as alternatives, using knowledge from a country that is highly advanced in its energy transition. Results and discussion are given in section four. We conclude that the choice of a given resource depends on the attitude of the decision maker (prudent, optimistic or pessimistic), and that the status quo is neither sustainable nor satisfactory.
Keywords: reliability, AHP, renewable energy resources, smart grids
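To make the AHP step concrete, the sketch below derives priority weights and a consistency ratio from a pairwise comparison matrix over the four BOCR merits; the judgments in the matrix are assumed for illustration and are not the study's elicited values.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Principal-eigenvector priority weights and consistency ratio for an AHP matrix."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # Saaty's random index
    return w, ci / ri

# Hypothetical pairwise comparison of the BOCR merits: benefits, opportunities, costs, risks
B = [[1,   2,   4,   3],
     [1/2, 1,   3,   2],
     [1/4, 1/3, 1,   1/2],
     [1/3, 1/2, 2,   1]]
weights, cr = ahp_priorities(B)
print(np.round(weights, 3), f"CR = {cr:.3f}")
```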
Procedia PDF Downloads 443
75 Distribution System Modelling: A Holistic Approach for Harmonic Studies
Authors: Stanislav Babaev, Vladimir Cuk, Sjef Cobben, Jan Desmet
Abstract:
The procedures for performing harmonic studies for medium-voltage distribution feeders have been relatively mature topics since the early 1980s. The efforts of various electric power engineers and researchers were mainly focused on handling large harmonic non-linear loads connected sparsely at several buses of medium-voltage feeders. In order to assess the impact of these loads on the voltage quality of the distribution system, specific modeling and simulation strategies were proposed. These methodologies could deliver reasonable estimation accuracy given the requirements of minimal computational effort and reduced complexity. To uphold these requirements, certain analysis assumptions have been made, which became de facto standards for establishing guidelines for harmonic analysis. Among others, typical assumptions include balanced conditions of the study and the negligible impact of the impedance frequency characteristics of various power system components. In the latter, skin and proximity effects are usually omitted, and resistance and reactance values are modeled based on theoretical equations. Further, the simplifications of the modelling routine have led to the commonly accepted practice of neglecting phase-angle diversity effects. This is mainly associated with the developed load models, which only in a handful of cases represent the complete harmonic behavior of a certain device and account for the harmonic interaction between grid harmonic voltages and harmonic currents. While these modelling practices were proven to be reasonably effective for medium-voltage levels, similar approaches have been adopted for low-voltage distribution systems. Given modern conditions, the massive increase in the usage of residential electronic devices, the recent and ongoing boom of electric vehicles, and the large-scale installation of distributed solar power, the harmonics in current low-voltage grids are characterized by a high degree of variability and demonstrate sufficient diversity to lead to a certain level of cancellation effects. It is obvious that new modelling algorithms overcoming the previously made assumptions have to be adopted. In this work, a simulation approach aimed at addressing some of these typical assumptions is proposed. A practical low-voltage feeder is modeled in PowerFactory. In order to demonstrate the importance of the diversity effect and harmonic interaction, previously developed measurement-based models of a photovoltaic inverter and a battery charger are used as loads. A Python-based script that supplies a varying background voltage distortion profile and the associated current harmonic response of the loads is used as the core of the unbalanced simulation. Furthermore, the impact of uncertainty in the feeder frequency-impedance characteristics on total harmonic distortion levels is shown, along with scenarios involving linear resistive loads, which further alter the impedance of the system. The comparative analysis demonstrates significant differences from cases in which all the assumptions are in place, and the results indicate that new modelling and simulation procedures need to be adopted for low-voltage distribution systems with high penetration of non-linear loads and renewable generation.
Keywords: electric power system, harmonic distortion, power quality, public low-voltage network, harmonic modelling
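The phase-angle diversity and cancellation effects discussed above can be illustrated with a short phasor-summation sketch; the harmonic magnitudes and angles are assumed values, not measured load data.

```python
import numpy as np

# Hypothetical 5th-harmonic current phasors (A, degrees) of four residential loads
magnitudes = np.array([1.2, 0.8, 1.5, 0.9])
angles_deg = np.array([10.0, 150.0, -120.0, 80.0])

phasors = magnitudes * np.exp(1j * np.radians(angles_deg))
arithmetic_sum = magnitudes.sum()        # worst case: all angles aligned
vector_sum = abs(phasors.sum())          # with phase-angle diversity

print(f"arithmetic sum = {arithmetic_sum:.2f} A")
print(f"vector sum     = {vector_sum:.2f} A")
print(f"cancellation   = {1 - vector_sum / arithmetic_sum:.0%}")
```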
Procedia PDF Downloads 162
74 Structure Clustering for Milestoning Applications of Complex Conformational Transitions
Authors: Amani Tahat, Serdal Kirmizialtin
Abstract:
Trajectory fragment methods such as Markov State Models (MSM), Milestoning (MS) and Transition Path Sampling are the prime choice for extending the timescale of all-atom Molecular Dynamics simulations. In these approaches, a set of structures that covers the accessible phase space has to be chosen a priori using cluster analysis. Structural clustering serves to partition the conformational space into natural subgroups based on similarity; it is an essential statistical methodology for analyzing the numerous sets of empirical data produced by Molecular Dynamics (MD) simulations. The local transition kernel among these clusters is later used to connect the metastable states, using a Markovian kinetic model in MSM and a non-Markovian model in MS. The choice of clustering approach in constructing such a kernel is crucial, since the high dimensionality of biomolecular structures might easily confuse the identification of clusters when using the traditional hierarchical clustering methodology. Of particular interest is the case of MS where the milestones are very close to each other; accurate determination of the milestone identity of the trajectory then becomes a challenging issue. Throughout this work, we present two cluster analysis methods applied to the cis-trans isomerism of the dinucleotide AA. The choice of nucleic acids over the commonly used proteins for studying cluster analysis is twofold: i) the energy landscape is rugged, hence transitions are more complex, enabling a more realistic model for studying conformational transitions; ii) the conformational space of nucleic acids is high dimensional. A diverse set of internal coordinates is necessary to describe the metastable states in nucleic acids, posing a challenge in studying the conformational transitions. Herein, we need improved clustering methods that accurately identify the AA structure in its metastable states in a robust way for a wide range of confused data conditions. The single-linkage approach of the hierarchical clustering available in the GROMACS MD package is the first clustering methodology applied to our data. A Self-Organizing Map (SOM) neural network, also known as a Kohonen network, is the second data clustering methodology. The performance of the neural network and the hierarchical clustering method is compared by computing the mean first passage times for the cis-trans conformational rates. Our hope is that this study provides insight into the complexities of, and the need for, determining the appropriate clustering algorithm for kinetic analysis. Our results can improve the effectiveness of decisions based on clustering confused empirical data in studying conformational transitions in biomolecules.
Keywords: milestoning, self organizing map, single linkage, structure clustering
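A compact sketch of the first clustering step on synthetic dihedral-angle data using single-linkage hierarchical clustering (SciPy); a SOM/Kohonen map would instead partition the same frames onto a grid of prototype units. The two Gaussian "states" are illustrative and not the actual AA trajectory.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical dihedral-angle features (degrees) sampled around two metastable states
rng = np.random.default_rng(1)
cis   = rng.normal([0.0, 60.0],   8.0, size=(200, 2))
trans = rng.normal([180.0, 60.0], 8.0, size=(200, 2))
frames = np.vstack([cis, trans])

# Single-linkage hierarchical clustering (as in a GROMACS-style analysis)
Z = linkage(frames, method="single")
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```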
Procedia PDF Downloads 224
73 Partial Discharge Characteristics of Free-Moving Particles in HVDC-GIS
Authors: Philipp Wenger, Michael Beltle, Stefan Tenbohlen, Uwe Riechert
Abstract:
The integration of renewable energy introduces new challenges to the transmission grid, as the power generation is located far from load centers. The associated necessary long-range power transmission increases the demand for high voltage direct current (HVDC) transmission lines and DC distribution grids. HVDC gas-insulated switchgears (GIS) are considered being a key technology, due to the combination of the DC technology and the long operation experiences of AC-GIS. To ensure long-term reliability of such systems, insulation defects must be detected in an early stage. Operational experience with AC systems has proven evidence, that most failures, which can be attributed to breakdowns of the insulation system, can be detected and identified via partial discharge (PD) measurements beforehand. In AC systems the identification of defects relies on the phase resolved partial discharge pattern (PRPD). Since there is no phase information within DC systems this method cannot be transferred to DC PD diagnostic. Furthermore, the behaviour of e.g. free-moving particles differs significantly at DC: Under the influence of a constant direct electric field, charge carriers can accumulate on particles’ surfaces. As a result, a particle can lift-off, oscillate between the inner conductor and the enclosure or rapidly bounces at just one electrode, which is known as firefly motion. Depending on the motion and the relative position of the particle to the electrodes, broadband electromagnetic PD pulses are emitted, which can be recorded by ultra-high frequency (UHF) measuring methods. PDs are often accompanied by light emissions at the particle’s tip which enables optical detection. This contribution investigates PD characteristics of free moving metallic particles in a commercially available 300 kV SF6-insulated HVDC-GIS. The influences of various defect parameters on the particle motion and the PD characteristic are evaluated experimentally. Several particle geometries, such as cylinder, lamella, spiral and sphere with different length, diameter and weight are determined. The applied DC voltage is increased stepwise from inception voltage up to UDC = ± 400 kV. Different physical detection methods are used simultaneously in a time-synchronized setup. Firstly, the electromagnetic waves emitted by the particle are recorded by an UHF measuring system. Secondly, a photomultiplier tube (PMT) detects light emission with a wavelength in the range of λ = 185…870 nm. Thirdly, a high-speed camera (HSC) tracks the particle’s motion trajectory with high accuracy. Furthermore, an electrically insulated electrode is attached to the grounded enclosure and connected to a current shunt in order to detect low frequency ion currents: The shunt measuring system’s sensitivity is in the range of 10 nA at a measuring bandwidth of bw = DC…1 MHz. Currents of charge carriers, which are generated at the particle’s tip migrate through the gas gap to the electrode and can be recorded by the current shunt. All recorded PD signals are analyzed in order to identify characteristic properties of different particles. This includes e.g. repetition rates and amplitudes of successive pulses, characteristic frequency ranges and detected signal energy of single PD pulses. Concluding, an advanced understanding of underlying physical phenomena particle motion in direct electric field can be derived.Keywords: current shunt, free moving particles, high-speed imaging, HVDC-GIS, UHF
Procedia PDF Downloads 163
72 Optimizing Heavy-Duty Green Hydrogen Refueling Stations: A Techno-Economic Analysis of Turbo-Expander Integration
Authors: Christelle Rabbat, Carole Vouebou, Sary Awad, Alan Jean-Marie
Abstract:
Hydrogen has been proven to be a viable alternative to standard fuels as it is easy to produce and only generates water vapour and zero carbon emissions. However, despite the hydrogen benefits, the widespread adoption of hydrogen fuel cell vehicles and internal combustion engine vehicles is impeded by several challenges. The lack of refueling infrastructures remains one of the main hindering factors due to the high costs associated with their design, construction, and operation. Besides, the lack of hydrogen vehicles on the road diminishes the economic viability of investing in refueling infrastructure. Simultaneously, the absence of accessible refueling stations discourages consumers from adopting hydrogen vehicles, perpetuating a cycle of limited market uptake. To address these challenges, the implementation of adequate policies incentivizing the use of hydrogen vehicles and the reduction of the investment and operation costs of hydrogen refueling stations (HRS) are essential to put both investors and customers at ease. Even though the transition to hydrogen cars has been rather slow, public transportation companies have shown a keen interest in this highly promising fuel. Besides, their hydrogen demand is easier to predict and regulate than personal vehicles. Due to the reduced complexity of designing a suitable hydrogen supply chain for public vehicles, this sub-sector could be a great starting point to facilitate the adoption of hydrogen vehicles. Consequently, this study will focus on designing a chain of on-site green HRS for the public transportation network in Nantes Metropole leveraging the latest relevant technological advances aiming to reduce the costs while ensuring reliability, safety, and ease of access. To reduce the cost of HRS and encourage their widespread adoption, a network of 7 H35-T40 HRS has been designed, replacing the conventional J-T valves with turbo-expanders. Each station in the network has a daily capacity of 1,920 kg. Thus, the HRS network can produce up to 12.5 tH2 per day. The detailed cost analysis has revealed a CAPEX per station of 16.6 M euros leading to a network CAPEX of 116.2 M euros. The proposed station siting prioritized Nantes metropole’s 5 bus depots and included 2 city-centre locations. Thanks to the turbo-expander technology, the cooling capacity of the proposed HRS is 19% lower than that of a conventional station equipped with J-T valves, resulting in significant CAPEX savings estimated at 708,560 € per station, thus nearly 5 million euros for the whole HRS network. Besides, the turbo-expander power generation ranges from 7.7 to 112 kW. Thus, the power produced can be used within the station or sold as electricity to the main grid, which would, in turn, maximize the station’s profit. Despite the substantial initial investment required, the environmental benefits, cost savings, and energy efficiencies realized through the transition to hydrogen fuel cell buses and the deployment of HRS equipped with turbo-expanders offer considerable advantages for both TAN and Nantes Metropole. These initiatives underscore their enduring commitment to fostering green mobility and combatting climate change in the long term.Keywords: green hydrogen, refueling stations, turbo-expander, heavy-duty vehicles
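A back-of-envelope aggregation of the cost figures quoted above (per-station CAPEX, turbo-expander saving, number of stations), reproducing the network-level totals; treat it as a sanity check rather than the study's techno-economic model.

```python
# Figures quoted in the abstract
capex_per_station = 16.6e6        # EUR
stations = 7
saving_per_station = 708_560      # EUR, from replacing J-T valves with turbo-expanders

print(f"network CAPEX   : {stations * capex_per_station / 1e6:.1f} M EUR")
print(f"network savings : {stations * saving_per_station / 1e6:.2f} M EUR")
```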
Procedia PDF Downloads 57
71 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes
Authors: Stefan Papastefanou
Abstract:
Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was determined beforehand by formal rules. Knowledge was presented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems typically have not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It has to be examined how such products of machine learning models can and should be protected by IP law and, for the purpose of this paper, by patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recursive neural network methods and deep learning, but this approach can be more easily described by referring to the evolution of natural organisms, and with increasing computational power, the genetic breeding method, as a subset of evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (according to the world’s most significant patent law regimes such as China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the question of the state of the art and the associated obviousness of the solution arise in current patenting processes. Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. The inventor of a patent application must be a natural person or a group of persons according to the current legal situation in most patent law regimes. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing to part of the inventive concept. However, when machine learning or the AI algorithm has contributed to part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches include identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability
Procedia PDF Downloads 109
70 Combination of Modelling and Environmental Life Cycle Assessment Approach for Demand Driven Biogas Production
Authors: Juan A. Arzate, Funda C. Ertem, M. Nicolas Cruz-Bournazou, Peter Neubauer, Stefan Junne
Abstract:
One of the biggest challenges the world faces today is global warming, caused by greenhouse gases (GHGs) from the combustion of fossil fuels for energy generation. In order to mitigate climate change, the European Union has committed to reducing GHG emissions to 80–95% below 1990 levels by the year 2050. Renewable technologies are vital to diminish energy-related GHG emissions. Since water and biomass are limited resources, the largest contributions to renewable energy (RE) systems will have to come from wind and solar power. Nevertheless, high proportions of fluctuating RE will present a number of challenges, especially regarding the need to balance the variable energy demand with the weather-dependent fluctuation of energy supply. Biogas plants would therefore play an important role in this context, since they are easily adaptable. Feedstock availability varies locally or seasonally; however, there is a lack of knowledge about how biogas plants should be operated in a stable manner on local feedstock. This problem may be prevented through suitable control strategies. Such strategies require the development of convenient mathematical models which fairly describe the main processes. Modelling allows us to predict the system behavior of biogas plants when different feedstocks are used with different loading rates. Life cycle assessment (LCA) is a technique for analyzing the environmental impacts of a product from its creation to its disposal, and it is highly recommended as a decision-making tool. In order to achieve suitable strategies, flexible energy generation by biogas plants, a secure production process and the maximization of environmental benefits can be obtained by combining process modelling and LCA approaches. For this reason, this study focuses on a biogas plant which flexibly generates the required energy from the co-digestion of maize, grass and cattle manure while emitting the lowest amount of GHGs. To achieve this goal, the AMOCO model was combined with LCA. The program was structured in Matlab to simulate any biogas process based on the AMOCO model and was combined with the equations necessary to obtain the climate change, acidification and eutrophication potentials of the whole production system based on the ReCiPe midpoint v.1.06 methodology. The developed simulation was optimized based on real data from operating biogas plants and existing literature research. The results prove that the AMOCO model can successfully imitate the system behavior of biogas plants and the time required for the process to adapt in order to generate the demanded energy from the available feedstock. Combination with the LCA approach provided the opportunity to keep the resulting emissions from operation at the lowest possible level. This allows for a prediction of the process when the feedstock utilization supports the establishment of closed material circles within a smart bio-production grid, under the constraint of minimal drawbacks for the environment and maximal sustainability.
Keywords: AMOCO model, GHG emissions, life cycle assessment, modelling
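To make the coupling of a process model with LCA characterization more concrete, the sketch below shows the general pattern under stated assumptions: a toy methane-production function stands in for the AMOCO model, and the characterization factors are placeholders rather than actual ReCiPe midpoint v.1.06 values.

```python
import numpy as np

def biogas_methane_rate(load_rate, b0=0.35, k=0.15):
    """Toy first-order kinetics for specific methane production
    (m3 CH4 per kg volatile solids); NOT the AMOCO model itself."""
    return b0 * (1.0 - np.exp(-k * load_rate))

# Placeholder characterization factors (illustrative only, not ReCiPe values)
CF = {"climate_change_kgCO2eq_per_kgCH4_leaked": 25.0,
      "acidification_kgSO2eq_per_kgNH3": 1.9}

def lca_midpoints(ch4_produced_kg, leak_fraction=0.02, nh3_emitted_kg=0.0):
    """Combine the process output of one operating scenario with
    characterization factors to obtain midpoint impact potentials."""
    ch4_leaked = ch4_produced_kg * leak_fraction
    return {
        "climate_change_kgCO2eq": ch4_leaked * CF["climate_change_kgCO2eq_per_kgCH4_leaked"],
        "acidification_kgSO2eq": nh3_emitted_kg * CF["acidification_kgSO2eq_per_kgNH3"],
    }

# Example: evaluate one assumed scenario
ch4 = 1000 * biogas_methane_rate(load_rate=3.0)
print(lca_midpoints(ch4))
```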
Procedia PDF Downloads 190
69 Improving Student Learning in a Math Bridge Course through Computer Algebra Systems
Authors: Alejandro Adorjan
Abstract:
Universities are motivated to understand the factors contributing to the low retention of engineering undergraduates. While the number of precollege students entering engineering increases, the number of engineering graduates continues to decrease, and attrition rates for engineering undergraduates remain high. Calculus 1 (C1) is the entry point of most undergraduate Engineering Science programs and often a prerequisite for Computing Curricula courses. Mathematics continues to be a major hurdle for engineering students, and many students who drop out from engineering cite Calculus specifically as one of the most influential factors in that decision. In this context, creating course activities that increase retention and motivate students to obtain better final results is a challenge. In order to develop several competencies in our Software Engineering students, Calculus 1 at Universidad ORT Uruguay focuses on capacities such as synthesis, abstraction, and problem solving (based on the ACM/AIS/IEEE). Every semester we try to reflect on our practice and to answer the following research question: what kind of teaching approach in Calculus 1 can we design to retain students and obtain better results? Since 2010, Universidad ORT Uruguay has offered a six-week, non-compulsory summer bridge course of preparatory math (to bridge the math gap between high school and university). Last semester was the first time the Department of Mathematics offered the course while students were enrolled in C1. Traditional lectures in this bridge course led students to simply transcribe notes from the blackboard. Last semester we proposed a hands-on lab course using Geogebra (interactive geometry and Computer Algebra System (CAS) software) as a math-driven development tool. Students worked in a computer laboratory class and developed most of the tasks and topics in Geogebra. As a result of this approach, several pros and cons were found. The weekly hours of mathematics were excessive for students and, as the course was non-compulsory, attendance decreased with time. Nevertheless, this activity succeeded in improving final test results, and most students expressed the pleasure of working with this methodology. This technology-oriented teaching approach strengthens the math competencies students need for Calculus 1 and improves student performance, engagement, and self-confidence. It is important as teachers to reflect on our practice, including innovative proposals with the objective of engaging students, increasing retention and obtaining better results. The high degree of motivation and engagement of participants with this methodology exceeded our initial expectations, so we plan to experiment with more groups during the summer so as to validate the preliminary results.
Keywords: calculus, engineering education, PreCalculus, Summer Program
Procedia PDF Downloads 291
68 The Design of a Phase I/II Trial of Neoadjuvant RT with Interdigitated Multiple Fractions of Lattice RT for Large High-grade Soft-Tissue Sarcoma
Authors: Georges F. Hatoum, Thomas H. Temple, Silvio Garcia, Xiaodong Wu
Abstract:
Soft Tissue Sarcomas (STS) represent a diverse group of malignancies with heterogeneous clinical and pathological features. The treatment of extremity STS aims to achieve optimal local tumor control, improved survival, and preservation of limb function. The National Comprehensive Cancer Network guidelines, based on the accumulated clinical data, recommend radiation therapy (RT) in conjunction with limb-sparing surgery for large, high-grade STS measuring greater than 5 cm in size. Such a treatment strategy can offer a cure for patients. However, when recurrence occurs (in nearly half of patients), the prognosis is poor, with a median survival of 12 to 15 months and only palliative treatment options available. Spatially fractionated radiotherapy (SFRT), with a long history of treating bulky tumors as a non-mainstream technique, has gained new attention in recent years due to its unconventional therapeutic effects, such as bystander/abscopal effects. Combining a single fraction of GRID, the original form of SFRT, with conventional RT was shown to marginally increase the rate of pathological necrosis, which has been recognized to correlate positively with overall survival. In an effort to consistently increase the pathological necrosis rate to above 90%, multiple fractions of Lattice RT (LRT), a newer form of 3D SFRT, interdigitated with standard RT as neoadjuvant therapy were delivered in a preliminary clinical setting. With favorable results of an over 95% necrosis rate in a small cohort of patients, a Phase I/II clinical study was proposed to examine the safety and feasibility of this new strategy. Herein the design of the clinical study is presented. In this single-arm, two-stage phase I/II clinical trial, the primary objectives are for >80% of the patients to achieve >90% tumor necrosis and to evaluate toxicity; the secondary objectives are to evaluate local control, disease-free survival and overall survival (OS), as well as the correlation between clinical response and the relevant biomarkers. The study plans to accrue patients over a span of two years. All patients will be treated with the new neoadjuvant RT regimen, in which one of every five fractions of conventional RT is replaced by an LRT fraction with vertices receiving a dose ≥10 Gy while keeping the tumor periphery at or close to 2 Gy per fraction. Surgical removal of the tumor is planned to occur 6 to 8 weeks following the completion of radiation therapy. The study will employ a Pocock-style early stopping boundary to ensure patient safety. The patients will be followed and monitored for a period of five years. Despite much effort, the rarity of the disease has resulted in limited novel therapeutic breakthroughs. Although a higher rate of treatment-induced tumor necrosis has been associated with improved OS, with current techniques only 20% of patients with large, high-grade tumors achieve a tumor necrosis rate exceeding 50%. If this new neoadjuvant strategy is proven effective, an appreciable improvement in clinical outcome without added toxicity can be anticipated. Due to the rarity of the disease, it is hoped that such a study could be orchestrated in a multi-institutional setting.
Keywords: lattice RT, necrosis, SFRT, soft tissue sarcoma
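A minimal sketch of the dosing pattern and primary endpoint described above is given below; the total number of fractions and the example necrosis values are assumptions used purely for illustration.

```python
def neoadjuvant_schedule(n_fractions=25):
    """Label each fraction: every fifth fraction is a Lattice RT (LRT) fraction,
    the rest are conventional ~2 Gy fractions (n_fractions is an assumed total)."""
    return ["LRT (vertices >= 10 Gy)" if i % 5 == 0 else "conventional (~2 Gy)"
            for i in range(1, n_fractions + 1)]

def primary_endpoint_met(necrosis_fractions, patient_threshold=0.9, cohort_threshold=0.8):
    """Primary objective: more than 80% of patients achieve >90% tumor necrosis."""
    responders = sum(1 for f in necrosis_fractions if f > patient_threshold)
    return responders / len(necrosis_fractions) > cohort_threshold

# Example: first six fractions of the schedule and ten hypothetical patients
print(neoadjuvant_schedule()[:6])
print(primary_endpoint_met([0.95, 0.97, 0.88, 0.99, 0.92, 0.96, 0.93, 0.91, 0.98, 0.94]))
```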
Procedia PDF Downloads 60
67 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning
Authors: Shayan Mohajer Hamidi
Abstract:
Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning
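The following PyTorch-style sketch illustrates the general pattern such a framework implies, under stated assumptions: each client distills locally from a shared teacher, and only the student weights are averaged on the server. It is a simplified stand-in, not the proposed FKD implementation, which additionally uses a coded teacher and several knowledge-transfer variants (FT, FitNet, CC, SP, RKD).

```python
import copy
import torch
import torch.nn.functional as F

def distill_step(student, teacher, x, y, optimizer, T=2.0, alpha=0.5):
    """One local client update: cross-entropy on local labels plus KL divergence
    to the teacher's softened outputs (standard knowledge-distillation loss)."""
    optimizer.zero_grad()
    s_logits = student(x)
    with torch.no_grad():
        t_logits = teacher(x)
    loss = (1 - alpha) * F.cross_entropy(s_logits, y) + \
           alpha * T * T * F.kl_div(F.log_softmax(s_logits / T, dim=1),
                                    F.softmax(t_logits / T, dim=1),
                                    reduction="batchmean")
    loss.backward()
    optimizer.step()
    return loss.item()

def fedavg(client_states):
    """Average the clients' student weights on the server; only distilled
    student parameters are shared, raw client data never leaves the device."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([cs[key].float() for cs in client_states]).mean(dim=0)
    return avg
```

The design intent reflected here is that communication cost scales with the (small) student model rather than the teacher or the data, which is what makes the approach attractive for edge and IoT settings.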
Procedia PDF Downloads 76
66 A Case Study of Remote Location Viewing, and Its Significance in Mobile Learning
Authors: James Gallagher, Phillip Benachour
Abstract:
As location-aware mobile technologies become ever more omnipresent, the prospect of exploiting their context awareness to reinforce learning approaches thrives. Building on the growing acceptance of ubiquitous computing and the steady progress in both the accuracy and battery usage of pervasive devices, we present a case study of remote location viewing and of how the application can be utilized to support mobile learning in situ using an existing scenario. Through the case study we introduce an innovative new application, Mobipeek, based around a request/response protocol for the viewing of a remote location, and explore how this can apply both as part of a teacher-led activity and in informal learning situations. The system developed allows a user to select a point on a map and send a request. Users can attach messages alongside time and distance constraints. Users within the bounds of the request can respond with an image and an accompanying message, providing context to the response. This application can be used alongside a structured learning activity, such as the use of mobile phone cameras outdoors as part of an interactive lesson. An example of a learning activity would be to collect photos in the wild about plants, vegetation, and foliage as part of a geography or environmental science lesson. Another example could be to take photos of architectural buildings and monuments as part of an architecture course. These images can be uploaded and then displayed back in the classroom for students to share their experiences and compare their findings with their peers. This can help to foster students’ active participation while helping students to understand lessons in a more interesting and effective way. Mobipeek could augment the student learning experience by providing further interaction with peers in a remote location. The activity can be part of a wider study between schools in different areas of the country, enabling sharing and interaction between more participants. Remote location viewing can be used to access images in a specific location. The choice of location will depend on the activity and lesson. For example, architectural buildings of a specific period can be shared between two or more cities. The augmentation of the learning experience can be manifested in the different contextual and cultural influences as well as the sharing of images from different locations. In addition to the implementation of Mobipeek, we strive to analyse this application and a subset of other possible solutions targeted towards making learning more engaging. Consideration is given to the benefits of such a system, privacy concerns, and the feasibility of widespread usage. We also propose elements of “gamification” in an attempt to further the engagement derived from such a tool and encourage usage. We conclude by identifying limitations, both from a technical and a mobile learning perspective.
Keywords: context aware, location aware, mobile learning, remote viewing
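A minimal sketch of the request/response messages such a protocol implies is shown below; the field names, units and constraint checks are assumptions, not the actual Mobipeek API.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class PeekRequest:
    """A request to view a remote location, with optional constraints."""
    latitude: float
    longitude: float
    message: str = ""
    max_distance_m: float = 500.0          # respond only if within this radius (assumed)
    expires_at: Optional[datetime] = None  # time constraint on responses

@dataclass
class PeekResponse:
    """A response from a user inside the requested area."""
    request_id: str
    image_path: str                        # photo taken at the location
    message: str = ""
    taken_at: datetime = field(default_factory=datetime.utcnow)

def can_respond(request: PeekRequest, user_distance_m: float, now: datetime) -> bool:
    """Check the request's distance and time constraints for a candidate responder."""
    if request.expires_at is not None and now > request.expires_at:
        return False
    return user_distance_m <= request.max_distance_m
```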
Procedia PDF Downloads 291
65 Numerical Analysis of Mandible Fracture Stabilization System
Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski
Abstract:
The aim of the presented work is to recognize the impact of the mini-plate application approach on the stress and displacement within the stabilization devices and the surrounding bones. The mini-plate osteosynthesis technique is widely used by craniofacial surgeons as an improved replacement for the wire connection approach. Many different types of metal plates and screws are used for the physical connection of fractured bones. The investigation below is based on a clinical observation of a patient hospitalized with a mini-plate stabilization system. The analysis was conducted on a solid mandible geometry, which was modeled on the basis of the computed tomography scan of the hospitalized patient. In order to achieve the most realistic behavior of the connected system, cortical and cancellous bone layers were assumed. The temporomandibular joint was simplified to an elastic element to allow physiological movement of the loaded bone. The muscles of the mastication system were reduced to three pairs, modeled as shell structures. The finite element grid was created in the ANSYS software, where hexahedral and tetrahedral variants of the SOLID185 element were used. A set of nonlinear contact conditions was applied on the common surfaces of the connecting devices and the bone. The properties of a particular contact pair depend on the screw-to-mini-plate connection type and possible gaps between the fractured bone parts around the osteosynthesis region. Some of the investigated cases contain prestress introduced to the mini-plate during application, which corresponds to the initial bending of the connecting device to fit the retromolar fossa region. The assumed bone fracture occurs within the mandible angle zone. Due to the significant deformation of the connecting plate in some of the assembly cases, an elastic-plastic model of the titanium alloy was assumed. The bone tissues were described by an orthotropic material model. The loading was a gauge force of magnitude 100 N applied at three different locations. The conducted analysis shows a significant impact of the mini-plate application methodology on the stress distribution within the mini-plate. The prestress effect introduces additional loading, which leads to locally exceeding the titanium alloy yield limit. Stress in the surrounding bone increases rapidly around the screw application regions, exceeding the assumed bone yield limit, which indicates local bone destruction. The approach with the doubled mini-plate shows increased stress within the connector due to an overly rigid connection, where the main load path leads through the mini-plates instead of through the plates and the connected bones. Clinical observations confirm more frequent plate failure in stiffer connections. Some of these failures could be an effect of decreased low-cycle fatigue capability caused by the overloading. The executed analysis proves that the mini-plate system provides sufficient support for mandible fracture treatment; however, many applicable solutions shift the entire system towards the allowable material limits. The results show that connector application with initial loading needs to be established carefully due to the small material capability tolerances. Comparison with the clinical observations allows optimizing the entire connection to prevent future incidents.
Keywords: mandible fracture, mini-plate connection, numerical analysis, osteosynthesis
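As an illustration of the kind of post-processing check the analysis rests on, the helper below compares the von Mises equivalent stress at a point against a material yield limit; the yield values are typical literature figures, not those assumed in the study.

```python
import numpy as np

# Assumed, typical yield limits in MPa (illustrative; not the study's values)
YIELD_LIMIT_MPA = {"Ti-6Al-4V mini-plate": 880.0, "cortical bone": 110.0}

def von_mises(sigma):
    """Von Mises equivalent stress from a 3x3 Cauchy stress tensor (MPa)."""
    s = np.asarray(sigma, dtype=float)
    dev = s - np.trace(s) / 3.0 * np.eye(3)   # deviatoric part of the tensor
    return np.sqrt(1.5 * np.sum(dev * dev))

def exceeds_yield(sigma, material):
    """Flag local yielding, as reported around the screw application regions."""
    return von_mises(sigma) > YIELD_LIMIT_MPA[material]
```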
Procedia PDF Downloads 275
64 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow
Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat
Abstract:
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detect student engagement involve periodic human observations that are subject to inter-rater reliability. Our solution uses real-time multimodal multisensor data labeled by objective performance outcomes to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a type of continuous performance test is introduced, the Seek-X type. Nine features were extracted including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results. Using random forest, 93.3% classification for engagement and 42.9% accuracy for disengagement were achieved. We compared these results to outcomes from different models: AdaBoost, decision tree, k-Nearest Neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature to the classification of engagement and distraction was shown to be eye gaze. It has been shown that we can accurately predict the level of engagement of students with learning disabilities in a real-time approach that is not subject to inter-rater reliability, human observation or reliant on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement
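The evaluation pattern described (a random forest with leave-one-out cross-validation over a small multimodal feature set) can be sketched in a few lines of scikit-learn; the feature matrix below is synthetic and merely stands in for the nine extracted features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for the nine extracted features (eye gaze, EEG, pose, interaction)
X = rng.normal(size=(59, 9))          # one row per session
y = rng.integers(0, 2, size=59)       # 1 = engaged, 0 = disengaged (toy labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"Leave-one-out accuracy: {scores.mean():.3f}")

# Feature importances indicate which sensor mode contributes most (e.g. eye gaze)
clf.fit(X, y)
print(clf.feature_importances_)
```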
Procedia PDF Downloads 95
63 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning
Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher
Abstract:
Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has been recently proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result from Process II highly depends on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.) Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4mm³) yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, Cerebral Spinal Fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from a Probability Distribution Function derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volume, tissue properties and noise realizations. Collectively, this constitutes a training-set that is similar to in vivo data, but larger than datasets available from clinical measurements. This 3D convolutional U-Net neural network architecture was used to train data-driven Deep Learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training as well as real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping
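A compact PyTorch sketch of a 3D convolutional encoder-decoder in the spirit of the U-Net described is shown below; the depth, channel sizes and the toy input are assumptions and do not reproduce the authors' architecture or training data.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv3d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv3d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet3D(nn.Module):
    """Minimal 3D U-Net mapping raw MRI volumes to voxel-wise iron concentration."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 16), block(16, 32)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = block(32, 64)
        self.up2 = nn.ConvTranspose3d(64, 32, 2, stride=2)
        self.dec2 = block(64, 32)
        self.up1 = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)
        self.out = nn.Conv3d(16, 1, 1)   # one output channel: iron concentration

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)

# Toy forward pass on a small synthetic patch (real volumes would be far larger)
model = TinyUNet3D()
pred = model(torch.randn(1, 1, 32, 32, 32))
print(pred.shape)  # torch.Size([1, 1, 32, 32, 32])
```

Training such a network on thousands of synthetic head realizations, as the abstract describes, would simply pair volumes of simulated MRI measurements with their known iron concentration maps as regression targets.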
Procedia PDF Downloads 138
62 Integrated Mathematical Modeling and Advance Visualization of Magnetic Nanoparticle for Drug Delivery, Drug Release and Effects to Cancer Cell Treatment
Authors: Norma Binti Alias, Che Rahim Che The, Norfarizan Mohd Said, Sakinah Abdul Hanan, Akhtar Ali
Abstract:
This paper discusses the transportation of magnetically targeted drugs through the blood within vessels, tissues and cells. Three integrated mathematical models are discussed and analyzed for the drug concentration and the blood flow carrying magnetic nanoparticles. Cell therapy has brought advancement to the field of nanotechnology in the fight against tumors. The systematic therapeutic effect on single cells can reduce the growth of cancer tissue. This nanoscale system can be measured and modeled by identifying some parameters and applying fundamental principles of mathematical modeling and simulation. The mathematical modeling of single-cell growth depends on three types of cell densities: proliferative, quiescent and necrotic cells. The aim of this paper is to enhance the simulation of the three types of models. The first model represents the transport of drugs by coupled partial differential equations (PDEs) of 3D parabolic type in a cylindrical coordinate system. This model is integrated with non-Newtonian flow equations, with blood as the transporting medium, and with the magnetic force on the magnetic nanoparticles. The interaction between the magnetic force on the drug with magnetic properties and the applied magnetic field produces induced currents and yields forces which tend to slow the movement of blood and bring the drug to the cancer cells. Nanoscale devices allow the drug to leave the blood vessels, spread through the tissue and reach the cancer cells. The second model is the transport of drug nanoparticles from the vascular system to a single cell. The treatment of the vascular system requires the identification of parameters such as magnetic nanoparticle targeted delivery, blood flow, momentum transport, density and viscosity of the drug and blood medium, intensity of the magnetic fields and the radius of the capillary. Based on two discretization techniques, the finite difference method (FDM) and the finite element method (FEM), the set of integrated models is transformed into a series of grid points to obtain a large system of equations. The third model is a single-cell density model involving three sets of first-order PDEs for proliferating, quiescent and necrotic cells changing over time and space in Cartesian coordinates, regulated by different rates of nutrient consumption. The model shows that the proliferative and quiescent cell growth depends on some parameter changes and that the necrotic cells emerge as the tumor core. Some numerical schemes for solving the system of equations are compared and analyzed. Simulation and computation of the discretized model are supported by the Matlab and C programming languages on a single processing unit. Some numerical results and analyses of the algorithms are presented in the form of tables, multiple graphs and multidimensional visualization. In conclusion, the integration of the three types of mathematical models and the comparison of numerical performance provide a superior tool for analyzing the complete magnetic drug delivery system and its significant effects on the growth of the targeted cancer cells.
Keywords: mathematical modeling, visualization, PDE models, magnetic nanoparticle drug delivery model, drug release model, single cell effects, avascular tumor growth, numerical analysis
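The flavour of the third model can be illustrated with a toy explicit time-stepping scheme for the three cell densities; the transition rules and rate constants below are illustrative assumptions, not the paper's equations.

```python
import numpy as np

def cell_density_step(P, Q, N, nutrient, dt=0.01,
                      growth=0.5, to_quiescent=0.3, to_necrotic=0.2, threshold=0.4):
    """One explicit time step of a toy proliferative/quiescent/necrotic model.

    P, Q, N  -- arrays of cell densities on a Cartesian grid
    nutrient -- array of local nutrient concentration controlling the transitions
    """
    starving = nutrient < threshold                  # low nutrient drives transitions
    dP = growth * nutrient * P - to_quiescent * starving * P
    dQ = to_quiescent * starving * P - to_necrotic * starving * Q
    dN = to_necrotic * starving * Q                  # necrotic cells accumulate as the core
    return P + dt * dP, Q + dt * dQ, N + dt * dN

# Toy 1D example: nutrient decreases toward the tumor centre (index 0)
x = np.linspace(0, 1, 50)
P, Q, N = np.ones(50), np.zeros(50), np.zeros(50)
for _ in range(500):
    P, Q, N = cell_density_step(P, Q, N, nutrient=x)
```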
Procedia PDF Downloads 428
61 Product Life Cycle Assessment of Generatively Designed Furniture for Interiors Using Robot Based Additive Manufacturing
Authors: Andrew Fox, Qingping Yang, Yuanhong Zhao, Tao Zhang
Abstract:
Furniture is a very significant subdivision of architecture and its inherent interior design activities. The furniture industry has developed from an artisan-driven craft industry, whose forerunners saw themselves manifested in their crafts and treasured a sense of pride in the creativity of their designs, into one these days largely reduced to an anonymous, collective, mass-produced output. Although it is a very conservative industry, there is great potential for the implementation of collaborative digital technologies, allowing a reconfigured artisan experience to be reawakened in a new and exciting form. The furniture manufacturing industry, in general, has been slow to adopt new design methodologies using artificial intelligence and rule-based generative design. This tardiness has meant the loss of potential to enhance its capabilities in producing sustainable, flexible, and mass-customizable ‘right first-time’ designs. This paper aims to demonstrate a concept methodology for the creation of alternative and inspiring aesthetic structures for robot-based additive manufacturing (RBAM). These technologies can enable the economic creation of previously unachievable structures, which traditionally would not have been commercially viable to manufacture. The integration of these technologies with the computing power of generative design provides the tools for practitioners to create concepts which are well beyond the insight of even the most accomplished traditional design teams. This paper aims to address the problem by introducing generative design methodologies employing the Autodesk Fusion 360 platform. Examination of the alternative methods for its use has the potential to significantly reduce the estimated 80% contribution to environmental impact made at the initial design phase. Though predominantly a design methodology, generative design combined with RBAM has the potential to leverage many lean manufacturing and quality assurance benefits, enhancing the efficiency and agility of modern furniture manufacturing. Through a case study examination of a furniture artifact, the results will be compared to a traditionally designed and manufactured product employing the Ecochain Mobius product life cycle analysis (LCA) platform. This will highlight the benefits of both generative design and robot-based additive manufacturing from an environmental impact and manufacturing efficiency standpoint. These step changes in design methodology and environmental assessment have the potential to revolutionise the design-to-manufacturing workflow, giving momentum to the concept of a pre-industrial model of manufacturing with the global demand for a circular economy and bespoke sustainable design at its heart.
Keywords: robot, manufacturing, generative design, sustainability, circular economy, product life cycle assessment, furniture
Procedia PDF Downloads 141
60 Big Data for Local Decision-Making: Indicators Identified at International Conference on Urban Health 2017
Authors: Dana R. Thomson, Catherine Linard, Sabine Vanhuysse, Jessica E. Steele, Michal Shimoni, Jose Siri, Waleska Caiaffa, Megumi Rosenberg, Eleonore Wolff, Tais Grippa, Stefanos Georganos, Helen Elsey
Abstract:
The Sustainable Development Goals (SDGs) and Urban Health Equity Assessment and Response Tool (Urban HEART) identify dozens of key indicators to help local decision-makers prioritize and track inequalities in health outcomes. However, presentations and discussions at the International Conference on Urban Health (ICUH) 2017 suggested that additional indicators are needed to make decisions and policies. A local decision-maker may realize that malaria or road accidents are a top priority. However, s/he needs additional health determinant indicators, for example about standing water or traffic, to address the priority and reduce inequalities. Health determinants reflect the physical and social environments that influence health outcomes often at community- and societal-levels and include such indicators as access to quality health facilities, access to safe parks, traffic density, location of slum areas, air pollution, social exclusion, and social networks. Indicator identification and disaggregation are necessarily constrained by available datasets – typically collected about households and individuals in surveys, censuses, and administrative records. Continued advancements in earth observation, data storage, computing and mobile technologies mean that new sources of health determinants indicators derived from 'big data' are becoming available at fine geographic scale. Big data includes high-resolution satellite imagery and aggregated, anonymized mobile phone data. While big data are themselves not representative of the population (e.g., satellite images depict the physical environment), they can provide information about population density, wealth, mobility, and social environments with tremendous detail and accuracy when combined with population-representative survey, census, administrative and health system data. The aim of this paper is to (1) flag to data scientists important indicators needed by health decision-makers at the city and sub-city scale - ideally free and publicly available, and (2) summarize for local decision-makers new datasets that can be generated from big data, with layperson descriptions of difficulties in generating them. We include SDGs and Urban HEART indicators, as well as indicators mentioned by decision-makers attending ICUH 2017.Keywords: health determinant, health outcome, mobile phone, remote sensing, satellite imagery, SDG, urban HEART
Procedia PDF Downloads 211