Search results for: mathematical mnemonic
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1759

139 Enhancement Effect of Superparamagnetic Iron Oxide Nanoparticle-Based MRI Contrast Agent at Different Concentrations and Magnetic Field Strengths

Authors: Bimali Sanjeevani Weerakoon, Toshiaki Osuga, Takehisa Konishi

Abstract:

Magnetic Resonance Imaging contrast agents (MRI-CM) are significant in clinical and biological imaging because they can alter normal tissue contrast, thereby affecting the signal intensity to enhance the visibility and detectability of images. Superparamagnetic Iron Oxide (SPIO) nanoparticles, coated with dextran or carboxydextran, are currently available for clinical MR imaging of the liver. Most SPIO contrast agents are T2-shortening agents, and Resovist (Ferucarbotran) is a clinically tested, organ-specific SPIO agent with a low-molecular-weight carboxydextran coating. The enhancement effect of Resovist depends on its relaxivity, which in turn depends on factors such as magnetic field strength, concentration, nanoparticle properties, pH, and temperature. Therefore, this study was conducted to investigate the impact of field strength and contrast concentration on the enhancement effect of Resovist. The study explored, by mathematical simulation, the MRI signal intensity of Resovist in the physiological range of plasma for a T2-weighted spin echo sequence at three magnetic field strengths, 0.47 T (r1=15, r2=101), 1.5 T (r1=7.4, r2=95), and 3 T (r1=3.3, r2=160), and across a range of contrast concentrations. The relaxivities r1 and r2 (L mmol⁻¹ s⁻¹) were obtained from a previous study, and the selected concentrations were 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 2.0, and 3.0 mmol/L. T2-weighted images were simulated using a TR/TE of 2000 ms/100 ms. According to the reference literature, r1 relaxivity tends to decrease with increasing magnetic field strength, while r2 shows no systematic relationship with the selected field strengths. Consistent with this, the results revealed that the signal intensity of Resovist is higher at lower concentrations than at higher concentrations. The highest signal intensity was observed at the lowest field strength, 0.47 T. The maximum signal intensities for 0.47 T, 1.5 T, and 3 T were found at concentrations of 0.05, 0.06, and 0.05 mmol/L, respectively. At concentrations above these values, the signal intensity decreased exponentially. An inverse relationship was found between field strength and T2 relaxation time: as the field strength increased, the T2 relaxation time decreased accordingly. However, the resulting T2 relaxation times were not significantly different between 0.47 T and 1.5 T in this study. Moreover, a linear correlation of the transverse relaxation rate (1/T2, s⁻¹) with Resovist concentration was observed. From these results, it can be concluded that the concentration of SPIO nanoparticle contrast agents and the MRI field strength are two important parameters affecting the signal intensity of the T2-weighted SE sequence, and both should be considered prudently in MR imaging.
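A minimal sketch (not the authors' code) of the T2-weighted spin-echo simulation described above: the standard SE signal equation S ∝ (1 − exp(−TR·R1))·exp(−TE·R2) is evaluated with relaxation rates modified by relaxivity, R1 = 1/T1₀ + r1·C and R2 = 1/T2₀ + r2·C. The baseline plasma relaxation times T1₀ and T2₀ are assumed illustrative values.

```python
# Sketch of the T2-weighted SE signal simulation; T1_0, T2_0 are assumptions.
import numpy as np

TR, TE = 2.0, 0.1            # s (2000 ms / 100 ms, as in the study)
T1_0, T2_0 = 1.4, 0.3        # s, assumed baseline plasma relaxation times

# Relaxivities (L mmol^-1 s^-1) for Resovist at the three field strengths
fields = {"0.47 T": (15.0, 101.0), "1.5 T": (7.4, 95.0), "3 T": (3.3, 160.0)}
conc = np.array([0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4,
                 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 2.0, 3.0])  # mmol/L

for label, (r1, r2) in fields.items():
    R1 = 1.0 / T1_0 + r1 * conc          # longitudinal relaxation rate, s^-1
    R2 = 1.0 / T2_0 + r2 * conc          # transverse relaxation rate, s^-1
    S = (1.0 - np.exp(-TR * R1)) * np.exp(-TE * R2)
    print(label, "max signal at C =", conc[np.argmax(S)], "mmol/L")
```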

Keywords: concentration, Resovist, field strength, relaxivity, signal intensity

Procedia PDF Downloads 352
138 Co-Creational Model for Blended Learning in a Flipped Classroom Environment Focusing on the Combination of Coding and Drone-Building

Authors: A. Schuchter, M. Promegger

Abstract:

The outbreak of the COVID-19 pandemic has shown us that online education is much more than just a cool feature for teachers; it is an essential part of modern teaching. In online math teaching, it is common to use tools to share screens and to compute and calculate mathematical examples while the students watch the process. At the same time, flipped classroom models are on the rise, with their focus on how students can gather knowledge by watching videos and on the teacher's use of technological tools for information transfer. This paper proposes a co-educational teaching approach for coding and engineering subjects that uses drone-building to spark interest in technology and create a platform for knowledge transfer. The project combines aspects of mathematics (matrices, vectors, shaders, trigonometry), physics (force, pressure, and rotation) and coding (computational thinking, block-based programming, JavaScript and Python) and makes use of collaborative shared 3D modeling with clara.io, through which students build mathematical know-how. The instructor follows a problem-based learning approach and encourages students to find solutions in their own time and in their own way, which helps them develop new skills intuitively and boosts logically structured thinking. The collaborative aspect of working in groups helps the students develop communication skills as well as structural and computational thinking. Students are not just listeners, as in traditional classroom settings, but play an active part in creating content together by compiling a Handbook of Knowledge (called an "open book") with examples and solutions. Before students start calculating, they have to write down all their ideas and working steps in full sentences so other students can easily follow their train of thought. In this way, students learn to formulate goals, solve problems, and create a ready-to-use product with the help of reverse engineering, cross-referencing, and creative thinking. The work on drones gives the students the opportunity to create a real-life application with a practical purpose while going through all stages of product development.

Keywords: flipped classroom, co-creational education, coding, making, drones, co-education, ARCS-model, problem-based learning

Procedia PDF Downloads 120
137 Groundwater Numerical Modeling, an Application of Remote Sensing, and GIS Techniques in South Darb El Arbaieen, Western Desert, Egypt

Authors: Abdallah M. Fayed

Abstract:

The study area is located in south Darb El Arbaieen, in the Western Desert of Egypt. It occupies the area between latitudes 22° 00′ and 22° 30′ N and longitudes 29° 30′ and 30° 00′ E, extending from the southern border of Egypt to the area north of Bir Kuraiym, and from the area east of East Owienat to the area west of the Tushka district, covering about 2750 km². Its well-known features are the southern part of the Darb El Arbaieen road, G. Baraqat El Scab El Qarra, Bir Dibis, Bir El Shab, and Bir Kuraiym. Interpretation of the soil stratification shows layers related to the Quaternary and the Upper-Lower Cretaceous. The area is dissected by a series of NE-SW striking faults. The regional groundwater flow direction is SW-NE, with a hydraulic gradient of 1 m per 2 km. A mathematical modeling program has been applied to evaluate the groundwater potential of the main aquifer, the Nubian Sandstone, in the study area, and remote sensing is considered a powerful, accurate, and time-saving technique in this respect. Such techniques are widely used for illustrating and analyzing different phenomena, such as new development in the desert (land reclamation), residential development (new communities), urbanization, etc. A major issue concerns water development: one objective of this work is to determine the new development areas in the Western Desert of Egypt during the period from 2003 to 2015 using remote sensing. The impacts of present and future development have been evaluated using the two-dimensional numerical groundwater flow simulation package Visual MODFLOW 4.2. The package was used to construct and calibrate a numerical model that can simulate the response of the aquifer in the study area under different management alternatives, in the form of changes in piezometric levels and salinity. The total simulation period is 100 years. After steady-state calibration, two different groundwater development scenarios were simulated. Twenty-one production wells installed in the study area were used in the model, with total discharges of 105,000 m³/d and 210,000 m³/d for the two scenarios. The resulting drawdowns were 11.8 m and 23.7 m at the end of the 100 years. Contour maps of water heads and drawdown and hydrographs of piezometric head are presented. The drawdown was less than half of the saturated thickness (the safe-yield case).
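The aquifer response itself was computed in Visual MODFLOW; as a simplified, hypothetical cross-check, a drawdown estimate for a multi-well pumping scenario can be sketched with the Theis solution and superposition. The transmissivity, storativity, well layout, and observation point below are illustrative assumptions, not values from the study.

```python
# Theis drawdown with superposition over a hypothetical 21-well field.
import numpy as np
from scipy.special import exp1  # well function: W(u) = exp1(u)

T = 1500.0         # m^2/d, assumed transmissivity
S = 1e-3           # -, assumed storativity
t = 100 * 365.0    # d, 100-year horizon as in the study
Q = 105000.0 / 21  # m^3/d per well (scenario 1 total over 21 wells)

wells = [(5000.0 * i, 5000.0 * j) for i in range(3) for j in range(7)]  # assumed grid
obs = (7500.0, 15000.0)   # assumed observation point, m

s_total = 0.0
for (xw, yw) in wells:
    r = max(np.hypot(obs[0] - xw, obs[1] - yw), 0.1)  # avoid r = 0
    u = r**2 * S / (4.0 * T * t)
    s_total += Q / (4.0 * np.pi * T) * exp1(u)        # superposed drawdown
print(f"illustrative drawdown after 100 years: {s_total:.1f} m")
```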

Keywords: remote sensing, management of aquifer systems, simulation modeling, western desert, South Darb El Arbaieen

Procedia PDF Downloads 401
136 Prediction of Sepsis Illness from Patients Vital Signs Using Long Short-Term Memory Network and Dynamic Analysis

Authors: Marcio Freire Cruz, Naoaki Ono, Shigehiko Kanaya, Carlos Arthur Mattos Teixeira Cavalcante

Abstract:

The systems that record patient care information, known as Electronic Medical Records (EMRs), and those that monitor patients' vital signs, such as heart rate, body temperature, and blood pressure, have been extremely valuable for the effectiveness of patient treatment. Several studies have used data from EMRs and patients' vital signs to predict illnesses. Among them, we highlight those that aim to predict, classify, or at least identify patterns of sepsis in patients under vital-sign monitoring. Sepsis is an organ dysfunction caused by a dysregulated patient response to an infection, and it affects millions of people worldwide. Early detection of sepsis is expected to provide a significant improvement in its treatment. Previous works usually combined medical, statistical, mathematical, and computational models to develop early-prediction methods, seeking higher accuracy with the smallest number of variables. Among other techniques, studies using survival analysis, expert systems, machine learning, and deep learning have achieved strong results. In our research, patients are modeled as points moving each hour in an n-dimensional space, where n is the number of vital signs (variables). These points can reach a sepsis target point after some time. For now, the sepsis target point is calculated using the median of all patients' variables at sepsis onset. From these points, we calculate for each hour the position vector, the first derivative (velocity vector), and the second derivative (acceleration vector) of the variables to evaluate their behavior, and we construct a prediction model based on a Long Short-Term Memory (LSTM) network that includes these derivatives as explanatory variables. The accuracy of prediction 6 hours before sepsis onset, considering only the vital signs, reached 83.24%; by including the position, velocity, and acceleration vectors, we obtained 94.96%. The data are collected from the Medical Information Mart for Intensive Care (MIMIC) database, a public database that contains vital signs, laboratory test results, observations, notes, and so on, from more than 60,000 patients.
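A minimal sketch of the feature construction described above, assuming hourly-sampled vitals: finite differences over each patient trajectory give the velocity and acceleration vectors, which are concatenated with the raw vitals and fed to an LSTM classifier. Shapes and hyperparameters are illustrative, not the authors' settings.

```python
# Position/velocity/acceleration features feeding an LSTM sepsis classifier.
import torch
import torch.nn as nn

n_vitals, hours, batch = 6, 24, 32
x = torch.randn(batch, hours, n_vitals)            # position: raw hourly vitals

v = torch.diff(x, dim=1, prepend=x[:, :1, :])      # first derivative (velocity)
a = torch.diff(v, dim=1, prepend=v[:, :1, :])      # second derivative (acceleration)
features = torch.cat([x, v, a], dim=-1)            # (batch, hours, 3 * n_vitals)

class SepsisLSTM(nn.Module):
    def __init__(self, n_in, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_in, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)           # sepsis / no-sepsis logit
    def forward(self, seq):
        out, _ = self.lstm(seq)
        return self.head(out[:, -1, :])            # predict from last hidden state

model = SepsisLSTM(3 * n_vitals)
logits = model(features)                           # train with BCEWithLogitsLoss
```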

Keywords: dynamic analysis, long short-term memory, prediction, sepsis

Procedia PDF Downloads 125
135 Transformation of Periodic Fuzzy Membership Function to Discrete Polygon on Circular Polar Coordinates

Authors: Takashi Mitsuishi

Abstract:

Fuzzy logic has gained acceptance in recent years in fields of the social sciences and humanities, such as psychology and linguistics, because it can manage the fuzziness of words and human subjectivity in a logical manner. However, the major field of application of fuzzy logic is control engineering, as it is part of set theory and mathematical logic. The Mamdani method, the most popular technique for approximate reasoning in fuzzy control, is one way to numerically represent the control afforded by human language and sensitivity, and it has been applied in various practical control plants. Fuzzy logic has been gradually developing as an artificial intelligence technique in different applications, such as neural networks, expert systems, and operations research. The objects of inference vary across application fields; some, including time, angle, color, symptom, and medical condition, have fuzzy membership functions that are periodic. In the defuzzification stage, the domain of the membership function should be unique in order to obtain a unique defuzzified value. However, if the domain of a periodic membership function is forced to be unique, an unintuitive defuzzified value may be obtained as the inference result when using the center-of-gravity method. Therefore, the authors propose a method of circular-polar-coordinate transformation and defuzzification of periodic membership functions in this study. The transformation to circular polar coordinates simplifies the domain of the periodic membership function, and the defuzzified value in circular polar coordinates is an argument (angle). Furthermore, the argument must be calculated from a closed plane figure, namely the periodic membership function plotted on the circular polar coordinates. If the closed plane figure is kept continuous, matching the continuity of the membership function, a significant amount of computation is required. Therefore, to simplify the practical example and significantly reduce the computational complexity, we have discretized the continuous interval and the membership function in this study. The following three methods are proposed to determine the argument from the discrete polygon into which the continuous plane figure is transformed. The first provides the argument of a straight line passing through the origin and the arithmetic mean of the vertex coordinates of the polygon (the physical center of gravity). The second provides the argument of a straight line passing through the origin and the geometric center of gravity (centroid) of the polygon. The third provides the argument of a straight line passing through the origin and the point bisecting the perimeter of the polygon (or of the closed continuous plane figure).
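A minimal sketch of the three proposed argument calculations, assuming a periodic membership function mu(theta) sampled at discrete angles and mapped to polygon vertices (mu·cosθ, mu·sinθ) on circular polar coordinates; the membership function used is an illustrative example.

```python
# Three defuzzified arguments from the discretized polar polygon.
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
mu = 0.5 + 0.5 * np.cos(theta - 1.0) ** 2          # assumed periodic membership
x, y = mu * np.cos(theta), mu * np.sin(theta)      # discrete polygon vertices

# Method 1: arithmetic mean of the vertex coordinates (physical center of gravity)
arg1 = np.arctan2(y.mean(), x.mean())

# Method 2: geometric centroid of the polygon (shoelace formula)
xn, yn = np.roll(x, -1), np.roll(y, -1)
cross = x * yn - xn * y
A = 0.5 * cross.sum()
cx = ((x + xn) * cross).sum() / (6.0 * A)
cy = ((y + yn) * cross).sum() / (6.0 * A)
arg2 = np.arctan2(cy, cx)

# Method 3: point bisecting the polygon perimeter
seg = np.hypot(xn - x, yn - y)
s = np.concatenate([[0.0], np.cumsum(seg)])
half = s[-1] / 2.0
k = np.searchsorted(s, half) - 1                   # segment containing the midpoint
f = (half - s[k]) / seg[k]
px, py = x[k] + f * (xn[k] - x[k]), y[k] + f * (yn[k] - y[k])
arg3 = np.arctan2(py, px)

print(np.degrees([arg1, arg2, arg3]))              # the three candidate arguments
```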

Keywords: defuzzification, fuzzy membership function, periodic function, polar coordinates transformation

Procedia PDF Downloads 363
134 A Conceptual Model of the 'Driver – Highly Automated Vehicle' System

Authors: V. A. Dubovsky, V. V. Savchenko, A. A. Baryskevich

Abstract:

The current trend in the automotive industry towards automated vehicles is creating new challenges related to human factors. This is because the driver is increasingly relieved of the need to be constantly involved in driving the vehicle, which can negatively impact his/her situation awareness when manual control is required and can degrade driving skills and abilities. These new problems need to be studied in order to ensure road safety during the transition towards self-driving vehicles. For this purpose, it is important to develop an appropriate conceptual model of the interaction between the driver and the automated vehicle, which could serve as a theoretical basis for the development of mathematical and simulation models to explore different aspects of driver behaviour in different road situations. Well-known driver behaviour models describe the impact of different stages of the driver's cognitive process on driving performance, but they do not describe how the driver controls and adjusts his actions. A more complete description of the driver's cognitive process, including the evaluation of the results of his/her actions, will make it possible to model various aspects of the human factor in different road situations more accurately. This paper presents a conceptual model of the 'driver – highly automated vehicle' system based on P. K. Anokhin's theory of functional systems, a theoretical framework for describing internal processes in purposeful living systems based on such notions as the goal, desired result, and actual result of purposeful activity. A central feature of the proposed model is a dynamic coupling mechanism between the driver's decision to perform a particular action and the changes in road conditions due to the driver's actions. This mechanism is based on stage-by-stage evaluation of the deviations of the actual values of the driver's action-result parameters from the expected values. The overall functional structure of the highly automated vehicle in the proposed model includes a driver/vehicle/environment state analyzer to coordinate the interaction between driver and vehicle. The proposed conceptual model can be used as a framework to investigate different aspects of human factors in transitions between automated and manual driving, both for future improvements in driving safety and for understanding how the driver-vehicle interface must be designed for comfort and safety. A major finding of this study is the demonstration that the theory of functional systems is promising and has the potential to describe the interaction of the driver with the vehicle and the environment.

Keywords: automated vehicle, driver behavior, human factors, human-machine system

Procedia PDF Downloads 145
133 Modeling and Optimizing of Sinker Electric Discharge Machine Process Parameters on AISI 4140 Alloy Steel by Central Composite Rotatable Design Method

Authors: J. Satya Eswari, J. Sekhar Babub, Meena Murmu, Govardhan Bhat

Abstract:

Electrical Discharge Machining (EDM) is an unconventional manufacturing process based on the removal of material from a part by means of a series of repeated electrical sparks, created by electric pulse generators at short intervals between an electrode tool and the part to be machined, immersed in dielectric fluid. In this paper, a study is performed on the influence of peak current, pulse-on time, interval time, and power supply voltage. The output responses measured were material removal rate (MRR) and surface roughness, and the parameters were optimized for maximum MRR at the desired surface roughness. Response surface methodology (RSM) involves establishing mathematical relations between the design variables and the resulting responses and optimizing the process conditions; however, RSM is not free from problems when applied to multi-factor, multi-response situations. A design of experiments (DOE) technique is used to select the optimum machining conditions for machining AISI 4140 by EDM. The purpose of this paper is to determine the optimal factors of the electro-discharge machining process and to investigate the feasibility of design-of-experiment techniques. The workpieces used were rectangular plates of AISI 4140 grade steel alloy. The study of the optimized settings of key machining factors, namely pulse-on time, gap voltage, flushing pressure, input current, and duty cycle, on the material removal rate and surface roughness is carried out using a central composite design (CCD), with the objective of maximizing the MRR. The CCD data are used to develop second-order polynomial models with interaction terms, and insignificant coefficients are eliminated from these models using Student's t-test and the F-test for goodness of fit. The CCD is first used to determine the optimal factors of the EDM process for maximizing the MRR. The responses are then treated through an objective function to establish the same set of key machining factors that satisfy the optimization problem of the EDM process. The results demonstrate the better performance of CCD-data-based RSM for optimizing the EDM process.
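A minimal sketch (not the authors' CCRD analysis) of the RSM workflow described above: a second-order polynomial with interaction terms is fitted to design data, and the fitted surface is maximized over the coded factor region. The factor settings and responses below are synthetic placeholders.

```python
# Fit a quadratic RSM model and maximize predicted MRR over the coded region.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

np.random.seed(0)
# Coded factors: pulse-on time, gap voltage, input current (placeholder runs)
X = np.random.uniform(-1, 1, size=(20, 3))
mrr = (5 + 2 * X[:, 0] + 1.5 * X[:, 2] - X[:, 0] ** 2
       + 0.5 * X[:, 0] * X[:, 2] + np.random.normal(0, 0.1, 20))  # synthetic response

poly = PolynomialFeatures(degree=2)                 # quadratic + interaction terms
model = LinearRegression().fit(poly.fit_transform(X), mrr)

# Maximize predicted MRR inside the coded region [-1, 1]^3
res = minimize(lambda z: -model.predict(poly.transform(z.reshape(1, -1)))[0],
               x0=np.zeros(3), bounds=[(-1, 1)] * 3)
print("optimal coded settings:", res.x, "predicted MRR:", -res.fun)
```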

Keywords: electric discharge machining (EDM), modeling, optimization, CCRD

Procedia PDF Downloads 341
132 Modeling of the Fermentation Process of Enzymatically Extracted Annona muricata L. Juice

Authors: Calister Wingang Makebe, Wilson Agwanande Ambindei, Zangue Steve Carly Desobgo, Abraham Billu, Emmanuel Jong Nso, P. Nisha

Abstract:

Traditional liquid-state fermentation of Annona muricata L. juice can result in fluctuating product quality and quantity because of difficulties in control and scale-up. This work describes a laboratory-scale batch fermentation process to produce a probiotic, enzymatically extracted Annona muricata L. juice, modeled using the Doehlert design with incubation time, temperature, and enzyme concentration as independent extraction factors. It aimed at a better understanding of the traditional process as an initial step for future optimization. Annona muricata L. juice was fermented with L. acidophilus (NCDC 291) (LA), L. casei (NCDC 17) (LC), and a blend of LA and LC (LCA) for 72 h at 37 °C. Experimental data were fitted to mathematical models (the Monod, logistic, and Luedeking-Piret models) using MATLAB software to describe biomass growth, sugar utilization, and organic acid production. The optimal fermentation time, determined from cell viability, was 24 h for LC and 36 h for LA and LCA. The model was particularly effective in estimating biomass growth, reducing-sugar consumption, and lactic acid production. The values of the determination coefficient R² were 0.9946, 0.9913, and 0.9946, while the residual sums of squared errors (SSE) were 0.2876, 0.1738, and 0.1589 for LC, LA, and LCA, respectively. The growth kinetic parameters included the maximum specific growth rate µm, which was 0.2876 h⁻¹, 0.1738 h⁻¹, and 0.1589 h⁻¹, and the substrate saturation constant Ks, which was 9.0680 g/L, 9.9337 g/L, and 9.0709 g/L, for LC, LA, and LCA, respectively. For the stoichiometric parameters, the yield of biomass on utilized substrate (YXS) was 50.7932, 3.3940, and 61.0202, and the yield of product on utilized substrate (YPS) was 2.4524, 0.2307, and 0.7415 for LC, LA, and LCA, respectively. In addition, the maintenance energy parameter (ms) was 0.0128, 0.0001, and 0.0004 for LC, LA, and LCA. With the kinetic model proposed by Luedeking and Piret for the lactic acid production rate, the growth-associated and non-growth-associated coefficients were determined as 1.0028 and 0.0109, respectively. The model was demonstrated for batch growth of LA, LC, and LCA in Annona muricata L. juice. The present investigation validates the potential of an Annona muricata L. based medium for the economical production of a probiotic product.
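A minimal sketch of the kinetic model structure described above: logistic biomass growth, substrate consumption with a maintenance term, and Luedeking-Piret lactic acid production, integrated with scipy. The rate and yield parameters are the reported LC values; X0, Xmax, S0, and P0 are illustrative assumptions.

```python
# Logistic growth + Luedeking-Piret product formation + substrate balance.
from scipy.integrate import solve_ivp

mu_m, Yxs, ms = 0.2876, 50.7932, 0.0128   # reported LC parameters
alpha, beta = 1.0028, 0.0109              # Luedeking-Piret coefficients
Xmax, X0, S0, P0 = 9.0, 0.1, 50.0, 0.0    # assumed initial/limit values (g/L)

def rhs(t, y):
    X, S, P = y
    dX = mu_m * X * (1.0 - X / Xmax)      # logistic biomass growth
    dS = -dX / Yxs - ms * X               # substrate use + maintenance energy
    dP = alpha * dX + beta * X            # growth- and non-growth-associated product
    return [dX, dS, dP]

sol = solve_ivp(rhs, (0.0, 72.0), [X0, S0, P0], dense_output=True)
X, S, P = sol.y[:, -1]
print(f"after 72 h: X = {X:.2f} g/L, S = {S:.2f} g/L, P = {P:.2f} g/L")
```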

Keywords: L. acidophilus, L. casei, fermentation, modelling, kinetics

Procedia PDF Downloads 66
131 Mathematical Modelling of Biogas Dehumidification by Using of Counterflow Heat Exchanger

Authors: Staņislavs Gendelis, Andris Jakovičs, Jānis Ratnieks, Aigars Laizāns, Dāvids Vardanjans

Abstract:

Dehumidification of biogas at biomass plants is very important for energy-efficient burning of biomethane at the outlet. A few methods are widely used to reduce the water content of biogas, e.g., chiller/heat-exchanger-based cooling, the use of adsorbents such as PSA, or combinations of such approaches. A quite different method of biogas dehumidification is offered and analyzed in this paper. The main idea is to direct the flow of biogas from the plant around it, downwards, thus creating an additional insulation layer. As the temperature in the gas shell layer around the plant decreases from about 38 °C to 20 °C in summer, or even to 0 °C in winter, condensation of water vapor occurs. The water at the bottom of the gas shell can be collected and drained away. In addition, another, upward shell layer is created on the outer side after the condensate drainage point, to further reduce heat losses. Thus, a counterflow biogas heat exchanger is created around the biogas plant. This research deals with numerical modelling of the biogas flow, taking into account heat exchange and condensation on cold surfaces. Different kinds of boundary conditions (air and ground temperatures in summer/winter) and various physical properties of the construction (insulation between layers, wall thickness) are included in the model to make it more general and useful for different biogas flow conditions. The complexity of this problem lies in the fact that the temperatures in the two channels are conjugated when the thermal resistance between the layers is low. The MATLAB programming language is used for multiphysics model development, numerical calculations, and result visualization. An experimental installation on a biogas plant's vertical wall, with an additional two layers of polycarbonate sheets and controlled gas flow, was set up to verify the modelling results. The gas flow at the inlet/outlet, the temperatures between the layers, and the humidity were controlled and measured during a number of experiments. Good correlation with the modelling results for the vertical wall section allows the developed numerical model to be used for estimating the parameters of the whole biogas dehumidification system. Numerical modelling of the counterflow heat exchanger system placed on the plant's wall for various cases allows the thickness of the gas layers and the insulation layer to be optimized to ensure the necessary dehumidification of the gas under different climatic conditions. Modelling a defined system configuration under known conditions helps to predict the temperature and humidity content of the biogas at the outlet.
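The study's model is a MATLAB multiphysics one; as a lumped illustration of the same idea, an effectiveness-NTU estimate for the counterflow gas layers, with a Magnus-formula dew-point check for condensation, can be sketched as follows. The flow rate, heat capacity, UA, and humidity are assumed values.

```python
# Lumped counterflow cooling estimate plus a dew-point condensation check.
import numpy as np

m_dot, cp = 0.05, 1600.0        # kg/s, J/(kg K): assumed biogas flow and heat capacity
UA = 120.0                      # W/K: assumed overall conductance between layers
T_hot_in, T_cold_in = 38.0, 0.0 # degC: gas leaving the digester; winter-side inlet

C = m_dot * cp                  # equal capacity rates assumed on both sides (Cr = 1)
NTU = UA / C
eps = NTU / (1.0 + NTU)         # counterflow effectiveness for Cr = 1
T_hot_out = T_hot_in - eps * (T_hot_in - T_cold_in)
print(f"gas cooled to {T_hot_out:.1f} degC")

def dew_point(T, rh):
    """Magnus approximation of the dew point (degC) at relative humidity rh."""
    g = np.log(rh) + 17.62 * T / (243.12 + T)
    return 243.12 * g / (17.62 - g)

# Gas assumed to leave the digester near saturation (rh = 0.95)
print("condensation" if T_hot_out < dew_point(38.0, 0.95) else "no condensation")
```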

Keywords: biogas dehumidification, numerical modelling, condensation, biogas plant experimental model

Procedia PDF Downloads 548
130 Anti-Gravity to Neo-Concretism: The Epodic Spaces of Non-Objective Art

Authors: Alexandra Kennedy

Abstract:

Making use of the notion of 'epodic spaces', this paper presents a reconsideration of non-objective art practices, proposing alternatives to established materialist, formalist, and process-based conceptualist approaches to such work. In his Neo-Concrete Manifesto (1959), Ferreira Gullar (1930-2016) sought to create a distinction between various forms of non-objective art. He distinguished the 'geometric' arts of neoplasticism, constructivism, and suprematism, which he described as 'dangerously acute rationalism', from other non-objective practices. These alternatives, he proposed, have an expressive potential lacking in the former, and this formed the basis for their categorisation as neo-concrete. Gullar prioritized the phenomenological over the rational, with an emphasis on the role of the spectator (a key concept of minimalism), and highlighted the central role of sensual experience, colour, and the poetic in such work. In the early twentieth century, Russian Cosmism, an esoteric philosophical movement, was highly influential on Russian avant-garde artists and can account for the suprematist artists' interest in, and approach to, planar geometry and four-dimensional space, as demonstrated in the abstract paintings of Kasimir Malevich (1879-1935). Nikolai Fyodorov (1829-1903) promoted the idea of anti-gravity and of cosmic space as the field for artistic activity. The artist and writer Kuzma Petrov-Vodkin (1878-1939) wrote on the concept of Euclidean space, on overcoming such rational conceptions of space, and on breaking free from the gravitational field and the earth's sphere. These imaginary spaces, which also invoke a bodily experience, lend a poetic dimension to the work of the suprematists, a dimension that arguably aligns more with Gullar's formulation of the neo-concrete than with his alignment of Suprematism with rationalism. In its experiments with planar geometry, in its interest in forms suggestive of an experience of breaking free, both physically from the earth and conceptually from rational, mathematical space (in a preoccupation with non-Euclidean space and anti-geometry), and in its engagement with the spatial properties of colour, Suprematism presents itself as imaginatively epodic. The paper discusses both historical and contemporary non-objective practices in this context, drawing attention to the manner in which the category of the non-objective is used to classify artworks that are, arguably, qualitatively different.

Keywords: anti-gravity, neo-concrete, non-Euclidean geometry, non-objective painting

Procedia PDF Downloads 177
129 Control for Fluid Flow Behaviours of Viscous Fluids and Heat Transfer in Mini-Channel: A Case Study Using Numerical Simulation Method

Authors: Emmanuel Ophel Gilbert, Williams Speret

Abstract:

The control of the flow behaviour of viscous fluids and of heat transfer occurrences within a heated mini-channel is considered. The heat transfer and flow characteristics of different viscous liquids, such as engine oil, automatic transmission fluid, 50% ethylene glycol, and deionized water, were numerically analyzed. Mathematical tools such as Fourier series and Laplace and Z-transforms were employed to characterize the wave-like behaviour of each viscous fluid. The steady, laminar flow and heat transfer equations are solved with the aid of a numerical simulation technique, and the simulations are validated by comparing accessible practical values with the predicted local thermal resistances. The roughness of the mini-channel, one of its physical limitations, was also predicted in this study; it affects the friction factor. When an additive such as tetracycline was introduced into the fluid, the heat input was lowered, and this caused a pro rata effect on the minor and major frictional losses, mostly at very small Reynolds numbers, circa 60-80. At these low Reynolds numbers, the viscosity decreases and the frictional losses become minute as the temperature of the viscous liquids is increased. Three equations and models are identified which, supported by the numerical simulation via interpolation and integration of the variables extended to the walls of the mini-channel, are expected to provide a reliable basis for engineering calculations of turbulence-impacting jets in the near future. In seeking a governing equation that could support this control of the fluid flow, the Navier-Stokes equations were found to be pertinent, though other physical factors related to the Navier-Stokes equations still need to be checked to avoid unresolved turbulence in the fluid flow. This paradox is resolved within the framework of continuum mechanics using the classical slip condition and an iteration scheme, implemented via a numerical simulation method that retains certain terms of the full Navier-Stokes equations; this, however, required dropping some of the approximating assumptions. Concrete questions raised in the main body of the work are examined further in the appendices.
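As a small grounding example for the frictional-loss discussion above, the laminar parallel-plate relation f = 96/Re gives the friction factor and pressure drop for each liquid. The channel geometry and fluid properties below are assumed nominal values, not the study's data.

```python
# Laminar Reynolds number and pressure drop in a parallel-plate mini-channel.
fluids = {  # (density kg/m^3, dynamic viscosity Pa s) -- assumed nominal values
    "deionized water": (998.0, 1.0e-3),
    "50% ethylene glycol": (1070.0, 3.9e-3),
    "engine oil": (880.0, 0.29),
}
H, W, L, u = 0.5e-3, 5e-3, 0.1, 0.2   # channel gap, width, length (m), velocity (m/s)
Dh = 2.0 * H * W / (H + W)            # hydraulic diameter of a rectangular channel

for name, (rho, mu) in fluids.items():
    Re = rho * u * Dh / mu
    f = 96.0 / Re                      # laminar parallel-plate friction factor
    dp = f * (L / Dh) * 0.5 * rho * u**2
    print(f"{name}: Re = {Re:.0f}, dp = {dp:.0f} Pa")
```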

Keywords: frictional losses, heat transfer, laminar flow, mini-channel, numerical simulation, Reynolds number, turbulence, viscous fluids

Procedia PDF Downloads 176
128 The Connection Between the Semiotic Theatrical System and the Aesthetic Perception

Authors: Păcurar Diana Istina

Abstract:

The indissoluble link between aesthetics and semiotics, and the harmonization and semiotic understanding of the interactions between the viewer and the object being looked at, form the basis of this practical demonstration of the importance of aesthetic perception within the theater performance. The design of a theater performance includes several structures, some conceived from the beginning as art forms (i.e., the text), others represented by simple, common objects (e.g., scenographic elements) which, when brought together, can trigger a certain aesthetic perception. The team involved in the performance delivers to the audience a series of auditory and visual signs with which the audience interacts. It is necessary to explain some notions about the physiological support for the transformation of different types of stimuli at the level of the cerebral hemispheres. The cortex, considered the superior integration center for external and internal stimuli, permanently processes the information received; yet even if that information is delivered at a constant rate, the generated response is individualized and conditioned by a number of factors. Each changing situation represents a new opportunity for the viewer to cope with, developing feelings of different intensities that influence the generation of meanings and, therefore, the management of interactions. In this sense, aesthetic perception depends on the detection of the 'correctness' of signs, whose forms are associated with an aesthetic property. Correctness and aesthetic properties can have positive or negative values. Evaluating the emotions that generate judgment and, implicitly, aesthetic perception, whether visual or auditory, involves the integration of three areas of interest: valence, arousal, and context control. In this context, the superior human cognitive processes, memory, interpretation, learning, the attribution of meanings, etc., help trigger the mechanism of anticipation and, no less importantly, the identification of error. This ability to locate a short circuit produced in a series of successive events is fundamental to the process of forming an aesthetic perception. Our main purpose in this research is to investigate the possible conditions under which aesthetic perception and its minimum content are generated by all these structures and, in particular, by interactions with forms that are not commonly considered aesthetic forms. In order to demonstrate the quantitative and qualitative importance of the categories of signs used to construct a code for reading a certain message, and to emphasize the importance of the order in which these indices are used, we have structured a mathematical analysis centered on the percentages of the signs used in a theater performance.

Keywords: semiology, aesthetics, theatre semiotics, theatre performance, structure, aesthetic perception

Procedia PDF Downloads 89
127 Creating Futures: Using Fictive Scripting Methods for Institutional Strategic Planning

Authors: Christine Winberg, James Garraway

Abstract:

Many key university documents, such as vision and mission statements and strategic plans, are aspirational and future-oriented. There is a wide range of future-oriented methods that are used in planning applications, ranging from mathematical modelling to expert opinions. Many of these methods have limitations, and planners using these tools might, for example, make the technical-rational assumption that their plans will unfold in a logical and inevitable fashion, thus underestimating the many complex forces that are at play in planning for an unknown future. This is the issue that this study addresses. The overall project aim was to assist a new university of technology in developing appropriate responses to its social responsibility, graduate employability and research missions in its strategic plan. The specific research question guiding the research activities and approach was: how might the use of innovative future-oriented planning tools enable or constrain a strategic planning process? The research objective was to engage collaborating groups in the use of an innovative tool to develop and assess future scenarios, for the purpose of developing deeper understandings of possible futures and their challenges. The scenario planning tool chosen was ‘fictive scripting’, an analytical technique derived from Technology Forecasting and Innovation Studies. Fictive scripts are future projections that also take into account the present shape of the world and current developments. The process thus began with a critical diagnosis of the present, highlighting its tensions and frictions. The collaborative groups then developed fictive scripts, each group producing a future scenario that foregrounded different institutional missions, their implications and possible consequences. The scripts were analyzed with a view to identifying their potential contribution to the university’s strategic planning exercise. The unfolding fictive scripts revealed a number of insights in terms of unexpected benefits, unexpected challenges, and unexpected consequences. These insights were not evident in previous strategic planning exercises. The contribution that this study offers is to show how better choices can be made and potential pitfalls avoided through a systematic foresight exercise. When universities develop strategic planning documents, they are looking into the future. In this paper it is argued that the use of appropriate tools for future-oriented exercises, can help planners to understand more fully what achieving desired outcomes might entail, what challenges might be encountered, and what unexpected consequences might ensue.

Keywords: fictive scripts, scenarios, strategic planning, technological forecasting

Procedia PDF Downloads 121
126 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement to maintain data integrity in laboratory operations is critical for regulatory compliance, and automation of procedures reduces the incidence of human error. Quality control laboratories located in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports be accurate and reliable. Zinc sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: the standardization of the 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopeia was used to create a process flow for ZnSO4 tablets, and for each step in the process the relevant formulae were input into two spreadsheets to automate the calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in the titrimetric procedures. Validations were conducted using five data sets of manually computed assay results, and the acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at the 95% confidence interval) were obtained from Student's t-test evaluation of the mean values for the manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by the use of the validated spreadsheet calculators in the titrimetric evaluation of ZnSO4 tablets. Human errors were minimized when calculation procedures were automated in quality control laboratories, and the assay procedure for the formulation was achieved in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
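A minimal sketch of the two automated calculation steps described above, assuming the 1:1 Zn:EDTA complexometric stoichiometry; the constants and sample inputs are illustrative, and the validated spreadsheets themselves follow the USP method.

```python
# Standardization and assay calculations for a Zn/EDTA complexometric titration.
ZN_MW = 65.38  # g/mol, zinc

def edta_molarity(mmol_standard_zn, titre_ml):
    """Standardization: EDTA molarity from titrating a zinc reference (1:1)."""
    return mmol_standard_zn / titre_ml        # mmol / mL = mol/L

def zn_content_mg(edta_m, titre_ml):
    """Assay: mg of Zn equivalent to the EDTA volume consumed (1:1)."""
    return edta_m * titre_ml * ZN_MW          # mmol * (mg/mmol)

M = edta_molarity(mmol_standard_zn=2.05, titre_ml=20.4)    # ~0.1 M check
mg_zn = zn_content_mg(M, titre_ml=19.8)
pct_label = mg_zn / 20.0 * 100.0   # vs a hypothetical 20 mg Zn label claim
print(f"EDTA = {M:.4f} M, Zn found = {mg_zn:.1f} mg ({pct_label:.1f}% of label)")
```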

Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets

Procedia PDF Downloads 169
125 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources

Authors: Mustafa Alhamdi

Abstract:

An industrial application for classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional and recurrent neural networks has shown significant improvements in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction method, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectral energy in a low-dimensional space; increasing the separation between classes in feature space improves the achievable classification accuracy. The nonlinear feature extraction performed by a neural network involves a variety of transformations and mathematical optimization, whereas principal component analysis depends on linear transformations to extract features and subsequently improve classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as the training dataset. The Fourier-transform step that extracts the frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4, and the readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, managed to improve the classification accuracy of the neural networks. The single-prediction approach to discriminating gamma and neutron events has shown high accuracy with deep learning. The paper's findings demonstrate that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep learning models by hyperparameter optimization enhanced the separation in the latent space and made it possible to extend the number of detected isotopes in the training database, while ensemble learning contributed significantly to improving the final prediction.
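A minimal sketch of the spectrogram preprocessing stage described above: a windowed short-time Fourier transform turns a simulated, noisy detector signal into time-frequency features for the classifier. The synthetic signal, sampling rate, and window parameters are illustrative assumptions.

```python
# Windowed spectrogram features from a synthetic noisy pulse train.
import numpy as np
from scipy.signal import spectrogram

fs = 1.0e6                                   # Hz, assumed sampling rate
t = np.arange(0, 0.01, 1.0 / fs)
pulses = np.random.poisson(0.002, t.size) * np.exp(-np.random.rand(t.size))
noise = np.random.normal(0.0, 0.05, t.size)  # readout electronic noise model
signal = pulses + noise

f, tt, Sxx = spectrogram(signal, fs=fs, window="hann",
                         nperseg=256, noverlap=128)  # Hann windowing function
features = np.log1p(Sxx)                     # compressed time-frequency features
print(features.shape)                        # (n_freqs, n_frames) -> classifier input
```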

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 150
124 Enhancement of Mass Transport and Separations of Species in a Electroosmotic Flow by Distinct Oscillatory Signals

Authors: Carlos Teodoro, Oscar Bautista

Abstract:

In this work, we theoretically analyze mass transport in a time-periodic electroosmotic flow through a parallel flat-plate microchannel under different periodic waveforms of the applied external electric field. The microchannel connects two reservoirs having different constant concentrations of an electro-neutral solute, and the zeta potential of the microchannel walls is assumed to be uniform. The governing equations that determine the mass transport in the microchannel are the Poisson-Boltzmann equation, the modified Navier-Stokes equations, for which the Debye-Hückel approximation is adopted (the zeta potential is less than 25 mV), and the species conservation equation. These equations are nondimensionalized, and four dimensionless parameters appear that control the mass transport phenomenon: an angular Reynolds number, the Schmidt and Péclet numbers, and an electrokinetic parameter representing the ratio of the half-height of the microchannel to the Debye length. To solve the mathematical model, the electric potential is first determined from the Poisson-Boltzmann equation, which allows the electric force to be determined for various periodic functions of the external electric field expressed as Fourier series. In particular, three different excitation waveforms of the external electric field are assumed: a) sawtooth, b) step, and c) a periodic irregular function. The periodic electric forces are substituted into the modified Navier-Stokes equations, and the hydrodynamic field is derived for each case of the electric force. From the resulting velocity fields, the species conservation equation is solved and the concentration fields are found. Numerical calculations were performed for several binary systems in which two dilute species are transported in the presence of a carrier. It is observed that there are angular frequencies of the imposed external electric signal at which the total mass transport of each species is the same, independent of the molecular diffusion coefficient. These frequencies, called crossover frequencies, are obtained graphically from the intersection when the total mass transport is plotted against the imposed frequency. The crossover frequencies differ depending on the Schmidt number, the electrokinetic parameter, the angular Reynolds number, and the type of signal of the external electric field. It is demonstrated that mass transport through the microchannel depends strongly on the modulation frequency of the particular applied alternating electric field. Possible extensions of the analysis to more complicated pulsation profiles are also outlined.
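A minimal sketch of the driving-signal construction described above: the sawtooth and step (square-wave) forcings are expanded as truncated Fourier series, the form in which each harmonic enters the momentum equation. The number of retained modes is an illustrative choice.

```python
# Truncated Fourier-series representations of the periodic driving signals.
import numpy as np

def sawtooth_series(t, omega, n_modes=20):
    """Unit sawtooth: (2/pi) * sum_k (-1)^(k+1) sin(k w t) / k."""
    k = np.arange(1, n_modes + 1)[:, None]
    return (2.0 / np.pi) * np.sum((-1.0) ** (k + 1) * np.sin(k * omega * t) / k,
                                  axis=0)

def square_series(t, omega, n_modes=20):
    """Unit square wave (step forcing): (4/pi) * sum over odd k of sin(k w t)/k."""
    k = 2 * np.arange(1, n_modes + 1)[:, None] - 1
    return (4.0 / np.pi) * np.sum(np.sin(k * omega * t) / k, axis=0)

t = np.linspace(0.0, 2.0 * np.pi, 500)
E_saw, E_step = sawtooth_series(t, 1.0), square_series(t, 1.0)
# Each harmonic drives one oscillatory EOF mode; the total transport is the sum
# of the modal contributions, which is what produces the crossover frequencies.
```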

Keywords: electroosmotic flow, mass transport, oscillatory flow, species separation

Procedia PDF Downloads 216
123 Magnetic Navigation in Underwater Networks

Authors: Kumar Divyendra

Abstract:

Underwater Sensor Networks (UWSNs) have wide applications in areas such as water quality monitoring and marine wildlife management. A typical UWSN system consists of a set of sensors deployed randomly underwater which communicate with each other using acoustic links; RF communication does not work underwater, and GPS is likewise unavailable. Additionally, Autonomous Underwater Vehicles (AUVs) are deployed to collect data from special nodes called Cluster Heads (CHs). These CHs aggregate data from their neighboring nodes and forward them to the AUVs using optical links when an AUV is in range, which reduces the number of hops covered by data packets and helps conserve energy. We consider the three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater; each attaches itself to the surface using a rod and can only move upwards or downwards using a pump-and-bladder mechanism. We use graph theory concepts to maximize the coverage volume while every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node finds its hop distance from every surface node; we treat these hop distances as coordinates and use them for AUV navigation. An AUV intending to move closer to a node with given coordinates moves hop by hop through the nodes that are closest to it in terms of these coordinates. In the absence of GPS, multiple different approaches, such as Inertial Navigation Systems (INS), Doppler Velocity Logs (DVL), and computer-vision-based navigation, have been proposed; these systems have their own drawbacks: INS accumulates error with time, and vision techniques require prior information about the environment. We propose a method that makes use of the earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume of the UWSN. The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal intensity (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the earth's magnetic field that provides the field values for geographical coordinates on Earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates, and we make use of this model within our work. We combine it with the hop-by-hop movement described earlier, so that the AUVs move in a sequence that trains the deep learning predictor as quickly and precisely as possible. We run simulations in MATLAB to demonstrate the effectiveness of our model with respect to other methods described in the literature.
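A minimal sketch of the hop-distance coordinate scheme described above: breadth-first search from each surface node assigns every sensor a vector of hop counts, and a vehicle moves greedily toward the neighbour closest, in hop space, to its target. The random topology below is an illustrative assumption.

```python
# Hop-count "coordinates" from surface landmarks, plus greedy hop-by-hop moves.
import random
from collections import deque

random.seed(1)
n, n_surface = 40, 4
adj = {i: set() for i in range(n)}
for i in range(n):                       # random sparse topology
    for j in random.sample(range(n), 3):
        if i != j:
            adj[i].add(j); adj[j].add(i)

def hops_from(src):
    """BFS hop distances from one surface landmark."""
    d = {src: 0}; q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1; q.append(v)
    return d

landmarks = list(range(n_surface))       # surface nodes act as landmarks
dist = [hops_from(s) for s in landmarks]
coord = {v: tuple(dist[s].get(v, n) for s in range(n_surface)) for v in range(n)}

def step_towards(current, target):
    """Greedy move: the neighbour minimizing L1 distance in hop space."""
    goal = coord[target]
    return min(adj[current],
               key=lambda v: sum(abs(a - b) for a, b in zip(coord[v], goal)))

print(coord[10], "->", step_towards(10, 35))
```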

Keywords: clustering, deep learning, network backbone, parallel computing

Procedia PDF Downloads 98
122 Characterization of Aerosol Particles in Ilorin, Nigeria: Ground-Based Measurement Approach

Authors: Razaq A. Olaitan, Ayansina Ayanlade

Abstract:

Understanding aerosol properties is a main goal of global research aimed at reducing the uncertainty that aerosol particles contribute to climate change trends and magnitudes. In order to identify aerosol particle types and optical properties, and the relationship between aerosol properties and particle concentration between 2019 and 2021, this study, conducted in Ilorin, Nigeria, examined data from a ground-based sun/sky scanning radiometer of the AErosol RObotic NETwork (AERONET). The AERONET version 2 algorithm was used to retrieve monthly aerosol optical depth and Angstrom exponent data, while the version 3 algorithm, an almucantar level 2 inversion, was employed to retrieve daily single scattering albedo and aerosol size distribution data. Excel 2016 was used to compute monthly, seasonal, and annual means. The distribution of different aerosol types was analyzed using scatterplots, the optical properties of the aerosol were investigated using the pertinent mathematical relations, and correlation statistics were employed to examine the relationships between particle concentration and aerosol properties. Based on the premise that aerosol characteristics must remain consistent in both magnitude and trend across time and space, the aerosol types identified between 2019 and 2021 were as follows: 29.22% urban-industrial (UI), 37.08% desert (D), 10.67% biomass burning (BB), and 23.03% urban mix (Um). The peak columnar aerosol loadings were observed in August of the study period and are attributed to convective wind systems, which frequently carry particles over long distances in the atmosphere. The study has shown that while coarse-mode particles dominate, fine particles are increasing in both seasonal and annual trends; these trends are linked to biomass burning and human activities in the city. The study found that the majority of particles are strongly absorbing black carbon, with the fine mode having a volume median radius of 0.08 to 0.12 µm. The investigation also revealed a positive correlation (r = 0.57) between changes in aerosol particle concentration and changes in aerosol properties. Human activity is increasing rapidly in Ilorin and is changing aerosol properties, indicating potential health risks from climate change and human influence on geological and environmental systems.
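The Angstrom exponent retrieved above follows the standard two-wavelength relation α = −ln(τ₁/τ₂)/ln(λ₁/λ₂); a minimal sketch with assumed AOD values (not Ilorin retrievals) is shown below.

```python
# Angstrom exponent from a two-wavelength aerosol optical depth (AOD) pair.
import numpy as np

lam1, lam2 = 440.0, 870.0      # nm, a common AERONET wavelength pair
tau1, tau2 = 0.85, 0.55        # assumed aerosol optical depths

alpha = -np.log(tau1 / tau2) / np.log(lam1 / lam2)
print(f"Angstrom exponent = {alpha:.2f}")
# Low alpha (below ~1) indicates coarse-mode dominance (e.g. desert dust);
# higher alpha indicates fine-mode particles such as biomass-burning smoke.
```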

Keywords: aerosol loading, aerosol types, health risks, optical properties

Procedia PDF Downloads 62
121 Investigating the Impacts on Cyclist Casualty Severity at Roundabouts: A UK Case Study

Authors: Nurten Akgun, Dilum Dissanayake, Neil Thorpe, Margaret C. Bell

Abstract:

Cycling has gained great attention for its comparable speeds, low cost, health benefits, and reduced environmental impact. The main challenge associated with cycling is providing safety for the people who choose to cycle as their main means of transport. From the road safety point of view, cyclists are considered vulnerable road users because they are at higher risk of serious casualty in the urban network, and more specifically at roundabouts. This research develops an enhanced mathematical model by including a broad spectrum of casualty-related variables. These variables were geometric design measures (approach number of lanes and entry path radius), speed limit, meteorological condition variables (light, weather, road surface), and socio-demographic characteristics (age and gender), as well as contributory factors. The contributory factors included driver-behaviour-related variables such as failing to look properly, sudden braking, a vehicle passing too close to a cyclist, overshooting the junction, failing to judge another person's path, restarting or moving off at the junction, a poor turn or manoeuvre, and disobeying give-way. Tyne and Wear in the UK was selected as the case study area. The cyclist casualty data were obtained from the UK STATS19 national dataset. The reference categories for the regression model were slight and serious cyclist casualties; therefore, binary logistic regression was applied. The binary logistic regression analysis showed that the approach number of lanes was statistically significant at the 95% confidence level: a higher number of approach lanes increased the probability of a severe cyclist casualty. In addition, sudden braking statistically significantly increased cyclist casualty severity at the 95% confidence level. It is concluded that cyclist casualty severity is strongly related to the approach number of lanes and to sudden braking. Further research should carry out an in-depth analysis of the connection between sudden braking and the approach number of lanes in order to investigate driver behaviour at approach locations. The output of this research will inform investment in measures to improve the safety of cyclists at roundabouts.
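A minimal sketch of the binary logistic regression setup described above, using statsmodels on synthetic stand-ins for the STATS19 records: severity (0 = slight, 1 = serious) is regressed on the approach number of lanes and a sudden-braking indicator, with assumed effect sizes.

```python
# Binary logistic regression of casualty severity on two predictors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
lanes = rng.integers(1, 4, n)               # approach number of lanes
braking = rng.integers(0, 2, n)             # sudden braking (contributory factor)
logit = -2.0 + 0.6 * lanes + 0.9 * braking  # assumed true effects
severe = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([lanes, braking]))
fit = sm.Logit(severe.astype(float), X).fit(disp=0)
print(fit.summary(xname=["const", "lanes", "sudden_braking"]))
print("odds ratios:", np.exp(fit.params[1:]))  # effect on odds of serious injury
```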

Keywords: binary logistic regression, casualty severity, cyclist safety, roundabout

Procedia PDF Downloads 175
120 Numerical Investigation of Solid Subcooling on a Low Melting Point Metal in Latent Thermal Energy Storage Systems Based on Flat Slab Configuration

Authors: Cleyton S. Stampa

Abstract:

This paper addresses, through a numerical approach, the prospects of using low melting point metals (LMPMs) as phase change materials (PCMs) in latent thermal energy storage (LTES) units. LMPMs are a new class of PCMs and one of the most promising alternatives for LTES, because they combine high thermal conductivity with a high heat of fusion per unit volume. The chosen type of LTES consists of several horizontal parallel slabs filled with PCM; the heat transfer fluid (HTF) circulates in laminar forced convection through the channel formed between each two consecutive slabs. The study deals with the LTES charging (heat-storing) process using pure gallium as the PCM, and it considers heat conduction in the solid phase during melting driven by natural convection in the melt. The transient heat transfer problem is analyzed in one arbitrary slab under the influence of the HTF. The mathematical model used to simulate the isothermal phase change is based on a volume-averaged enthalpy method, which is successfully verified by comparing its predictions with experimental data from works available in the pertinent literature. Regarding the convective heat transfer problem in the HTF, the flow is assumed to be thermally developing, whereas the velocity profile is already fully developed. The study aims to quantify the effect of solid subcooling on the melting rate through comparisons with the melting of a solid that starts at its fusion temperature. To better understand this effect in a metallic compound such as pure gallium, the study also evaluates, under the same conditions established for the gallium, the melting of commercial paraffin wax (an organic compound) and of calcium chloride hexahydrate (CaCl₂·6H₂O, an inorganic compound). The present work adopts the options established by several researchers in their parametric studies of this type of LTES as leading to high thermal efficiency. Concerning the geometric aspects, these are the gap of the channel formed by two consecutive slabs and the thickness and length of the slab; regarding the HTF, they are the type of fluid, the mass flow rate, and the inlet temperature.
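A minimal one-dimensional sketch of the volume-averaged enthalpy method named above, for conduction-driven melting of an initially subcooled slab (the convective enhancement in the melt is not represented). The gallium properties are representative handbook values; the grid, time step, and boundary temperatures are assumptions.

```python
# Explicit 1D enthalpy-method melting of a subcooled gallium slab.
import numpy as np

rho, c, k, L, Tm = 6000.0, 380.0, 32.0, 80.2e3, 29.8  # gallium (approx.)
nx, dx = 100, 1.0e-4                                  # 10 mm slab, 100 cells
T_wall, T_init = 45.0, 20.0                           # degC: subcooled start
dt = 0.4 * rho * c * dx**2 / k                        # explicit stability limit

h = np.full(nx, rho * c * T_init)                     # volumetric enthalpy, J/m^3

def temperature(h):
    """Invert the enthalpy-temperature relation for an isothermal phase change."""
    T = np.where(h < rho * c * Tm, h / (rho * c), Tm)
    return np.where(h > rho * c * Tm + rho * L, (h - rho * L) / (rho * c), T)

for _ in range(20000):
    T = temperature(h)
    T[0] = T_wall                                     # heated (HTF-side) boundary
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    h[1:-1] += dt * k * lap[1:-1]                     # conduction update of enthalpy

melt_frac = np.clip((h - rho * c * Tm) / (rho * L), 0.0, 1.0)
print(f"melted fraction of slab: {melt_frac.mean():.2f}")
```

Starting `T_init` below `Tm` reproduces the subcooled case; setting `T_init = Tm` gives the reference case melting from the fusion temperature, so the two runs can be compared directly.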

Keywords: flat slab, heat storing, pure metal, solid subcooling

Procedia PDF Downloads 141
119 Development and Application of an Intelligent Masonry Modulation in BIM Tools: Literature Review

Authors: Sara A. Ben Lashihar

Abstract:

The heritage building information modelling (HBIM) of historical masonry buildings has expanded lately to meet urgent needs for conservation and structural analysis. Masonry structures are unique features of ancient architecture worldwide, with special cultural, spiritual, and historical significance. However, there is a research gap regarding the reliability of the HBIM modeling process for these structures. The HBIM modeling process for masonry structures faces significant challenges due to the inherent complexity and uniqueness of their structural systems. Most of these processes are based on tracing point clouds and rarely draw on documents, archival records, or direct observation. The results of these techniques are highly abstracted models whose accuracy does not exceed LOD 200. Masonry assemblages, especially curved elements such as arches, vaults, and domes, are generally modeled with standard BIM components or in-place models, and brick textures are input graphically. Hence, future investigation is necessary to establish a methodology for automatically generating parametric masonry components, developed algorithmically according to mathematical and geometric accuracy and the validity of the survey data. The main aim of this paper is to provide a comprehensive review of the state of the art of the existing research on the HBIM modeling of masonry structural elements and of the latest approaches to achieving parametric models that have both visual fidelity and high geometric accuracy. The paper reviewed more than 800 articles, proceedings papers, and book chapters matching the keywords "HBIM and Masonry" from 2017 to 2021. The studies were downloaded from well-known, trusted bibliographic databases such as Web of Science, Scopus, Dimensions, and Lens. As a starting point, a scientometric analysis was carried out using the VOSviewer software, which extracts the main keywords in these studies to retrieve the relevant works and calculates the strength of the relationships between these keywords. Subsequently, an in-depth qualitative review followed for the studies with the highest frequency of occurrence and the strongest links to the topic, according to VOSviewer's results. The qualitative review focused on the latest approaches and the future directions proposed in these studies. The findings of this paper can serve as a valuable reference for researchers and BIM specialists seeking to make more accurate and reliable HBIM models of historic masonry buildings.

Keywords: HBIM, masonry, structure, modeling, automatic, approach, parametric

Procedia PDF Downloads 165
118 Measuring Systems Interoperability: A Focal Point for Standardized Assessment of Regional Disaster Resilience

Authors: Joel Thomas, Alexa Squirini

Abstract:

The key argument of this research is that every element of systems interoperability is an enabler of regional disaster resilience and arguably should become a focal point for standardized measurement of communities’ ability to work together. Few resilience research efforts have focused on developing and applying solutions that measurably improve communities’ ability to work together at a regional level, yet a majority of the most devastating and disruptive disasters are those with a regional impact. The key findings of the research include a unique theoretical, mathematical, and operational approach to tangibly and defensibly measuring and assessing the systems interoperability required to support crisis information management activities performed by governments, the private sector, and humanitarian organizations. One of the most effective ways for communities to measurably improve regional disaster resilience is through deliberately executed disaster preparedness activities. Developing interoperable crisis information management capabilities is a crosscutting preparedness activity that greatly affects a community’s readiness and ability to work together in times of crisis. Thus, improving communities’ human and technical posture to work together in advance of a crisis, with the ultimate goal of enabling information sharing to support coordination and the careful management of available resources, is a primary means by which communities may improve regional disaster resilience. The model describes how systems interoperability can be qualitatively and quantitatively assessed when characterized as five forms of capital: governance; standard operating procedures; technology; training and exercises; and usage. The measurement framework presented defines the relationships between systems interoperability, information sharing and safeguarding, operational coordination, community preparedness, and regional disaster resilience, and offers a means to implement real-world solutions and measure progress over the course of a multi-year program. The model is being developed and piloted in partnership with the U.S. Department of Homeland Security (DHS) Science and Technology Directorate (S&T) and the North Atlantic Treaty Organization (NATO) Advanced Regional Civil Emergency Coordination Pilot (ARCECP) with twenty-three organizations in Bosnia and Herzegovina, Croatia, Macedonia, and Montenegro. The intended effect of the model implementation is to enable communities to answer two key questions: 'Have we measurably improved crisis information management capabilities as a result of this effort?' and, 'As a result, are we more resilient?'
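Purely as an illustration of how a five-capital characterization might be rolled into a single quantitative indicator, here is a minimal weighted-composite sketch; the scales, scores, and equal weights are invented for the example and are not the framework's published parameters.

```python
# Illustrative only: a weighted composite over the five capitals the model
# names. The real framework's scales and weights are not given in this
# abstract, so every number below is a placeholder.
capitals = {            # each scored 0-100 for a given community (hypothetical)
    "governance": 70,
    "standard_operating_procedures": 55,
    "technology": 80,
    "training_and_exercises": 60,
    "usage": 45,
}
weights = {k: 0.2 for k in capitals}   # equal weighting assumed for the sketch

score = sum(weights[k] * v for k, v in capitals.items())
print(f"composite interoperability score: {score:.1f}/100")
```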

Keywords: disaster, interoperability, measurement, resilience

Procedia PDF Downloads 143
117 Modeling Standpipe Pressure Using Multivariable Regression Analysis by Combining Drilling Parameters and a Herschel-Bulkley Model

Authors: Seydou Sinde

Abstract:

The aim of this paper is to formulate mathematical expressions that can be used to estimate the standpipe pressure (SPP). The developed formulas take into account the main factors that, directly or indirectly, affect the behavior of SPP values; fluid rheology and well hydraulics are among these essential factors. Mud plastic viscosity, yield point, flow power, consistency index, flow rate, and drillstring and annular geometries are represented by the frictional pressure (Pf), which is one of the input independent parameters and is calculated, in this paper, using the Herschel-Bulkley rheological model. Other input independent parameters include the rate of penetration (ROP), applied load or weight on bit (WOB), bit revolutions per minute (RPM), bit torque (TRQ), and hole inclination and direction coupled in the hole curvature or dogleg (DL). The repeating-variables technique and the Buckingham Pi theorem are used to reduce the input independent parameters to the dimensionless revolutions per minute (RPMd), the dimensionless torque (TRQd), and the dogleg, which is already dimensionless in radians. Multivariable linear and polynomial regression using PTC Mathcad Prime 4.0 is applied to determine the relationships between the dependent parameter, SPP, and the three dimensionless groups. Three models proved sufficiently satisfactory to estimate the standpipe pressure: multivariable linear regression model 1, containing three regression coefficients, for vertical wells; multivariable linear regression model 2, containing four regression coefficients, for deviated wells; and a multivariable polynomial quadratic regression model, containing six regression coefficients, for both vertical and deviated wells. Although linear regression model 2 (four coefficients) is relatively more complex and contains an additional term compared to linear regression model 1 (three coefficients), the former did not add significant improvement over the latter except for some minor values. Thus, the effect of the hole curvature or dogleg is insignificant and can be omitted from the input independent parameters without a significant loss of accuracy. The polynomial quadratic regression model is considered the most accurate model due to its relatively higher accuracy in most cases. Data from nine Middle East wells were used to run the developed models; all of them provided satisfactory results, with the multivariable polynomial quadratic regression model giving the best and most accurate ones. These models are useful not only for monitoring and predicting SPP values with accuracy but also for early control and checking of well-hydraulics integrity, and for taking corrective action should any unexpected problems appear, such as pipe washouts, jet plugging, excessive mud losses, fluid gains, kicks, etc.
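The sketch below illustrates, under stated assumptions, the two ingredients the abstract combines: a Herschel-Bulkley stress law (tau = tau_y + K*gamma_dot^n, with placeholder constants) and a six-coefficient quadratic regression of SPP on the dimensionless groups, fitted here by ordinary least squares on synthetic stand-in data rather than the Mathcad workflow and field data of the study.

```python
# Sketch under assumptions: Herschel-Bulkley stress law with illustrative
# constants, plus a six-coefficient quadratic fit of SPP against the
# dimensionless groups, on synthetic stand-in data.
import numpy as np

def herschel_bulkley(gamma_dot, tau_y=5.0, K=0.8, n=0.7):
    """Shear stress [Pa] at shear rate gamma_dot [1/s] (placeholder constants)."""
    return tau_y + K * gamma_dot**n

# Synthetic stand-ins for the dimensionless groups and the measured SPP.
# The dogleg (dl) is generated but left out of the fit, mirroring the
# paper's finding that its effect is insignificant.
rng = np.random.default_rng(0)
rpm_d, trq_d, dl = rng.random((3, 50))
spp = 120 + 30 * rpm_d + 15 * trq_d + 5 * rpm_d**2 + rng.normal(0, 1, 50)

# Quadratic regression: SPP ~ 1, RPMd, TRQd, RPMd^2, TRQd^2, RPMd*TRQd
X = np.column_stack([np.ones(50), rpm_d, trq_d, rpm_d**2, trq_d**2, rpm_d * trq_d])
coeffs, *_ = np.linalg.lstsq(X, spp, rcond=None)
print("regression coefficients:", np.round(coeffs, 2))
```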

Keywords: standpipe, pressure, hydraulics, nondimensionalization, parameters, regression

Procedia PDF Downloads 84
116 Computational Analysis of Thermal Degradation in Wind Turbine Spars' Equipotential Bonding Subjected to Lightning Strikes

Authors: Antonio A. M. Laudani, Igor O. Golosnoy, Ole T. Thomsen

Abstract:

Rotor blades of large, modern wind turbines are highly susceptible to downward lightning strikes, as well as to triggering upward lightning; consequently, it is necessary to equip them with an effective lightning protection system (LPS) in order to avoid any damage. The performance of existing LPSs is affected by carbon fibre reinforced polymer (CFRP) structures, which lead to lightning-induced damage in the blades, e.g. via electrical sparks. A solution to prevent internal arcing would be to electrically bond the LPS and the composite structures so that they share the same electric potential. Nevertheless, elevated temperatures arise at the joint interfaces because of high contact resistance, which melts and vaporises some of the epoxy resin matrix around the bonding. The high-pressure gases produced open up the bonding and can ignite thermal sparks. The objective of this paper is to predict the current density distribution and the temperature field in the adhesive joint cross-section, in order to check whether the resin pyrolysis temperature is reached and any damage is to be expected. The finite element method has been employed to solve both the current and heat transfer problems, which are considered weakly coupled. The mathematical model for the electric current includes the Maxwell-Ampère equation for the induced electric field, solved together with current conservation, while the thermal field is found from the heat diffusion equation. In this way, the current sub-model calculates the Joule heat release for a chosen bonding configuration, whereas the thermal analysis determines threshold values of voltage and current density that must not be exceeded in order to keep the temperature across the joint below the pyrolysis temperature, thereby preventing outgassing. In addition, it provides an indication of the minimal number of bonding points. It is worth mentioning that the numerical procedures presented in this study can be tailored and applied to joint types other than adhesive ones for wind turbine blades; for instance, they can be applied to the lightning protection of aerospace bolted joints. They can even be customized to predict the electromagnetic response under lightning strikes of other wind turbine systems, such as nacelle and hub components.
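As a rough, reduced-order illustration of the weakly coupled scheme just described, the sketch below releases the Joule heat of an assumed contact resistance at the joint plane of a 1D explicit heat-diffusion model and checks the peak temperature against an assumed pyrolysis threshold. All geometry, material properties, current density, and the threshold are placeholders, not the paper's FEM setup.

```python
# Reduced-order 1D sketch of the weakly coupled approach: Joule heat from an
# assumed contact resistance is deposited at the joint plane and spread by
# explicit heat diffusion. All numbers are illustrative placeholders.
import numpy as np

N, dx = 100, 1e-4                    # grid cells, spacing [m]
k, rho, cp = 0.5, 1500.0, 1200.0     # epoxy-like properties (assumed)
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha             # stable explicit time step

J = 1.0e5                            # current density through joint [A/m^2] (assumed)
R_c = 8.0e-6                         # specific contact resistance [ohm*m^2] (assumed)
q_joint = R_c * J**2                 # Joule heat flux at the interface [W/m^2]

T = np.full(N, 293.0)
mid = N // 2
for _ in range(5000):
    lap = np.zeros(N)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T += dt * alpha * lap
    T[mid] += dt * q_joint / (rho * cp * dx)   # interface source, one-cell smearing
    T[0] = T[-1] = 293.0                       # far-field boundaries held cold

T_pyrolysis = 573.0                            # ~300 C threshold (assumed)
print(f"peak joint temperature: {T.max():.0f} K, "
      f"pyrolysis exceeded: {T.max() > T_pyrolysis}")
```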

Keywords: carbon fibre reinforced polymer, equipotential bonding, finite element method, FEM, lightning protection system, LPS, wind turbine blades

Procedia PDF Downloads 164
115 Application of Multilinear Regression Analysis for Prediction of Synthetic Shear Wave Velocity Logs in Upper Assam Basin

Authors: Triveni Gogoi, Rima Chatterjee

Abstract:

Shear wave velocity (Vs) estimation is an important approach in the seismic exploration and characterization of a hydrocarbon reservoir. Various methods exist for predicting S-wave velocity when a recorded S-wave log is not available, but all of them are empirical mathematical models. Shear wave velocity is most commonly estimated from P-wave velocity by applying Castagna’s equation, whose constants vary with lithology and geological setting. In this study, multiple regression analysis has been used for the estimation of S-wave velocity. The EMERGE module of the Hampson-Russell software has been used for the generation of the S-wave log. Both single-attribute and multi-attribute analyses have been carried out for the generation of synthetic S-wave logs in the Upper Assam basin. The Upper Assam basin, situated in northeastern India, is one of the most important petroleum provinces of the country. The present study was carried out using four wells of the study area; S-wave velocity was available for three of them. The main objective of the present study is the prediction of shear wave velocity for wells where S-wave velocity information is not available. The three wells having S-wave velocity were first used to test the reliability of the method, and the generated S-wave log was compared with the actual one. Single-attribute analysis was carried out for these three wells within the depth range 1700-2100 m, which corresponds to the Barail group of Oligocene age. The Barail Group is the main target zone in this study and the primary producing reservoir of the basin. A system-generated list of attributes with varying degrees of correlation was produced, and the attribute with the highest correlation was selected for the single-attribute analysis. A crossplot between the attributes shows the deviation of points from the line of best fit. The final result of the analysis was compared with the available S-wave log and shows a good visual fit, with a correlation of 72%. Next, multi-attribute analysis was carried out on the same data using all the wells within the same analysis window. A high correlation of 85% was observed between the output log from the analysis and the recorded S-wave. The close fit between the synthetic and recorded S-wave logs validates the reliability of the method. For further authentication, the generated S-wave data from the wells were tied to the seismic and correlated. A synthetic shear wave log was then generated for well M2, where no S-wave log is available, and it shows a good correlation with the seismic. Neutron porosity, density, acoustic impedance (AI), and P-wave velocity proved to be the most significant variables in this statistical method for S-wave generation. The multilinear regression method can thus be considered a reliable technique for the generation of shear wave velocity logs in this study.
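For orientation, the sketch below contrasts the two estimation routes the abstract mentions: Castagna's mudrock line from Vp alone (the constants 0.862 and -1.172 km/s are the standard published mudrock values; they change with lithology) and a multilinear least-squares fit on several logs. The input arrays are synthetic placeholders, not basin data, and the fit uses plain NumPy rather than the EMERGE module.

```python
# Two routes to Vs: Castagna's mudrock line from Vp alone, versus a
# multilinear regression on several logs. Synthetic stand-in data only.
import numpy as np

def castagna_vs(vp_kms, a=0.862, b=-1.172):
    """Castagna's mudrock line: Vs = a*Vp + b (km/s); constants vary by lithology."""
    return a * vp_kms + b

rng = np.random.default_rng(1)
n = 200
vp = 3.0 + rng.random(n)              # P-wave velocity [km/s]
phi_n = 0.1 + 0.2 * rng.random(n)     # neutron porosity [fraction]
rhob = 2.2 + 0.4 * rng.random(n)      # bulk density [g/cc]
vs_true = 0.86 * vp - 1.1 - 0.5 * phi_n + 0.1 * rhob + rng.normal(0, 0.02, n)

# Multilinear regression: Vs ~ 1 + Vp + neutron porosity + density
X = np.column_stack([np.ones(n), vp, phi_n, rhob])
beta, *_ = np.linalg.lstsq(X, vs_true, rcond=None)
vs_pred = X @ beta
corr = np.corrcoef(vs_true, vs_pred)[0, 1]
print(f"multilinear fit correlation: {corr:.2f}")
```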

Keywords: Castagna's equation, multi linear regression, multi attribute analysis, shear wave logs

Procedia PDF Downloads 229
114 Research on the Optimization of Satellite Mission Scheduling

Authors: Pin-Ling Yin, Dung-Ying Lin

Abstract:

Satellites play an important role in our daily lives, from monitoring the Earth's environment and providing real-time disaster imagery to predicting extreme weather events. As technology advances and demands increase, the tasks undertaken by satellites have become increasingly complex, with more stringent resource management requirements. A common challenge in satellite mission scheduling is the limited availability of resources, including onboard memory, ground station accessibility, and satellite power. In this context, efficiently scheduling and managing increasingly complex satellite missions under constrained resources has become a critical issue. The core of Satellite Onboard Activity Planning (SOAP) lies in optimizing the scheduling of the received tasks, arranging them on a timeline to form an executable onboard mission plan. This study develops an optimization model that considers the various constraints involved in satellite mission scheduling, such as non-overlapping execution periods for certain types of tasks, the requirement that tasks fall within the contact range of specified types of ground stations during their execution, onboard memory capacity limits, and collaborative constraints between different types of tasks. Specifically, this research constructs a mixed-integer programming model and solves it with a commercial optimization package. However, as the problem size grows, the problem becomes more difficult to solve; a heuristic algorithm has therefore been developed to address the limitations of the commercial optimization package at larger scales. The goal is to plan satellite missions effectively, maximizing the total number of executable tasks while considering task priorities and ensuring that tasks are completed as early as possible without violating feasibility constraints. To verify the feasibility and effectiveness of the algorithm, test instances of various sizes were generated, and the results were validated through feedback from on-site users and compared against solutions obtained from the commercial optimization package. Numerical results show that the algorithm performs well under various scenarios, consistently meeting user requirements. The satellite mission scheduling algorithm proposed in this study can be flexibly extended to different types of satellite mission demands, achieving optimal resource allocation and enhancing the efficiency and effectiveness of satellite mission execution.
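A hypothetical, much-reduced sketch of a priority-greedy heuristic of the kind the abstract describes is given below: tasks are taken in priority order, slid forward to the earliest non-conflicting start, and rejected if they cannot finish inside their ground-station contact window. The task set, the single-resource conflict rule, and all numbers are invented for illustration.

```python
# Toy priority-greedy scheduler: highest priority first, earliest feasible
# start, no two accepted tasks may overlap, and each task must finish inside
# its contact window. Illustrative stand-in for the study's heuristic.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int
    duration: int           # minutes
    window: tuple           # (earliest start, latest end) from station contacts

def schedule(tasks):
    busy = []               # accepted (start, end) intervals
    plan = []
    for t in sorted(tasks, key=lambda t: -t.priority):
        start = t.window[0]
        # slide the task forward past any conflicting accepted interval
        for s, e in sorted(busy):
            if start < e and start + t.duration > s:
                start = e
        if start + t.duration <= t.window[1]:   # still fits the contact window?
            busy.append((start, start + t.duration))
            plan.append((t.name, start))
    return plan

tasks = [Task("imaging", 3, 10, (0, 60)),
         Task("downlink", 2, 15, (5, 40)),
         Task("calibration", 1, 20, (0, 90))]
print(schedule(tasks))   # [('imaging', 0), ('downlink', 10), ('calibration', 25)]
```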

Keywords: mixed-integer programming, meta-heuristics, optimization, resource management, satellite mission scheduling

Procedia PDF Downloads 25
113 Risk and Emotion: Measuring the Effect of Emotion and Other Visceral Factors on Decision Making under Risk

Authors: Michael Mihalicz, Aziz Guergachi

Abstract:

Background: The science of modelling choice preferences has evolved over centuries into an interdisciplinary field contributing to several branches of microeconomics and mathematical psychology. Early theories in decision science rested on the logic of rationality, but as the field and related disciplines matured, descriptive theories emerged that could explain systematic violations of rationality through the cognitive mechanisms underlying the thought processes that guide human behaviour. Cognitive limitations are not, however, solely responsible for systematic deviations from rationality, and many researchers are now exploring visceral factors as the more dominant drivers. The current study builds on the existing literature by exploring sleep deprivation, thermal comfort, stress, hunger, fear, anger, and sadness as moderators of the three distinct elements that define individual risk preference under Cumulative Prospect Theory. Methodology: This study is designed to compare the risk preferences of participants experiencing an elevated affective or visceral state to those in a neutral state, using nonparametric elicitation methods across three domains. Two experiments will be conducted simultaneously using different methodologies. The first will sample visceral states and risk preferences at random times over a two-week period by prompting participants to complete an online survey remotely. In each round of questions, participants will be asked to self-assess their current state using Visual Analogue Scales before answering a series of lottery-style elicitation questions. The second experiment will be conducted in a laboratory setting using psychological primes to induce a desired state. In this experiment, emotional states will be recorded using emotion analytics and used as a basis for comparison between the two methods. Significance: The expected results include a series of measurable and systematic effects on the subjective interpretation of gamble attributes, and evidence supporting the proposition that a portion of the variability in human choice preferences unaccounted for by cognitive limitations can be explained by interacting visceral states. Significant results will promote awareness of the subconscious effect that emotions and other drive states have on the way people process and interpret information, and can guide more effective decision making by informing decision-makers of the sources and consequences of irrational behaviour.
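The three elements referred to are commonly parameterized, in the Tversky-Kahneman formulation, as value-function curvature, loss aversion, and probability weighting; the sketch below evaluates a mixed 50/50 gamble under that standard parametric form with the 1992 median estimates. This is background illustration only: the study itself elicits preferences nonparametrically, and using one weighting parameter for both gains and losses is a simplification of the full cumulative form.

```python
# Standard Tversky-Kahneman (1992) parametric forms for the three elements of
# risk preference under CPT: curvature (alpha, beta), loss aversion (lam),
# and probability weighting (gamma). Same gamma used for gains and losses
# here as a simplification.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """S-shaped value function: concave for gains, convex and steeper for losses."""
    return x**alpha if x >= 0 else -lam * (-x)**beta

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small p, underweights large p."""
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

# Subjective value of a 50/50 gamble: win 100 or lose 100
cpt = weight(0.5) * value(100.0) + weight(0.5) * value(-100.0)
print(f"CPT value of the gamble: {cpt:.1f}")   # negative: loss aversion rejects it
```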

Keywords: decision making, emotions, prospect theory, visceral factors

Procedia PDF Downloads 149
112 An Initiative for Improving Pre-Service Teachers’ Pedagogical Content Knowledge in Mathematics

Authors: Taik Kim

Abstract:

Mathematics anxiety has important consequences for teacher practices that influence students’ attitudes and achievement. Elementary prospective teachers have the highest levels of mathematics anxiety in comparison with other college majors. In his teaching practice, the researcher developed a highly successful teaching model to reduce pre-service teachers’ math anxiety and simultaneously improve their pedagogical math content knowledge. Eighty-one participants took Mathematics for Elementary Teachers I and II from 2015 to 2018. As the analysis of the data indicated, the prospective elementary teachers’ math anxiety was greatly reduced while their pedagogical math knowledge improved. The U.S. faces a critical shortage of well-qualified educators. To solve the issue, it is essential to engage students in a long-term commitment to shaping better teachers, who will, in turn, produce K-12 students who are better prepared for college. It is imperative that new instructional strategies be implemented to improve student learning and address declining interest, poor preparedness, a lack of diverse representation, and low persistence of students in mathematics. Many four-year college students take math courses from the math department in the College of Arts & Sciences and then take methodology courses from the College of Education. Before taking pedagogy, many students struggle in learning mathematics and lose their confidence. Since the content courses focus on college-level math instead of the pre-service teachers’ future teaching area, namely elementary math, the students have no chance to improve their teaching skills on the topics they will eventually teach. The researcher, holding a joint appointment in math and math education, has been involved in teaching both content and pedagogy. As the results indicated, participants were able to learn math content and, at the same time, how to teach it. In conclusion, the new initiative of using several teaching strategies was able not only to increase elementary prospective teachers’ mathematical skills and knowledge but also to improve their attitude toward mathematics. We need an innovative teaching strategy that implements evidence-based tactics in redesigning education and math courses to improve pre-service teachers’ math skills, and that can improve students’ attitude toward math as well as their logical and reasoning skills. Implementation of these best practices in the local school district is particularly important because K-8 teachers are generally not familiar with lab-based instruction; at the same time, local school teachers will learn a new way to teach math. This study can serve as a vital teacher education model for expansion statewide and nationwide. In summary, this study yields invaluable information on how to improve teacher education at the elementary level and, eventually, how to enhance K-8 students’ math achievement.

Keywords: quality of education and improvement method, teacher education, innovative teaching and learning methodologies, math education

Procedia PDF Downloads 104
111 The Phenomenon of the Seawater Intrusion with Fresh Groundwater in the Arab Region

Authors: Kassem Natouf, Ihab Jnad

Abstract:

In coastal aquifers, the interface between fresh groundwater and salty seawater may shift inland, reaching coastal wells and increasing the salinity of the water they pump, putting them out of service. Many Arab coastal sites suffer from this phenomenon due to increased pumping of coastal groundwater. This research aims to prepare a comprehensive study describing the common characteristics of seawater intrusion into coastal freshwater aquifers in the Arab region, its general and specific causes, and its negative effects, in a way that contributes to overcoming this phenomenon and to exchanging expertise between Arab countries in studying and analyzing it. This research also aims to build geographical and relational databases for the data, information, and studies available in Arab countries about seawater intrusion, so as to provide the data and information necessary for managing groundwater resources on Arab coasts, including studying the effects of climate change on these resources and helping decision-makers develop executive programs to overcome the intrusion. The research relied on a methodology of analysis and comparison: the available information and data about the phenomenon in the Arab region were collected, then studied and analyzed, and the causes of the phenomenon in each case, its consequences, and solutions for prevention were stated. Finally, the different cases were compared, their common causes, results, and treatment methods were deduced, and a technical report summarizing the findings was prepared. To overcome the phenomenon of seawater intrusion into fresh groundwater: (1) It is necessary to develop efforts to monitor the quantity and quality of groundwater on the coasts and to develop mathematical models to predict the impact of climate change, sea level rise, and human activities on coastal groundwater. (2) Over-pumping of coastal aquifers is an important cause of seawater intrusion; to mitigate this problem, Arab countries should reduce groundwater pumping and promote rainwater harvesting, surface irrigation, and water recycling practices. (3) Artificial recharge of coastal groundwater with various forms of water, whether fresh or treated, is a promising technology to mitigate the effects of seawater intrusion.
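A classical first-order relation often used in the kind of monitoring and modeling recommended above is the Ghyben-Herzberg approximation, which places the fresh/salt interface at a depth below sea level of about forty times the freshwater head above it; the sketch below shows how sharply a pumping-induced drop in head raises the interface. The head values are illustrative.

```python
# Ghyben-Herzberg approximation: the interface depth below sea level is
# rho_f / (rho_s - rho_f) times the freshwater head above sea level,
# roughly a factor of 40 for typical densities.
RHO_FRESH, RHO_SEA = 1000.0, 1025.0   # kg/m^3

def interface_depth(head_m):
    """Depth of the fresh/salt interface below sea level [m] for head h [m]."""
    return RHO_FRESH / (RHO_SEA - RHO_FRESH) * head_m   # factor = 40

for h in (1.0, 0.5, 0.25):   # head before and after hypothetical drawdown
    print(f"head {h:.2f} m -> interface at {interface_depth(h):.0f} m depth")
```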

Keywords: coastal aquifers, seawater intrusion, fresh groundwater, salinity increase, Arab region, groundwater management, climate change effects, sustainable water practices, over-pumping, artificial recharge, monitoring and modeling, data databases, groundwater resources, negative effects, comparative analysis, technical report, water scarcity, groundwater quality, decision-making, environmental impact, agricultural practices

Procedia PDF Downloads 34
110 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in the planning, scheduling, and control of emergency response operations, especially the rescue and evacuation of people from the dangerous zone of marine accidents, has increased dramatically. Until the survivors (called ‘targets’) are found and saved, losses or damage may accrue to an extent that depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost, under restrictions on the number of people to be saved within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMRs), e.g., unmanned aerial vehicles and remotely operated robo-ships, have no operator on board, being guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments the AMR’s search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered (‘overlooked’) by the AMR’s sensors even though the AMR is in its close neighborhood, and (ii) a 'false-positive' detection error, also known as a ‘false alarm’, in which a clean place or area is wrongly classified by the AMR’s sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding locally optimal strategies. A specificity of the considered operations research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before any next location is selected. We provide a fast approximation algorithm for finding the AMR route that adopts a greedy search strategy: in each step, the on-board computer computes a current search effectiveness value for each location in the zone and sequentially searches for the location with the highest search effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
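A minimal sketch of such a greedy, history-dependent strategy is given below under simple assumptions: the false-negative rate is known per cell, each unsuccessful look triggers a Bayesian update of the target-location probabilities (this is where the search history enters), and effectiveness is taken as expected detection per unit cost. The grid, probabilities, and costs are illustrative placeholders.

```python
# Greedy search with a Bayesian update after each unsuccessful look: the cell
# with the highest expected detection per unit cost is searched next, and its
# posterior probability is discounted by the overlook (false-negative) rate.
import numpy as np

p = np.array([0.1, 0.4, 0.3, 0.2])        # prior target-location probabilities
detect = np.array([0.8, 0.6, 0.7, 0.9])   # P(detect | target there) = 1 - overlook rate
cost = np.array([1.0, 1.0, 2.0, 1.5])     # search cost per look

for step in range(5):
    effectiveness = p * detect / cost     # expected detection per unit cost
    i = int(np.argmax(effectiveness))
    print(f"step {step}: search cell {i}, P(success) = {p[i] * detect[i]:.3f}")
    # Bayes update after an unsuccessful look in cell i
    p[i] *= (1 - detect[i])
    p /= p.sum()
```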

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 169