Search results for: computational error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3700

370 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour

Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling

Abstract:

Digital Twin (DT) technology emerged in the early 21st century. A DT is defined as the digital representation of a living or non-living physical asset. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline and used with a reduced basis (RB) model order reduction technique to construct a low-dimensional space that speeds up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were then used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy for predicting the damage severity, while deep learning algorithms were found to be useful for estimating the location of damage of small severity.
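
As a minimal illustration of the offline/online split described above (not the authors' truss model; the matrix, snapshots, and tolerance below are invented for the sketch), a proper orthogonal decomposition basis can be built offline from FE snapshot solutions and reused for fast online solves:

```python
# Minimal offline/online reduced-basis (RB) sketch for a linear FE system
# K u = f. Illustrative only: the paper's truss model, snapshots and
# parameters are not specified here, so all values are hypothetical.
import numpy as np

def offline_build_basis(snapshots, tol=1e-8):
    """POD: compress FE snapshot solutions into a low-dimensional basis V."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 1.0 - tol) + 1
    return U[:, :r]                      # n_dof x r, with r << n_dof

def online_solve(K, f, V):
    """Galerkin projection: solve the small r x r system instead of n x n."""
    Kr = V.T @ K @ V                     # reduced stiffness
    fr = V.T @ f                         # reduced load
    return V @ np.linalg.solve(Kr, fr)   # lift back to full space

# Toy usage: 200-DOF system, basis built from 20 precomputed solutions
n = 200
K = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
snapshots = np.linalg.solve(K, np.random.rand(n, 20))
V = offline_build_basis(snapshots)
u_rb = online_solve(K, np.random.rand(n), V)
```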

Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model

Procedia PDF Downloads 76
369 Inflation and Deflation of Aircraft Tires with an Intelligent Tire Pressure Regulation System

Authors: Masoud Mirzaee, Ghobad Behzadi Pour

Abstract:

An aircraft tire is designed to tolerate extremely heavy loads for a short duration. The number of tires increases with the weight of the aircraft, as the load needs to be distributed more evenly. Aircraft tires generally operate at high pressure, up to 200 psi (14 bar; 1,400 kPa) for airliners and higher for business jets. Tire assemblies for most aircraft categories carry a recommended charge of compressed nitrogen that supports the aircraft’s weight on the ground, provides a mechanism for controlling the aircraft during taxi, takeoff, and landing, and supplies traction for braking. Accurate tire pressure is a key factor that enables tire assemblies to perform reliably under high static and dynamic loads. With respect to ambient temperature change, when the temperatures at the origin and destination airports differ, tire pressure should be adjusted and the tire inflated to the specified operating pressure at the colder airport. This adjustment, which may exceed the normal over-inflation limit of 5 percent at constant ambient temperature, is required so that the inflation pressure remains sufficient to support the load of a specified aircraft configuration. Without it, a tire assembly would be significantly under- or over-inflated at the destination. Because of human error in the aviation industry, exorbitant costs are imposed on airlines for consumable parts such as aircraft tires. An intelligent system that adjusts aircraft tire pressure based on weight, load, temperature, and the weather conditions of the origin and destination airports could significantly reduce aircraft maintenance costs and fuel consumption and mitigate the related air pollution. An intelligent tire pressure regulation system (ITPRS) contains a processing computer, a nitrogen bottle at 1,800 psi, and distribution lines. The nitrogen bottle’s inlet and outlet valves are installed in the main landing gear area and are connected through nitrogen lines to the main and nose wheel assemblies. Nitrogen is controlled and monitored by the computer, which adjusts it according to calculations based on the received parameters, including the temperatures of the origin and destination airports, the weight of cargo and passengers, fuel quantity, and wind direction. Correct tire inflation and deflation are essential to ensure that tires can withstand the centrifugal forces and heat of normal operations, with an adequate margin of safety for unusual operating conditions such as rejected takeoffs and hard landings. ITPRS will increase the performance of the aircraft in all phases of takeoff, landing, and taxi. Moreover, this system will reduce human errors, material consumption, and the stresses imposed on the aircraft body.
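
The temperature adjustment described above follows from the ideal gas law at constant volume (Gay-Lussac's law). A short sketch, with invented airport temperatures and the 200 psi figure taken from the abstract:

```python
# Gay-Lussac's law at constant volume: P2/P1 = T2/T1 (absolute temperatures).
# Sketch of the pressure adjustment the abstract describes; the airport
# temperatures are invented for illustration.
def pressure_at_destination(p_origin_psi, t_origin_c, t_dest_c):
    t1 = t_origin_c + 273.15
    t2 = t_dest_c + 273.15
    return p_origin_psi * t2 / t1

p_set = 200.0                          # psi, rated operating pressure
p_dest = pressure_at_destination(p_set, t_origin_c=30.0, t_dest_c=-5.0)
shortfall = p_set - p_dest             # nitrogen the ITPRS would add at the colder airport
print(f"pressure at destination: {p_dest:.1f} psi (shortfall {shortfall:.1f} psi)")
```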

Keywords: avionic system, improve efficiency, ITPRS, human error, reduced cost, tire pressure

Procedia PDF Downloads 216
368 Aero-Hydrodynamic Model for a Floating Offshore Wind Turbine

Authors: Beatrice Fenu, Francesco Niosi, Giovanni Bracco, Giuliana Mattiazzo

Abstract:

In recent years, Europe has seen great development of renewable energy, with a view to reducing polluting emissions and transitioning to cleaner forms of energy, as established by the European Green Deal. Wind energy has come to cover almost 15% of European electricity needs and is constantly growing. In particular, far-offshore wind turbines are attractive for exploiting high-speed winds and high wind availability. When offshore wind turbine siting combines resource analysis, bathymetry, environmental regulations, and maritime traffic, and when the influence of waves on platform stability is considered, the hydrodynamic characteristics of the platform become fundamental for evaluating the performance of the turbine, especially for the pitch motion. Many platform geometries have been studied and used in the last few years. Their concepts are based on different considerations such as hydrostatic stability, material, cost, and mooring system. A new method to reach a high-performance substructure for different kinds of wind turbines is proposed. The system comprising substructure, mooring, and wind turbine is implemented in OrcaFlex, and the simulations are performed considering several sea states and wind speeds. An external dynamic library is implemented for the turbine control system. The study compares different substructures and the new concepts developed. In order to validate the model, CFD simulations will be performed by means of STAR-CCM+, and a comparison between rigid and elastic bodies for the blades and tower will be carried out. A global model will be built to predict the productivity of the floating turbine according to siting, resources, substructure, and mooring. The Levelized Cost of Electricity (LCOE) of the system is estimated, giving a complete overview of the advantages of floating offshore wind turbine plants. Different case studies will be presented.
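
The LCOE mentioned above is conventionally computed by annualising capital cost with a capital recovery factor; a sketch with purely illustrative cost and yield figures (not values from the study):

```python
# Standard LCOE with a capital recovery factor (CRF); all numbers invented.
def lcoe(capex, opex_per_year, aep_mwh, rate=0.08, years=25):
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return (capex * crf + opex_per_year) / aep_mwh   # currency per MWh

print(f"LCOE: {lcoe(capex=25e6, opex_per_year=8e5, aep_mwh=35e3):.1f} EUR/MWh")
```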

Keywords: aero-hydrodynamic model, computational fluid dynamics, floating offshore wind, siting, verification, and validation

Procedia PDF Downloads 185
367 Designed Purine Molecules and In-Silico Evaluation of Aurora Kinase Inhibition in Breast Cancer

Authors: Pooja Kumari, Anandkumar Tengli

Abstract:

The aurora kinase enzyme is a protein whose overexpression leads to metastasis, making it extremely important for women’s health in terms of prevention and treatment. In developing a targeted approach, the aim of this work is to design purine molecules that inhibit the aurora kinase enzyme and help suppress breast cancer. Purine molecules attached to an amino acid in DNA block protein synthesis or halt the replication and metastasis caused by the aurora kinase enzyme. Various proteins related to aurora overexpression were docked with the purine molecules using the Biovia Drug Discovery software (perpetual license). Various parameters, such as the X-ray crystallographic structure, the presence of a ligand, the Ramachandran plot, and the resolution, were taken into consideration when selecting the target protein. Molecules with higher negative binding scores were taken forward for simulation studies. According to the available research and computational analyses, purine compounds may be powerful enough to demonstrate a high affinity for the aurora target. Although purines are clinically effective now, they were originally intended to fight breast cancer by inhibiting the aurora kinase enzyme. In the in-silico studies, purine compounds showed moderate to high potency compared to other molecules, and our review of the literature revealed that purine molecules have a lower risk of side effects. The research involves the design, synthesis, and identification of purine molecules active against breast cancer. Purines are structurally similar to the normal metabolites adenine and guanine; hence they interfere or compete with protein synthesis and suppress the abnormal proliferation of cells and tissues. As a result, purines target metastatic cells and stop kinase-driven growth; purine derivatives bind to DNA and the aurora protein, which may stop protein synthesis or inhibit replication and thereby halt the metastasis driven by the overexpressed aurora kinase enzyme.

Keywords: aurora kinases, in silico studies, medicinal chemistry, combination therapies, chronic cancer, clinical translation

Procedia PDF Downloads 65
366 Efficient Computer-Aided Design-Based Multilevel Optimization of the LS89

Authors: A. Chatel, I. S. Torreguitart, T. Verstraete

Abstract:

The paper deals with a single-point optimization of the LS89 turbine using adjoint optimization, with the design variables defined within a CAD system. The advantage of including the CAD model in the design system is that higher-level constraints can be imposed on the shape, allowing the optimized model or component to be manufactured. However, CAD-based approaches restrict the design space compared to node-based approaches, where every node is free to move. In order to preserve a rich design space, we develop a methodology to refine the CAD model during the optimization and to create the best parameterization to use at each stage. This study presents a methodology to progressively refine the design space, combining parametric effectiveness with a differential evolution algorithm in order to create an optimal parameterization. In this manuscript, we show that by parameterizing at the CAD level, we can impose higher-level constraints on the shape, such as the axial chord length, the trailing edge radius, and G2 geometric continuity between the suction side and pressure side at the leading edge. Additionally, the adjoint sensitivities are filtered, and only smooth shapes are produced during the optimization process. The use of algorithmic differentiation for the CAD kernel and grid generator allows the grid sensitivities to be computed to machine accuracy, avoiding the limited arithmetic precision and truncation error of finite differences. The parametric effectiveness is then computed to rate the ability of a set of CAD design parameters to produce the design shape change dictated by the adjoint sensitivities. During the optimization process, the design space is progressively enlarged using the knot insertion algorithm, which introduces new control points whilst preserving the initial shape. The position of the inserted knots is generally assumed. However, this assumption can hinder the creation of better parameterizations that would produce more localized shape changes where the adjoint sensitivities dictate. To address this, we propose using a differential evolution algorithm to maximize the parametric effectiveness by optimizing the location of the inserted knots. This allows the optimizer to gradually explore larger design spaces and to use an optimal CAD-based parameterization during the course of the optimization. The method is tested on the LS89 turbine cascade, and large aerodynamic improvements in entropy generation are achieved whilst keeping the exit flow angle fixed; the trailing edge radius and axial chord length are kept fixed as manufacturing constraints. The optimization results show that the multilevel optimizations were more efficient than the single-level optimization, even though they used the same number of design variables at the end of the multilevel optimizations. Furthermore, the multilevel optimization in which the parameterization is created using the optimal knot positions is a more efficient strategy to reach a better optimum than the multilevel optimization in which the knot positions are arbitrarily assumed.
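
As a rough sketch of the two ingredients named above, shape-preserving B-spline knot insertion and a differential evolution search over the inserted knot's location, the following uses SciPy; the objective is a simple stand-in, since the paper's parametric effectiveness measure is driven by adjoint sensitivities not reproduced here:

```python
# Sketch: insert a knot into a B-spline without changing its shape, then let
# differential evolution pick the insertion location that maximises a
# surrogate objective (stand-in for the paper's parametric effectiveness).
import numpy as np
from scipy.interpolate import splrep, splev, insert
from scipy.optimize import differential_evolution

x = np.linspace(0.0, 1.0, 50)
tck = splrep(x, np.sin(2 * np.pi * x))          # initial parameterisation

def neg_effectiveness(knot):
    refined = insert(knot[0], tck)              # shape-preserving knot insertion
    # Stand-in objective: spline curvature at the new knot; the real criterion
    # rates how well the new DOFs reproduce adjoint-driven shape changes.
    return -float(abs(splev(knot[0], refined, der=2)))

res = differential_evolution(neg_effectiveness, bounds=[(0.05, 0.95)])
print(f"optimal knot location: {res.x[0]:.3f}")
```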

Keywords: adjoint, CAD, knots, multilevel, optimization, parametric effectiveness

Procedia PDF Downloads 90
365 A Bayesian Approach for Analyzing Academic Article Structure

Authors: Jia-Lien Hsu, Chiung-Wen Chang

Abstract:

Research articles may follow a simple and succinct structure of organizational patterns, called moves. For example, an extended abstract usually consists of five moves: Background, Aim, Method, Results, and Conclusion. As another example, when publishing articles in PubMed, authors are encouraged to provide a structured abstract, i.e., an abstract with distinct, labeled sections (e.g., Introduction, Methods, Results, Discussion) for rapid comprehension. This paper introduces a method for the computational analysis of move structures (i.e., Background-Purpose-Method-Result-Conclusion) in the abstracts and introductions of research documents, replacing the time-consuming and labor-intensive manual analysis process. In our approach, sentences in a given abstract and introduction are automatically analyzed and labeled with a specific move (i.e., B-P-M-R-C in this paper) to reveal their rhetorical status. It is expected that such an automatic analytical tool for move structures will help non-native speakers and novice writers become aware of appropriate move structures and internalize the relevant knowledge to improve their writing. In this paper, we propose a Bayesian approach to determine move tags for research articles. The approach consists of two phases, a training phase and a testing phase. In the training phase, we build a Bayesian model based on a set of given initial patterns and a corpus, a subset of CiteSeerX. In the beginning, the prior probability of the Bayesian model relies solely on the initial patterns. Subsequently, we process each document of the corpus one by one: extract features, determine tags, and update the Bayesian model iteratively. In the testing phase, we compare our results with tags manually assigned by experts. In our experiments, the proposed approach reaches a promising accuracy of 56%.
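
A minimal sketch of the classification step, using a multinomial naive Bayes model over bag-of-words features on a tiny invented training set (the paper's feature extraction and iterative model updating over CiteSeerX are not reproduced):

```python
# Minimal Bayesian move-tagger sketch: label sentences with B-P-M-R-C moves.
# Tiny invented training set; the paper's iterative model updating over
# CiteSeerX is not reproduced here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train = [
    ("Little is known about move structures in abstracts.", "B"),
    ("This paper aims to classify rhetorical moves.",        "P"),
    ("Sentences are labelled using a Bayesian model.",       "M"),
    ("The approach reaches a promising accuracy.",           "R"),
    ("The tool can assist novice academic writers.",         "C"),
]
texts, tags = zip(*train)
vec = CountVectorizer().fit(texts)
clf = MultinomialNB().fit(vec.transform(texts), tags)

print(clf.predict(vec.transform(["We label each sentence with a move tag."])))
```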

Keywords: academic English writing, assisted writing, move tag analysis, Bayesian approach

Procedia PDF Downloads 305
364 Syntax and Words as Evolutionary Characters in Comparative Linguistics

Authors: Nancy Retzlaff, Sarah J. Berkemer, Trudie Strauss

Abstract:

In the last couple of decades, the digitalization of all kinds of data has probably been one of the major advances in all fields of study. This paves the way for analysing data even in disciplines where there was initially no computational necessity to do so. Linguistics, especially, has a rather manual tradition, yet in studies that involve the history of language families it is hard to overlook the striking similarities to bioinformatics (phylogenetic) approaches. Word alignment is a fairly well-studied example of applying bioinformatics methods to historical linguistics. In this paper we consider not only alignments of strings, i.e., words, but also alignments of syntax trees of selected Indo-European languages. Based on initial, crude alignments, a sophisticated scoring model is trained on both letters and syntactic features. The aim is to gain a better understanding of which features in two languages are related, i.e., most likely to share the same root. Initially, all words in two languages are pre-aligned with a basic scoring model that primarily selects consonants and adjusts them before fitting in the vowels. Mixture models are subsequently used to filter ‘good’ alignments depending on the alignment length and the number of inserted gaps. Using these selected word alignments, it is possible to perform tree alignments of the given syntax trees and consequently find sentences that correspond rather well to each other across languages. The syntax alignments are then filtered for meaningful scores: ‘good’ scores contain evolutionary information and are therefore used to train the sophisticated scoring model. Further iterations of alignment and training are performed until the scoring model saturates, i.e., barely changes anymore. An evaluation of the trained scoring model and of the evolutionary information it captures will be given, along with an assessment of sentence alignment compared to possible phrase structure. The method described here may have its flaws because of limited prior information. It may, however, offer a good starting point for studying languages where only little prior knowledge is available and a detailed, unbiased study is needed.
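
The word-level pre-alignment step resembles classic global sequence alignment; a minimal Needleman-Wunsch sketch with a flat match/mismatch/gap scoring model (the paper instead trains its scoring model iteratively, treating consonants first):

```python
# Needleman-Wunsch global alignment score of two cognate words with a flat
# match/mismatch/gap scoring model; the paper's trained, consonant-first
# scoring model is not reproduced here.
def align(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap
    for j in range(1, m + 1):
        S[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            S[i][j] = max(S[i-1][j-1] + s, S[i-1][j] + gap, S[i][j-1] + gap)
    return S[n][m]

print(align("pater", "father"))   # Latin vs. English cognate pair
```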

Keywords: alignments, bioinformatics, comparative linguistics, historical linguistics, statistical methods

Procedia PDF Downloads 131
363 Efficiency Validation of Hybrid Geothermal and Radiant Cooling System Implementation in Hot and Humid Climate Houses of Saudi Arabia

Authors: Jamil Hijazi, Stirling Howieson

Abstract:

Over one-quarter of the Kingdom of Saudi Arabia’s total oil production (2.8 million barrels a day) is used for electricity generation. The built environment is estimated to consume 77% of total energy production; of this amount, air conditioning systems consume about 80%. Apart from considerations surrounding global warming and CO2 production, it has to be recognised that oil is a finite resource, and the KSA, like many other oil-rich countries, will have to start considering a horizon where hydrocarbons are not the dominant energy resource. The employment of hybrid ground cooling pipes in combination with black-body solar collection and radiant night cooling systems may have the potential to displace a significant proportion of the oil currently used to run conventional air conditioning plant. This paper presents an investigation into the viability of such hybrid systems, with the specific aim of reducing carbon emissions while providing all-year-round thermal comfort in a typical Saudi Arabian urban housing block. At the outset, air and soil temperatures were measured in the city of Jeddah. A parametric study was then carried out with computational simulation software (DesignBuilder) that utilised the field measurements and predicted the cooling energy consumption of both a base case and an ideal scenario (a typical block retrofitted with insulation, solar shading, ground pipes integrated with hypocaust floor slabs and stack ventilation, and radiant cooling pipes embedded in the floor). Initial simulation results suggest that careful ‘ecological design’ combined with hybrid radiant and ground pipe cooling techniques can displace air conditioning systems, producing significant cost and carbon savings (both capital and running) without appreciable deprivation of amenity.

Keywords: energy efficiency, ground pipe, hybrid cooling, radiative cooling, thermal comfort

Procedia PDF Downloads 239
362 Loss Quantification of Archaeological Sites in a Watershed Due to the Use and Occupation of Land

Authors: Elissandro Voigt Beier, Cristiano Poleto

Abstract:

The main objective of this research is to assess loss through the quantification of material culture (archaeological fragments) in rural sites exploited economically by mechanized seasonal and permanent cropping, within a hydrographic subsystem of the Camaquã River in the state of Rio Grande do Sul, Brazil. The study area consists of several micro-basins of different sizes, ranging between 1,000 m² and 10,000 m², all with a large number of occurrences and outcrops of archaeological material at high density in an intensively farmed environment. The first stage of the research aimed to identify the dispersion points of archaeological material through a field survey, plotting points with the Global Positioning System (GPS) within each river basin; a concise bibliography on the topic in the region assisted, theoretically, in understanding the former landscape and the occupation preferences of ancient peoples reflected in their settlements, relating them to the practices observed in the field. The mapping was followed by cartographic work on the region, producing land elevation products that contributed to understanding the distribution of the materials, the definition and extent of the dispersed material, and the displacement caused by the mechanized turning of in situ material as a result of human activity; it was also necessary to prepare density maps of the materials found, linking natural environments conducive to ancient occupation with current human occupation. The third stage of the project consisted of the systematic collection of archaeological material without alteration of or interference with the subsurface of the indigenous settlements; the material was then prepared and treated in the laboratory to remove excess soil, cleaned following previously published methodology, measured, and quantified. Approximately 15,000 archaeological fragments belonging to different periods of the region’s ancient history were identified, all collected outside their environmental and historical context, which has also been considerably altered and modified. The material was identified and catalogued considering features such as object weight, size, and type of material (lithic, ceramic, bone, historical porcelain, and their association with ancient history), while characteristics such as the individual lithology and functionality of each object were disregarded. As preliminary results, we can point to the alteration of materials by heavy mechanization and the consequent soil disturbance processes, which transport archaeological materials. As a next step, an estimate of potential losses will be sought through a mathematical model. This process is expected to yield a reliable, highly accurate model that can be applied to lower-density archaeological sites without significant error.

Keywords: degradation of heritage, quantification in archaeology, watershed, use and occupation of land

Procedia PDF Downloads 247
361 Two-Wavelength High-Energy Cr:LiCaAlF6 MOPA Laser System for Medical Multispectral Optoacoustic Tomography

Authors: Radik D. Aglyamov, Alexander K. Naumov, Alexey A. Shavelev, Oleg A. Morozov, Arsenij D. Shishkin, Yury P. Brodnikovsky, Alexander A. Karabutov, Alexander A. Oraevsky, Vadim V. Semashko

Abstract:

The development of medical optoacoustic tomography using human blood as an endogenous contrast agent is constrained by the lack of reliable, easy-to-use, and inexpensive sources of high-power pulsed laser radiation in the spectral region of 750-900 nm [1-2]. The titanium-sapphire and alexandrite lasers or optical parametric oscillators currently used do not provide the required stable output characteristics, are structurally complex, and can cost up to half the price of a diagnostic optoacoustic system. Here we develop lasers based on Cr:LiCaAlF6 crystals that are free of the abovementioned disadvantages and provide intense tunable laser radiation, with pulses in the tens-of-nanoseconds range, at the specific absorption bands of oxy- (~840 nm) and deoxyhemoglobin (~757 nm) in blood. Cr:LiCAF (c=3 at.%) crystals were grown at Kazan Federal University by vertical directional crystallization (the Bridgman technique) in graphite crucibles in a fluorinating atmosphere at argon overpressure (P=1500 hPa) [3]. The laser elements are cylindrical, 8 mm in diameter and 90 mm in length. The direction of the optical axis of the crystal was normal to the cylinder generatrix, which provides π-polarized laser action corresponding to the maximal stimulated emission cross-section. The flat working surfaces of the active elements were polished and parallel to each other with an error of less than 10”. No antireflection coating was applied. A Q-switched master oscillator-power amplifier (MOPA) laser system with a dual xenon flashlamp pumping scheme in a diffuse-reflectivity close-coupled head was realized. A specially designed laser cavity, consisting of dielectric highly reflective mirrors with a 2 m curvature radius, a flat output mirror, a polarizer, and a Q-switch cell, makes it possible to operate sequentially in a loop (one ~50 ns laser pulse after another) at wavelengths of 757 and 840 nm. The programmable pumping system from Tomowave Laser LLC (Russia) provided independent pumping for each pulse (up to 250 J over 180 μs) to equalize the laser radiation intensity at the two wavelengths. The MOPA laser operates at a 10 Hz pulse repetition rate with an output energy of up to 210 mJ. Taking into account the limitations associated with physiological movements and other characteristics of patient tissues, the duration and energy of the laser pulses allow molecular and functional high-contrast imaging to depths of 5-6 cm with a spatial resolution of at least 1 mm. Further comprehensive design of the laser will most likely improve the output properties and enable better spatial resolution in medical multispectral optoacoustic tomography systems.

Keywords: medical optoacoustic, endogenous contrast agent, multiwavelength tunable pulse lasers, MOPA laser system

Procedia PDF Downloads 72
360 Development of Hydrodynamic Drag Calculation and Cavity Shape Generation for Supercavitating Torpedoes

Authors: Sertac Arslan, Sezer Kefeli

Abstract:

In this paper, the supercavitation phenomenon and supercavity shape design parameters are first explained, and then drag force calculation methods for high-speed supercavitating torpedoes are investigated with numerical techniques and verified against empirical studies. In order to reach speeds as high as 200-300 knots for underwater vehicles, the hydrodynamic hull drag force, which is proportional to the density of water (ρ) and the square of the speed, must be reduced. Conventional heavyweight torpedoes can reach up to ~50 knots with classic underwater hydrodynamic techniques. However, to exceed 50 knots and approach speeds of about 200 knots, hydrodynamic viscous forces must be reduced or eliminated completely. This requirement revives the supercavitation phenomenon, which can be applied to conventional torpedoes. Supercavitation is the use of cavitation effects to create a gas bubble, allowing the torpedo to move through the water at very high speed inside a fully developed cavitation bubble. A torpedo that moves within a cavitation envelope, created by a cavitator in the nose section and driven by a solid-fuel rocket engine in the rear section, is termed a supercavitating torpedo. There are two types of cavitation: natural cavitation and ventilated cavitation. In this study, a disk cavitator is modeled with natural cavitation, and the supercavitation parameters are studied. Moreover, the drag force of the disk cavitator is calculated with numerical techniques and compared with empirical studies. Drag forces are calculated with computational fluid dynamics methods and different empirical methods, and the numerical calculation method is developed by comparison with the empirical results. In the verification study, the cavitation number (σ), drag coefficient (CD), drag force (D), and cavity wall velocity (U
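
For a disk cavitator, the drag coefficient is commonly taken as CD(σ) = CD0(1 + σ) with CD0 ≈ 0.82, and D = 0.5ρV²ACD. A sketch under that empirical relation, with invented depth, pressures, and cavitator diameter:

```python
# Disk-cavitator drag sketch using the widely used empirical relation
# Cd(sigma) = Cd0 * (1 + sigma), Cd0 ~ 0.82; all input values illustrative.
import math

RHO = 1025.0                       # sea water density, kg/m^3

def cavitation_number(p_inf, p_cavity, v):
    return (p_inf - p_cavity) / (0.5 * RHO * v**2)

def disk_drag(v, d_cavitator, depth_m=10.0, p_cavity=2300.0):
    p_inf = 101325.0 + RHO * 9.81 * depth_m     # ambient pressure at depth
    sigma = cavitation_number(p_inf, p_cavity, v)
    cd = 0.82 * (1.0 + sigma)                   # empirical disk cavitator Cd
    area = math.pi * (d_cavitator / 2.0) ** 2
    return 0.5 * RHO * v**2 * area * cd, sigma

v = 200 * 0.5144                                # 200 knots in m/s
drag, sigma = disk_drag(v, d_cavitator=0.05)
print(f"sigma = {sigma:.4f}, drag = {drag / 1000:.1f} kN")
```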

Keywords: cavity envelope, CFD, high speed underwater vehicles, supercavitation, supercavity flows

Procedia PDF Downloads 156
359 Development and Adaptation of an LGBM Machine Learning Model, with a Suitable Concept Drift Detection and Adaptation Technique, for Barcelona Household Electric Load Forecasting During COVID-19 Pandemic Periods (Pre-Pandemic and Strict Lockdown)

Authors: Eric Pla Erra, Mariana Jimenez Martinez

Abstract:

While aggregated loads at the community level tend to be easier to predict, individual household load forecasting presents more challenges, with higher volatility and uncertainty. Furthermore, the drastic changes that our behaviour patterns have undergone due to the COVID-19 pandemic have modified our daily electrical consumption curves and, therefore, further complicated the forecasting methods used to predict short-term electric load. Load forecasting is vital for the smooth and optimized planning and operation of our electric grids, but it also plays a crucial role for individual domestic consumers who rely on a HEMS (Home Energy Management System) to optimize their energy usage through self-generation, storage, or smart appliance management. Accurate forecasting leads to higher energy savings and overall energy efficiency of the household when paired with a proper HEMS. In order to study how COVID-19 has affected the accuracy of forecasting methods, the performance of a state-of-the-art LGBM (Light Gradient Boosting Model) is evaluated during the transition between the pre-pandemic and lockdown periods, considering day-ahead electric load forecasting. LGBM improves on standard decision tree models in both speed and memory consumption while still offering high accuracy. Even though LGBM has complex non-linear modelling capabilities, it has proven to be a competitive method under challenging forecasting scenarios such as short series, heterogeneous series, or data patterns with minimal prior knowledge. An adaptation of the LGBM model, called “resilient LGBM”, will also be tested, incorporating a concept drift detection technique for time series analysis, in order to evaluate its capability to improve the model’s accuracy during extreme events such as COVID-19 lockdowns. The results for the LGBM and the resilient LGBM will be compared using the standard RMSE (Root Mean Squared Error) as the main performance metric. The models’ performance will be evaluated over a set of real households’ hourly electricity consumption data measured before and during the COVID-19 pandemic. All households are located in the city of Barcelona, Spain, and present different consumption profiles. This study is carried out under the ComMit-20 project, financed by AGAUR (Agència de Gestió d’Ajuts Universitaris), which aims to determine the short- and long-term impacts of the COVID-19 pandemic on building energy consumption, increasing the resilience of electrical systems through the use of tools such as HEMS and artificial intelligence.
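
A minimal day-ahead sketch of the baseline model on synthetic data, using simple calendar and lag features and RMSE as the metric (the concept-drift detection wrapper of the "resilient LGBM" is not reproduced here):

```python
# Day-ahead household load forecast sketch with LightGBM on synthetic data;
# simple calendar/lag features, RMSE metric. Illustrative only.
import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor

rng = pd.date_range("2019-01-01", periods=24 * 400, freq="H")
load = 1.0 + 0.5 * np.sin(2 * np.pi * rng.hour / 24) + 0.1 * np.random.randn(len(rng))
df = pd.DataFrame({"load": load}, index=rng)
df["hour"], df["dow"] = df.index.hour, df.index.dayofweek
df["lag24"] = df["load"].shift(24)              # same hour, previous day
df = df.dropna()

train, test = df.iloc[:-24], df.iloc[-24:]      # last day held out
X_cols = ["hour", "dow", "lag24"]
model = LGBMRegressor(n_estimators=300, learning_rate=0.05)
model.fit(train[X_cols], train["load"])

pred = model.predict(test[X_cols])
rmse = float(np.sqrt(np.mean((pred - test["load"].values) ** 2)))
print(f"day-ahead RMSE: {rmse:.3f} kWh")
```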

Keywords: concept drift, forecasting, home energy management system (HEMS), light gradient boosting model (LGBM)

Procedia PDF Downloads 83
358 Experimental and Theoretical Characterization of Supramolecular Complexes between 7-(Diethylamino)Quinolin-2(1H)-One and Cucurbit[7]uril

Authors: Kevin A. Droguett, Edwin G. Pérez, Denis Fuentealba, Margarita E. Aliaga, Angélica M. Fierro

Abstract:

Supramolecular chemistry is a field of growing interest. In particular, studying the formation of host-guest complexes between macrocycles and dyes is highly attractive due to their potential applications, examples of which are drug delivery, catalytic processes, and sensing, among others. Various dyes are of interest in the literature; one example is the quinolinone derivatives. These molecules have good optical properties and chemical and thermal stability, making them suitable for developing fluorescent probes. Several macrocycles can also be found in the literature, one example being the cucurbiturils. This water-soluble macromolecule family has a hydrophobic cavity and two identical carbonyl portals. Additionally, the thermodynamic analysis of such supramolecular systems can help in understanding the affinity between host and guest, their interactions, and the main stabilization energy of the complex. In this work, two 7-(diethylamino)quinolin-2(1H)-one derivatives (QD1-2) and their interaction with cucurbit[7]uril (CB[7]) were studied from an experimental and in-silico point of view. Experimentally, the complexes showed a 1:1 stoichiometry by HRMS-ESI and isothermal titration calorimetry (ITC). The inclusion of the derivatives in the macrocycle leads to an increase in fluorescence intensity, and the pKa value of QD1-2 exhibits almost no variation after formation of the complex. The thermodynamics of the inclusion complexes was investigated using ITC; the results demonstrate a non-classical hydrophobic effect with a minimal contribution from the entropy term and binding constants on the order of 10⁶ for both ligands. Additionally, molecular dynamics studies were carried out over 300 ns in an explicit solvent under NTP conditions. Our findings show that the complex remains stable during the simulation (RMSD ~1 Å) and that hydrogen bonds contribute to the stabilization of the systems. Finally, thermodynamic parameters from MM-PBSA calculations were obtained to generate new computational insights for comparison with the experimental results.

Keywords: host-guest complexes, molecular dynamics, quinolin-2(1H)-one derivative dyes, thermodynamics

Procedia PDF Downloads 63
357 Ternary Organic Blend for Semitransparent Solar Cells with Enhanced Short Circuit Current Density

Authors: Mohammed Makha, Jakob Heier, Frank Nüesch, Roland Hany

Abstract:

Organic solar cells (OSCs) have made rapid progress and currently achieve power conversion efficiencies (PCE) of over 10%. OSCs have several merits over other direct light-to-electricity generating cells and can be processed at low cost from solution on flexible substrates over large areas. Moreover, combining organic semiconductors with transparent and conductive electrodes allows the fabrication of semitransparent OSCs (SM-OSCs). For SM-OSCs, the challenge is to achieve a high average visible transmission (AVT) while maintaining a high short-circuit current (Jsc). Typically, the Jsc of SM-OSCs is smaller than when an opaque metal top electrode is used, because light not absorbed during the first transit through the active layer and the transparent electrode is forward-transmitted out of the device. Recently, OSCs using a ternary blend of organic materials have received attention; this strategy is pursued to extend light harvesting over the visible range. However, it is a general challenge to manipulate the performance of ternary OSCs in a predictable way, because many key factors affect charge generation and extraction in ternary solar cells. Consequently, device performance is affected by the compatibility between the blend components and the resulting film morphology, the energy levels and bandgaps, the concentration of the guest material, and its location in the active layer. In this work, we report on a solvent-free lamination process for the fabrication of efficient, semitransparent ternary blend OSCs. The ternary blend was composed of PC70BM and the electron donors PBDTTT-C and an NIR-absorbing cyanine dye (Cy7T). Using an opaque metal top electrode, a PCE of 6% was achieved for the optimized binary polymer:fullerene blend (AVT = 56%). However, the PCE dropped to ~2% when the active film thickness was decreased (to 30 nm) to increase the AVT (to 75%). Therefore we resorted to the ternary blend and measured, for non-transparent cells, a PCE of 5.5% when using an active polymer:dye:fullerene (0.7:0.3:1.5 by weight) film of 95 nm thickness (AVT = 65% when omitting the top electrode). In a second step, the optimized ternary blend was used for the fabrication of SM-OSCs. We used a plastic/metal substrate with a light transmission of over 90% as a transparent electrode, applied via a lamination process. The interfacial layer between the active layer and the top electrode was optimized in order to improve charge collection and the contact with the laminated top electrode. We demonstrate a PCE of 3% with an AVT of 51%. The parameter space for ternary OSCs is large, and it is difficult to find the best concentration ratios by trial and error. A rational approach to device optimization is the construction of a ternary blend phase diagram. We discuss our attempts to construct such a phase diagram for the PBDTTT-C:Cy7T:PC70BM system via a combination of Cy7T-selective solvents and atomic force microscopy. From the ternary diagram, morphologies suitable for efficient light-to-current conversion can be identified, and we compare experimental OSC data with these predictions.

Keywords: organic photovoltaics, ternary phase diagram, ternary organic solar cells, transparent solar cell, lamination

Procedia PDF Downloads 244
356 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics

Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca

Abstract:

The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for detecting the adulteration of honey with high fructose corn syrup (HFCS) was investigated. First, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Mixtures were then prepared by adding different concentrations of HFCS to samples of the honey pool. 237 samples were used: 108 were authentic honey, and 129 corresponded to honey adulterated with HFCS at between 1 and 10%. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40°C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity, and adjusted to a standard solids content (70° Brix) with distilled water. Adulterant solutions were also adjusted to 70° Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The specular reflectance technique was used, with a lens aperture range of 150 mm. Pretreatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface, running under MATLAB version 5.3, was used to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A method combining Potential Functions (PF) with Partial Least Squares Linear Discriminant Analysis (PLS-DA) was chosen. Different estimators of the predictive capacity of the model were compared; these were obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples, and the calibrated model was then used to study the validation samples. The calibrated model that combines the potential function method and PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved on the training samples. By means of PF and PLS-DA classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results show that NIR in combination with the PF and PLS-DA methods can be a simple, fast, and low-cost technique for the detection of HFCS in honey with high sensitivity and discriminating power.
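
PLS-DA is commonly implemented as PLS regression on a 0/1 class label thresholded at 0.5; a sketch on synthetic "spectra" with the study's sample counts (the potential-function step and SNV pretreatment are omitted):

```python
# PLS-DA sketch for the authentic vs. HFCS-adulterated decision: PLS
# regression on a 0/1 class label, thresholded at 0.5. Synthetic spectra;
# the paper's potential-function step and SNV pretreatment are omitted.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_wavenumbers = 200
y = np.r_[np.zeros(108), np.ones(129)]            # 0 = authentic, 1 = adulterated
X = rng.normal(size=(len(y), n_wavenumbers)) + 0.5 * y[:, None]  # fake NIR spectra

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = (pls.predict(X_te).ravel() > 0.5).astype(int)
print(f"correct classification rate: {100 * np.mean(y_hat == y_te):.1f}%")
```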

Keywords: adulteration, multivariate analysis, potential functions, regression

Procedia PDF Downloads 100
355 Evaluation of Correct Usage, Comfort and Fit of Personal Protective Equipment in Construction Work

Authors: Anna-Lisa Osvalder, Jonas Borell

Abstract:

There are several reasons behind the use, non-use, or inadequate use of personal protective equipment (PPE) in the construction industry. Comfort and accurate size support proper use, while discomfort, misfit, and difficulty understanding how the PPE should be handled inhibit correct usage. The need for several items of protective equipment simultaneously can also create problems. The purpose of this study was to analyse the correct usage, comfort, and fit of different types of PPE used in construction work. Correct usage was analysed as guessability, i.e., human perceptions of how to don, adjust, use, and doff the equipment, and whether it is used as intended. The PPE tested, individually or in combination, comprised a helmet, ear protectors, goggles, respiratory masks, gloves, protective clothing, and safety harnesses. First, an analytical evaluation was performed with ECW (enhanced cognitive walkthrough) and PUEA (predictive use error analysis) to search for usability problems and use errors during handling and use. Then usability tests were conducted to evaluate guessability, comfort, and fit with 10 test subjects of different heights and body constitutions. The tests included observations during donning, five different outdoor work tasks, and doffing. The think-aloud method, short interviews, and subjective ratings were used. The analytical evaluation showed that some usability problems and use errors arise during donning and doffing, but with minor severity, mostly causing discomfort. A few use errors and usability problems arose for the safety harness, especially for novices, some of which could lead to a high risk of severe incidents. The usability tests showed that discomfort arose for all test subjects when using a combination of PPE, increasing over time. For instance, goggles together with the face mask caused pressure, chafing at the nose, and heat rash on the face; this combination also limited the field of vision. The helmet in combination with the goggles and ear protectors did not fit well and caused uncomfortable pressure at the temples. No major problems were found with the individual fit of the PPE. The ear protectors, goggles, and face masks could be adjusted for different head sizes. The guessability of how to don and wear the combination of PPE was moderate, but it took some time to adjust the items for a good fit. The guessability was poor for the safety harness; few clues in the design showed how it should be donned, adjusted, or positioned on the skeletal bones. Discomfort occurred when the straps were tightened too much, and not all straps could be adjusted to every body constitution, leading to non-optimal safety. To conclude, if several types of PPE are used together, discomfort leading to pain is likely to occur over time, which can lead to misuse, non-use, or reduced performance. For people who are not regular users to wear a safety harness correctly, the design needs to be improved for easier interpretation, correct positioning of the straps, and greater possibilities for individual adjustment. The results from this study can serve as a basis for re-design ideas for PPE, especially for use in combination.

Keywords: construction work, PPE, personal protective equipment, misuse, guessability, usability

Procedia PDF Downloads 59
354 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Efficient hardware alternatives are now used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile sensing systems except the force reconstruction process, a stage to which they have been applied less. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat, rigid tactile sensor arrays from normal stress data. Starting from an analysis of a software implementation of that model, this implementation proposes the parallelization of tasks that facilitate the execution of the matrix operations and of a two-dimensional optimization function that yields a force vector for each taxel in the array. This work takes advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and applies appropriate algorithm parallelization techniques, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process, with low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform that allows the reconstruction of force vectors following a scalable approach, from the information captured by tactile sensor arrays composed of up to 48×48 taxels using various transduction technologies. The proposed implementation demonstrates a reduction in estimation time by a factor of about 180 compared to software implementations. Despite the relatively high estimation errors, the information this implementation provides on the tangential and normal tractions and the triaxial reconstruction of forces makes it possible to adequately reconstruct the tactile properties of the touched object, which are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced further, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 167
353 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction

Authors: Talal Alsulaiman, Khaldoun Khashanah

Abstract:

In this paper, we provide a literature survey on the artificial stock market (ASM) problem. The paper begins by exploring the complexity of the stock market and the need for ASMs. An ASM aims to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level). The variety of patterns at the macro level is a function of the ASM’s complexity. The financial market is a complex system in which the relationship between the micro and macro levels cannot be captured analytically; computational approaches, such as simulation, are expected to capture this connection. Agent-based simulation is the simulation technique commonly used to build ASMs. The paper proceeds by discussing the components of an ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-aversion assumption in the construction of agents’ attributes. The influence of social networks on the development of agents’ interactions is also addressed; network topologies such as small-world, distance-based, and scale-free networks may be utilized to model economic collaborations. In addition, the primary methods for developing agents’ learning and adaptive abilities are summarized, incorporating approaches such as Genetic Algorithms, Genetic Programming, Artificial Neural Networks, and Reinforcement Learning. The most common statistical properties (the stylized facts) of stocks used for the calibration and validation of ASMs are also discussed. Besides, we review the major related previous studies and categorize the approaches they utilized. Finally, research directions and potential research questions are discussed. Research on ASMs may focus on the macro level, by analyzing market dynamics, or on the micro level, by investigating the wealth distributions of the agents.

Keywords: artificial stock markets, market dynamics, bounded rationality, agent based simulation, learning, interaction, social networks

Procedia PDF Downloads 328
352 Advances in Mathematical Sciences: Unveiling the Power of Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid advancements in data collection, storage, and processing capabilities have led to an explosion of data in various domains. In this era of big data, mathematical sciences play a crucial role in uncovering valuable insights and driving informed decision-making through data analytics. The purpose of this abstract is to present the latest advances in mathematical sciences and their application in harnessing the power of data analytics. This abstract highlights the interdisciplinary nature of data analytics, showcasing how mathematics intersects with statistics, computer science, and other related fields to develop cutting-edge methodologies. It explores key mathematical techniques such as optimization, mathematical modeling, network analysis, and computational algorithms that underpin effective data analysis and interpretation. The abstract emphasizes the role of mathematical sciences in addressing real-world challenges across different sectors, including finance, healthcare, engineering, social sciences, and beyond. It showcases how mathematical models and statistical methods extract meaningful insights from complex datasets, facilitating evidence-based decision-making and driving innovation. Furthermore, the abstract emphasizes the importance of collaboration and knowledge exchange among researchers, practitioners, and industry professionals. It recognizes the value of interdisciplinary collaborations and the need to bridge the gap between academia and industry to ensure the practical application of mathematical advancements in data analytics. The abstract highlights the significance of ongoing research in mathematical sciences and its impact on data analytics. It emphasizes the need for continued exploration and innovation in mathematical methodologies to tackle emerging challenges in the era of big data and digital transformation. In summary, this abstract sheds light on the advances in mathematical sciences and their pivotal role in unveiling the power of data analytics. It calls for interdisciplinary collaboration, knowledge exchange, and ongoing research to further unlock the potential of mathematical methodologies in addressing complex problems and driving data-driven decision-making in various domains.

Keywords: mathematical sciences, data analytics, advances, unveiling

Procedia PDF Downloads 62
351 Interactive Glare Visualization Model for an Architectural Space

Authors: Florina Dutt, Subhajit Das, Matthew Swartz

Abstract:

Lighting design and its impact on indoor comfort conditions are an integral part of good interior design. The impact of lighting in an interior space is manifold, involving many subcomponents such as glare, color, tone, luminance, control, energy efficiency, and flexibility. While the other components have been researched and discussed many times, this paper discusses research done to understand the glare component from artificial lighting sources in an indoor space. It presents a parametric model that conveys real-time glare levels in an interior space to the designer/architect. Our end users are architects, and for them it is of utmost importance to know what impression the proposed lighting arrangement and furniture layout will have on indoor comfort quality. This especially involves those furniture elements (or surfaces) which strongly reflect light around the space. Essentially, the designer needs to know the ramifications of discomfort glare at an early stage of the design cycle, when changes to the proposed design can still be made and different solution routes considered for the client. Unfortunately, most existing lighting analysis tools offer rigorous computation and analysis on the back end, which makes it challenging for the designer to analyze and assess glare from interior lighting quickly; moreover, many of them do not focus on the glare aspect of artificial light. That is why, in this paper, we explain a novel approach to approximating interior glare data. In addition, we visualize these data in a color-coded format, expressing the implications of the proposed interior design layout. We focus on making this analysis process fluid and computationally fast, enabling full user interaction with the capability to vary a wide range of user inputs, adding more degrees of freedom for the user. We test our proposed parametric model on a case study, a computer lab space in our college facility.
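
One standard discomfort-glare metric that such a model could expose is the CIE Unified Glare Rating, UGR = 8 log10((0.25/Lb) Σ L²ω/p²); the abstract does not state which metric the tool approximates, so the following is only an illustrative sketch with invented luminaire values:

```python
# CIE Unified Glare Rating (UGR) sketch - one standard discomfort-glare
# metric a tool like this could report. Luminaire values are invented.
import math

def ugr(background_luminance, luminaires):
    """luminaires: list of (L, omega, p) = source luminance in cd/m^2,
    solid angle in sr, and Guth position index."""
    total = sum(L**2 * omega / p**2 for L, omega, p in luminaires)
    return 8.0 * math.log10(0.25 / background_luminance * total)

fixtures = [(2.0e4, 0.01, 1.5), (1.5e4, 0.008, 2.0)]   # two ceiling luminaires
print(f"UGR = {ugr(40.0, fixtures):.1f}")
```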

Keywords: computational geometry, glare impact in interior space, info visualization, parametric lighting analysis

Procedia PDF Downloads 326
350 Isolation and Characterization of the First Known Inhibitor Cystine Knot Peptide in Sea Anemone: Inhibitory Activity on Acid-Sensing Ion Channels

Authors: Armando A. Rodríguez, Emilio Salceda, Anoland Garateix, André J. Zaharenko, Steve Peigneur, Omar López, Tirso Pons, Michael Richardson, Maylín Díaz, Yasnay Hernández, Ludger Ständker, Jan Tytgat, Enrique Soto

Abstract:

Acid-sensing ion channels (ASICs) are cation (Na+) channels activated by a pH drop. These proteins belong to the ENaC/degenerin superfamily of sodium channels. ASICs are involved in sensory perception, synaptic plasticity, learning, memory formation, cell migration and proliferation, nociception, and neurodegenerative disorders, among other processes; therefore, molecules that specifically target these channels are of growing pharmacological and biomedical interest. Sea anemones produce a large variety of ion channel peptide toxins; however, those acting on ligand-gated ion channels, such as glutamate-gated and acetylcholine-gated ion channels and acid-sensing ion channels (ASICs), remain barely explored. The peptide PhcrTx1 is the first compound characterized from the sea anemone Phymanthus crucifer, and it constitutes a novel ASIC inhibitor. This peptide was purified by chromatographic techniques and pharmacologically characterized on the acid-sensing ion channels of mammalian neurons using patch-clamp techniques. PhcrTx1 inhibited ASIC currents with an IC50 of 100 nM. Edman degradation yielded a sequence of 32 amino acid residues, with a molecular mass of 3477 Da by MALDI-TOF. No similarity to known sea anemone peptides was found in protein databases. Computational analysis of the Cys pattern and secondary structure arrangement suggested that this is structurally an ICK (inhibitor cystine knot)-type peptide, a scaffold that had not previously been found in sea anemones but occurs in other venomous organisms. These results show that PhcrTx1 represents the first member of a new structural group of sea anemone toxins acting on ASICs. This peptide also constitutes a novel template for the development of drugs against pathologies related to ASIC function.

Keywords: animal toxin, inhibitor cystine knot, ion channel, sea anemone

Procedia PDF Downloads 275
349 Effect of Rolling Shear Modulus and Geometric Make up on the Out-Of-Plane Bending Performance of Cross-Laminated Timber Panel

Authors: Md Tanvir Rahman, Mahbube Subhani, Mahmud Ashraf, Paul Kremer

Abstract:

Cross-laminated timber (CLT) is made from layers of timber boards oriented orthogonally through the thickness, which is why CLT can withstand bi-axial bending, in contrast with most other engineered wood products such as laminated veneer lumber (LVL) and glued laminated timber (GLT). Wood is cylindrically anisotropic in nature, characterized by significantly lower elastic and shear moduli in the planes perpendicular to the fibre direction. It is therefore classified as an orthotropic material, described by nine elastic constants: three elastic moduli (longitudinal, tangential, and radial), three shear moduli (in the longitudinal-tangential, longitudinal-radial, and radial-tangential planes), and three Poisson's ratios. For simplification, timber is generally assumed to be transversely isotropic, reducing the number of characterizing elastic properties to five, with the longitudinal and radial planes assumed to be planes of symmetry. The validity of this assumption was investigated through numerical modelling of CLT with both orthotropic and transversely isotropic material properties for three softwood species (Norway spruce, Douglas fir, and radiata pine) and three hardwood species (Victorian ash, beech, and aspen) subjected to uniformly distributed loading under simply supported boundary conditions. It was concluded that assuming the timber to be transversely isotropic introduces a negligible error, on the order of 1 percent. It was also observed that, along with the longitudinal elastic modulus, the ratio of the longitudinal shear modulus (GL) to the rolling shear modulus (GR) has a significant effect on the deflection of CLT panels with a low span-to-depth ratio. For softwoods such as Norway spruce and radiata pine, GL is reported in the literature to be on the order of 12 to 15 times GR. This results in shear flexibility in the transverse layers, leading to increased deflection under out-of-plane loading. The rolling shear modulus of hardwoods has been found to be significantly higher than that of softwoods, with GL-to-GR ratios as low as 4. This has prompted a significant rise in research into manufacturing CLT entirely from hardwood, as well as from combinations of softwood and hardwood. The beam theories commonly used to analyze the performance of CLT panels under out-of-plane loads are the shear analogy method, the gamma method, and the k-method; the shear analogy method has been found to be the most effective where shear deformation is significant. The effect of the GL-to-GR ratio of the cross-layers on the deflection of CLT under uniformly distributed load, with respect to the span-to-depth ratio, was investigated using the shear analogy method, as sketched in the code below. It was observed that shear deflection reduces significantly as the ratio between the longitudinal layers' shear modulus and the cross-layers' rolling shear modulus decreases. This indicates that there is significant room for improving the bending performance of CLT by developing hybrid CLT from a mix of softwood and hardwood.
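
A minimal sketch of that shear analogy calculation: the effective bending stiffness sums each layer's own stiffness plus its Steiner term, and the effective shear stiffness combines the layer shear flexibilities over the lever arm between the outer layers. All material values are hypothetical softwood/hardwood-like numbers (MPa, N, mm), not data from the paper.

```python
def clt_deflection_udl(q, span, width, layers):
    """Midspan deflection (bending + shear) of a simply supported CLT strip
    under a uniformly distributed load q, via the shear analogy method.
    layers: (thickness, E, G) per lamella, bottom to top; cross layers take
    their rolling shear modulus G_R and a reduced E (e.g. E90)."""
    total_t = sum(t for t, _, _ in layers)
    z, z_mid = -total_t / 2.0, []
    for t, _, _ in layers:  # centroid height of each layer
        z_mid.append(z + t / 2.0)
        z += t
    # Beam A (layer bending stiffnesses) + Beam B (Steiner terms):
    EI = sum(E * width * t**3 / 12.0 + E * width * t * zm**2
             for (t, E, _), zm in zip(layers, z_mid))
    # Effective shear stiffness over the lever arm a between outer-layer centroids:
    a = z_mid[-1] - z_mid[0]
    flex = (layers[0][0] / (2 * layers[0][2] * width)
            + sum(t / (G * width) for t, _, G in layers[1:-1])
            + layers[-1][0] / (2 * layers[-1][2] * width))
    GA = a**2 / flex
    return 5 * q * span**4 / (384 * EI) + q * span**2 / (8 * GA)

def layup(GR):
    # five 30 mm layers: longitudinal (E=11000, GL=690) alternating with cross (E=370, G=GR)
    return [(30, 11000, 690), (30, 370, GR)] * 2 + [(30, 11000, 690)]

for GR in (50, 170):  # softwood-like GL/GR ~ 14 vs hardwood-like GL/GR ~ 4
    print(GR, round(clt_deflection_udl(q=3.0, span=3000, width=1000, layers=layup(GR)), 2))
```

Running the two layups shows the shear component of the deflection shrinking as GR rises, which is the trend the paper reports.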

Keywords: rolling shear modulus, shear deflection, ratio of shear modulus and rolling shear modulus, timber

Procedia PDF Downloads 99
348 Insulin Resistance in Early Postmenopausal Women Can Be Attenuated by Regular Practice of 12 Weeks of Yoga Therapy

Authors: Praveena Sinha

Abstract:

Context: Diabetes is a global public health burden, particularly affecting postmenopausal women. Insulin resistance (IR) is prevalent in this population and is associated with an increased risk of developing type 2 diabetes. Yoga therapy is gaining attention as a complementary intervention for diabetes due to its potential to address stress psychophysiology. This study focuses on the efficacy of a 12-week yoga practice in attenuating insulin resistance in early postmenopausal women. Research Aim: To investigate the effect of a 3-month yoga practice on insulin resistance in early postmenopausal women. Methodology: The study used a prospective longitudinal design with 67 women within five years of menopause. Participants were divided into two groups based on their willingness to join yoga. The Yoga group (n = 37) received routine gynecological management along with an integrated yoga module, while the Non-Yoga group (n = 30) received only routine management. Insulin resistance was measured using the homeostasis model assessment of insulin resistance (HOMA-IR) method before and after the intervention (see the sketch below). Statistical analysis was performed using GraphPad Prism Version 5 software, with statistical significance set at P < 0.05. Findings: Serum fasting insulin levels and HOMA-IR decreased in the Yoga group, although the decrease did not reach statistical significance. In contrast, the Non-Yoga group showed a significant rise in serum fasting insulin levels and HOMA-IR after 3 months, suggesting worsening insulin resistance in these postmenopausal women. Theoretical Importance: This study provides evidence that a 12-week yoga practice can attenuate the increase in insulin resistance in early postmenopausal women. It highlights the potential of yoga as a preventive measure against the early onset of insulin resistance and the development of type 2 diabetes mellitus. Regular yoga practice can be a valuable tool for addressing the hormonal imbalances associated with early postmenopause, decreasing the morbidity and mortality related to insulin resistance and type 2 diabetes mellitus in this population. Data Collection and Analysis Procedures: Data collection involved measuring serum fasting insulin levels and calculating HOMA-IR; mean values with standard error of the mean were reported. Question Addressed: Whether a 3-month yoga practice can attenuate insulin resistance in early postmenopausal women. Conclusion: The findings support the efficacy of a 12-week yoga practice in attenuating insulin resistance in early postmenopausal women, with the potential to prevent the early onset of insulin resistance and the development of type 2 diabetes mellitus in this population.
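
For reference, HOMA-IR is computed from fasting glucose and fasting insulin with the standard formula sketched below; the example values are illustrative, not data from the study.

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    """Homeostasis model assessment of insulin resistance:
    glucose (mmol/L) x insulin (uU/mL) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

print(round(homa_ir(5.2, 12.0), 2))  # 2.77 for a hypothetical subject
```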

Keywords: postmenopause, insulin resistance, HOMA-IR, yoga, type 2 diabetes mellitus

Procedia PDF Downloads 40
347 A Mixed-Method Exploration of the Interrelationship between Corporate Governance and Firm Performance

Authors: Chen Xiatong

Abstract:

The study explores the interrelationship between corporate governance factors and firm performance in Mainland China using a mixed-method approach, with the goals of clarifying the current effectiveness of corporate governance, uncovering the complex interrelationships between governance factors and firm performance, and enhancing understanding of corporate governance strategies in Mainland China. The research combines quantitative methods, such as statistical analysis of governance factors and firm performance data, with qualitative approaches including policy research, case studies, and interviews with staff members. Quantitative data will be gathered through surveys and sampling methods, focusing on governance factors and firm performance indicators, and analyzed using statistical, mathematical, and computational techniques. Qualitative data will be collected through policy research, case studies, and staff interviews, and analyzed through thematic analysis and interpretation of policy documents, case study findings, and interview responses. The study addresses the effectiveness of corporate governance in Mainland China, the interrelationship between governance factors and firm performance, and staff members' perceptions of corporate governance strategies. The research contributes to the literature on corporate governance by providing insights into the effectiveness of governance practices in Mainland China, offering companies suggestions for improving their governance, and enriching the fields of business management and human resources management in Mainland China.

Keywords: corporate governance, business management, human resources management, board of directors

Procedia PDF Downloads 29
346 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes: it identifies individual appliances by analysing whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of the appliance features required for accurate identification of household devices. In this work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps, using unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on power demand, and then detecting the times at which each selected appliance changes state. To fit the capabilities of existing smart meters in practice, we work on low-frequency data sampled at 1/60 Hz (one reading per minute). The data are simulated with the Load Profile Generator (LPG), a numerical software package that simulates the behaviour of the occupants of a house to generate residential energy consumption data, and which had not previously been used for NILM purposes in the literature. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. The identification process relies on unsupervised techniques such as DTW; to the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector holding the values that delimit the appliance's state transitions. Appliance signatures are then formed from the extracted power, geometrical, and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms (a DTW sketch is given below). The method is evaluated on both the simulated LPG data and the real-world Reference Energy Disaggregation Dataset (REDD), computing confusion-matrix-based performance metrics: accuracy, precision, recall, and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (variance sliding window and cumulative sum).
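
As an illustration of the unsupervised matching step, the sketch below computes a classic DTW distance between a stored appliance signature and a detected event window at the paper's 1/60 Hz sampling rate; the traces and watt values are hypothetical.

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic-time-warping distance between two 1-D power traces,
    used to match a detected event against an appliance signature."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# One sample per minute (1/60 Hz), watts:
fridge_signature = [5, 120, 115, 110, 5]
detected_event = [4, 118, 117, 112, 109, 6]
print(dtw_distance(fridge_signature, detected_event))
```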

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 49
345 Pathologies in the Left Atrium Reproduced Using a Low-Order Synergistic Numerical Model of the Cardiovascular System

Authors: Nicholas Pearce, Eun-jin Kim

Abstract:

Pathologies of the cardiovascular (CV) system remain a serious and deadly health problem for human society. Computational modelling provides a relatively accessible tool for diagnosis, treatment, and research into CV disorders. However, numerical models of the CV system have largely focused on the function of the ventricles, frequently overlooking the behaviour of the atria. Furthermore, in studying the pressure-volume relationship of the heart, a key diagnostic for cardiovascular pathologies, previous works often invoke the popular yet questionable time-varying elastance (TVE) method, which imposes the pressure-volume relationship instead of calculating it consistently. Despite the convenience of the TVE method, there have been various indications of its limitations and of the need to check its validity in different scenarios. A model of the combined left ventricle (LV) and left atrium (LA) is presented that consistently accounts for various feedback mechanisms in the heart without having to use the TVE method. Specifically, a synergistic model of the left ventricle is extended and modified to include the function of the LA. The synergy of the original model is preserved by modelling the electro-mechanical and chemical functions of the micro-scale myofibers of the LA and integrating them with the micro-scale and macro-organ-scale heart dynamics of the left ventricle and the CV circulation. The atrioventricular node function is included and forms the conduction pathway for electrical signals between the atria and the ventricle. The model reproduces the essential features of LA behaviour, such as the two-phase pressure-volume relationship and the classic figure-of-eight pressure-volume loops. Using this model, disorders in the internal cardiac electrical signalling are investigated by recreating the mechano-electric feedback (MEF), which is impossible when the time-varying elastance method is used. The effects of AV node block and slow conduction are then investigated in the presence of an atrial arrhythmia. It is found that electrical disorders and arrhythmia in the LA degrade the CV system by reducing cardiac output, power, and heart rate.
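
To give a flavour of the low-order, lumped-parameter style of CV modelling, the sketch below integrates a two-element Windkessel driven by a prescribed ejection flow. This is far simpler than the model in the paper, which couples micro-scale myofiber electro-chemo-mechanics to organ-scale dynamics and avoids prescribing the pressure-volume relationship; every parameter value here is a textbook-style assumption.

```python
import math

def windkessel(heart_rate=75.0, R=1.0, C=1.5, dt=1e-3, beats=5):
    """Two-element Windkessel: C dP/dt = Q_in(t) - P/R, with a half-sine
    ejection flow during the first 30% of each beat. Units: mmHg, mL, s."""
    T = 60.0 / heart_rate
    P, trace = 80.0, []
    for step in range(int(beats * T / dt)):
        t = (step * dt) % T
        q_in = 450.0 * math.sin(math.pi * t / (0.3 * T)) if t < 0.3 * T else 0.0
        P += dt * (q_in - P / R) / C  # forward-Euler update of arterial pressure
        trace.append(P)
    return trace

pressures = windkessel()
print(round(min(pressures), 1), round(max(pressures), 1))  # rough diastolic/systolic
```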

Keywords: cardiovascular system, left atrium, numerical model, MEF

Procedia PDF Downloads 87
344 Redefining Doctors' Role in Terms of Medical Errors and Consumer Protection Act to Be in Line with Medical Ethics

Authors: Manushi Srivastava

Abstract:

Introduction: The doctor's role in, and relationship to, patient care is at the core of medical ethics. The rapid pace of medical advances, increasing consumer awareness of rights, and the rising cost of effective health care demand a robust, transparent, and patient-friendly medical care system. However, doctors' role performance still follows the activity-passivity model of the doctor-patient relationship (DPR), in which the doctor acts as a parent and instructs patients without their consent, an approach that will not serve the 21st century. The introduction of the Consumer Protection Act (CPA) into the medical profession has thus created a new challenge for the traditional doctor-patient relationship, as evidenced by the increasing number of medical litigation cases. To strengthen the system of medical services, the doctor's role is vital and should be reviewed in the present context. Objective: To understand consultants' opinions regarding medical negligence and the effect of the Consumer Protection Act on current practices of patient care. Method: This is a cross-sectional study applying both quantitative and qualitative methods. In total, 69 consultants were selected from multi-specialty hospitals of the densely populated city of Varanasi, which caters to a population of about 1.8 million. Two-stage sampling was used to select respondents: first, the major wards most susceptible to medical negligence (Medicine, Surgery, Ophthalmology, Gynaecology, Orthopaedics, and Paediatrics) were selected, and second, consultants were selected from those wards. In-depth interviews were conducted using a semi-structured schedule, and two case studies of medical negligence were carried out as part of the qualitative study. Analysis: Data were analyzed using SPSS software (21.0 trial version). The semi-structured research tool captured consultants' opinions about the pattern of medical negligence cases, the litigation and claims made by the patient community, and the inclusion of government medical services in the CPA. Descriptive statistics were computed, non-parametric tests were used to examine associations between variables, and verbatim analysis was applied to the case studies. Findings and Conclusion: A majority (92.8%) of consultants perceived changes in the behaviour of the patient community after the implementation of the CPA, as it has increased awareness of patient rights. Less than half of the consultants opined that medical negligence is an unintentional act by doctors, generally arising from communication gaps and behavioural problems between doctor and patient. Experienced consultants (> 10 years) pointed to unethical practice by doctors and patients' mal-intent to harass doctors as additional causes of medical negligence. The in-depth interviews revealed that the patient community now expects greater transparency and hence demands a cafeteria approach to the diagnosis and management of cases. On the basis of these results, we propose an 'Agreement Model' of the DPR to re-ensure ethical practice in the medical profession.

Keywords: doctors, communication, consumer protection act (CPA), medical error

Procedia PDF Downloads 142
343 The Observable Method for the Regularization of Shock-Interface Interactions

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique that is capable of regularizing shocks and sharp interfaces simultaneously in shock-interface interaction simulations. Direct numerical simulation of flows involving shocks has been investigated for many years, and many numerical methods have been developed to capture shocks; however, most of these methods rely on numerical dissipation to regularize the shocks. Moreover, in high-Reynolds-number flows the nonlinear terms of the hyperbolic partial differential equations (PDEs) dominate, constantly generating small-scale features, which makes direct numerical simulation of shocks even harder. The same difficulty arises in two-phase flows with sharp interfaces, where the nonlinear terms in the governing equations keep sharpening the interfaces into discontinuities. The main idea of the proposed technique is to average out the small scales below the resolution (the observable scale) of the computational grid by filtering the convective velocity in the nonlinear terms of the governing PDEs. The technique is named the 'observable method', and it yields a set of hyperbolic equations called the observable equations, namely the observable Navier-Stokes or Euler equations. The observable method has been applied to flow simulations involving shocks, turbulence, and two-phase flows, with promising results. In the current paper, the observable method is examined on its ability to regularize shocks and interfaces at the same time in shock-interface interaction problems; bubble-shock interactions and the Richtmyer-Meshkov instability are chosen as test cases. The observable Euler equations are solved numerically with pseudo-spectral discretization in space and the third-order Total Variation Diminishing (TVD) Runge-Kutta method in time (a one-dimensional sketch is given below). Results are presented and compared with existing publications, with particular attention to interface acceleration and deformation and to shock reflection.
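
The one-dimensional sketch below shows the core idea on the inviscid Burgers equation: the convective velocity in the nonlinear term is replaced by a Helmholtz-filtered field, and the result is advanced with a pseudo-spectral discretization and the third-order TVD Runge-Kutta scheme. The filter width alpha, grid size, and time step are assumptions for illustration; the paper's actual computations solve the observable Euler equations.

```python
import numpy as np

# Observable-style regularization of inviscid Burgers:
#   u_t + u_bar * u_x = 0,  with  (1 - alpha^2 d_xx) u_bar = u
N, alpha, dt = 256, 0.05, 1e-3
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=2.0 * np.pi / N) * 2.0 * np.pi  # integer wavenumbers
u = np.sin(x)

def rhs(u):
    u_hat = np.fft.fft(u)
    u_bar = np.fft.ifft(u_hat / (1.0 + alpha**2 * k**2)).real  # filtered convective velocity
    u_x = np.fft.ifft(1j * k * u_hat).real
    return -u_bar * u_x

for _ in range(1000):  # Shu-Osher third-order TVD Runge-Kutta to t = 1
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    u = u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))
```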

Keywords: compressible flow simulation, inviscid regularization, Richtmyer-Meshkov instability, shock-bubble interactions

Procedia PDF Downloads 327
342 Transforming Data Science Curriculum Through Design Thinking

Authors: Samar Swaid

Abstract:

Today, corporations are moving toward the adoption of Design-Thinking techniques to develop products and services, putting the consumer at the heart of the development process. One of the leading companies in Design-Thinking, IDEO (Innovation, Design, Engineering Organization), defines Design-Thinking as an approach to problem-solving that relies on a set of multi-layered skills, processes, and mindsets that help people generate novel solutions to problems. Design thinking may result in new ideas, narratives, objects, or systems; it is about redesigning systems, organizations, infrastructures, processes, and solutions in an innovative fashion based on users' feedback. Tim Brown, president and CEO of IDEO, sees design thinking as a human-centered approach that draws from the designer's toolkit to integrate people's needs, innovative technologies, and business requirements. The application of design thinking has been shown to be the road to developing innovative applications, interactive systems, scientific software, and healthcare applications, and even to rethinking business operations, as in the case of Airbnb. Recently, there has been a movement to apply design thinking to machine learning and artificial intelligence to ensure a 'wow' effect on consumers. The Association for Computing Machinery task force on the Data Science program states that 'data scientists should be able to implement and understand algorithms for data collection and analysis. They should understand the time and space considerations of algorithms. They should follow good design principles in developing software, understanding the importance of those principles for testability and maintainability.' However, this definition hides the user behind the machine: the person who works on data preparation, algorithm selection, and model interpretation. Our Data Science program therefore includes design thinking to ensure that user demands are met, that more usable machine learning tools are produced, and that new ways of framing computational thinking are developed. Here, we describe the fundamentals of Design-Thinking and teaching modules for data science programs.

Keywords: data science, design thinking, AI, curriculum, transformation

Procedia PDF Downloads 50
341 The Influence of Mechanical and Physicochemical Characteristics of Perfume Microcapsules on Their Rupture Behaviour and How This Relates to Performance in Consumer Products

Authors: Andrew Gray, Zhibing Zhang

Abstract:

The ability of consumer products to deliver a sustained perfume response can be a key driver for a variety of applications. Many compounds in perfume oils are highly volatile, meaning they readily evaporate once the product is applied and the longevity of the scent is poor. Perfume capsules have been introduced as a means of abating this evaporation once the product has been delivered. The impermeable capsules are intended to be stable within the formulation and to remain intact during delivery to the desired substrate, rupturing only when the consumer applies mechanical force, which releases the core perfume oil. This opens up the possibility of obtaining an olfactive response hours, weeks, or even months after delivery, depending on the desired application. Tailoring the properties of the polymeric capsules to better address the needs of the application is not a trivial challenge, and capsule design is currently done largely by trial and error. The aim of this work is to provide more predictive methods for capsule design for each consumer application: refining formulations so that capsules rupture at the right time for the specific application, neither too early nor too late. Finding the right balance between these extremes is essential if a benefit over the neat addition of perfume to formulations is sought. It is important to understand the forces that influence capsule rupture, first by quantifying the magnitude of these different forces, and then by assessing bulk rupture in real-world applications to understand how capsules actually respond. Samples were provided by an industrial partner, and the mechanical properties of individual capsules were characterized via a micromanipulation technique developed by Professor Zhang at the University of Birmingham (a post-processing sketch is given below). The capsules were synthesized so as to change one physicochemical property at a time, such as the core-to-wall material ratio or the average capsule size. Analysis of shell thickness via transmission electron microscopy, of size distribution via a Mastersizer, and of other attributes via a variety of further techniques confirmed that only one physicochemical property was altered in each sample. The subsequent mechanical analysis showed the effect that changing each capsule property had on the response under compression. It was, however, important to link this fundamental mechanical response to capsule performance in real-world applications. The capsule samples were therefore introduced into a formulation and exposed to full-scale stresses. GC-MS headspace analysis of the perfume oil released from broken capsules enabled quantification of what the relative strengths of capsules truly mean for product performance. Correlations were found between the mechanical strength of capsule samples and performance in terms of perfume release in consumer applications. A better understanding of the key parameters that drive performance benefits the design of future formulations by offering guidelines on which parameters can be adjusted without affecting performance, and by singling out the parameters that are essential to finding the sweet spot of capsule performance.
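
As an example of the post-processing behind micromanipulation data, the sketch below converts a measured rupture force and capsule diameter into a nominal rupture stress by normalizing against the capsule's initial cross-section; the numbers are hypothetical, not the study's data.

```python
import math

def nominal_rupture_stress(force_uN, diameter_um):
    """Rupture force divided by the capsule's initial cross-sectional area.
    uN / um^2 is numerically equal to MPa."""
    area_um2 = math.pi * (diameter_um / 2.0) ** 2
    return force_uN / area_um2

print(round(nominal_rupture_stress(force_uN=150.0, diameter_um=20.0), 2))  # ~0.48 MPa
```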

Keywords: consumer products, mechanical and physicochemical properties, perfume capsules, rupture behaviour

Procedia PDF Downloads 113