Search results for: signal representation and approximation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3255

165 IL6/PI3K/mTOR/GFAP Molecular Pathway Role in COVID-19-Induced Neurodegenerative Autophagy, Impacts and Relatives

Authors: Mohammadjavad Sotoudeheian

Abstract:

SARS-CoV-2, the virus behind the COVID-19 pandemic that began in December 2019, uses the angiotensin-converting enzyme 2 (ACE2) receptor to enter and spread through cells. ACE2 mRNA is present in almost every organ, including the nasopharynx, the lung, and the brain. Ports of entry of SARS-CoV-2 into the central nervous system (CNS) may include the arterial circulation when viremia is marked. Although CSF analysis for the evaluation of neurological symptoms in patients with COVID-19 still needs to be developed, ACE2 receptors are theoretically expressed in cerebellar cells and may be a target for SARS-CoV-2 infection in the brain. Recent evidence indicates that SARS-CoV-2 can impact the brain through direct and indirect injury. Two biomarkers of CNS injury, glial fibrillary acidic protein (GFAP) and neurofilament light chain (NFL), have been detected in the plasma of patients with COVID-19. NFL, an axonal protein expressed in neurons, is related to axonal neurodegeneration, and GFAP is over-expressed in CNS inflammation. Cytoplasmic accumulation of GFAP causes Schwann cells to malfunction, which affects myelin generation, reduces the neuroskeletal support provided by NFL during CNS inflammation, and leads to axonal degeneration. Interleukin-6 (IL-6), which is extensively over-expressed during the cytokine storm of COVID-19 inflammation, regulates gene expression, including that of GFAP, through the STAT molecular pathway. IL-6 also affects the phosphoinositide 3-kinase (PI3K)/STAT/Smads pathway. The PI3K/protein kinase B (Akt) pathway is the main modulator upstream of the mammalian target of rapamycin (mTOR), and alterations in this pathway are common in neurodegenerative diseases. Most neurodegenerative diseases show a disruption of autophagic function and display an abnormal increase in protein aggregation that promotes cellular death. Therefore, induction of autophagy has been recommended as a rational approach to help neurons clear abnormal protein aggregates and survive. mTOR is a major regulator of the autophagic process and is regulated by cellular stressors. The mTORC1 pathway and mTORC2, as complementary and important elements in mTORC1 signaling, have become relevant in the regulation of the autophagic process and cellular survival through the extracellular signal-regulated kinase (ERK) pathway.

Keywords: mTORC1, COVID-19, PI3K, autophagy, neurodegeneration

Procedia PDF Downloads 63
164 Carbon Based Wearable Patch Devices for Real-Time Electrocardiography Monitoring

Authors: Hachul Jung, Ahee Kim, Sanghoon Lee, Dahye Kwon, Songwoo Yoon, Jinhee Moon

Abstract:

We fabricated a wearable patch device, including a novel patch-type flexible dry electrode based on carbon nanofibers (CNFs) and a silicone-based elastomer (MED 6215), for real-time ECG monitoring. There are many methods to make flexible conductive polymers by mixing in metal or carbon-based nanoparticles. In this study, CNFs were selected as the conductive nanoparticles because carbon nanotubes (CNTs) are more difficult to disperse uniformly in the elastomer than CNFs, and silver nanowires are relatively expensive and easily oxidized in air. The wearable patch is composed of two parts: dry electrode parts for recording biosignals and sticky patch parts for mounting on the skin. The dry electrode parts were made by vortex mixing and baking in a prepared mold. To optimize the electrical performance and the uniformity of dispersion, we developed a unique mixing and baking process. Secondly, the sticky patch parts were made by spin-coating a soft skin adhesive, patterning it, and detaching it from a smooth-surface substrate. In this process, the attachment and detachment strengths of the sticky patch were measured and optimized using a monitoring system. The assembled patch is flexible, stretchable, easily mountable on the skin, and directly connectable to the system. To evaluate the electrical characteristics and ECG (electrocardiography) recording performance, the wearable patch was tested while varying the CNF concentration and the thickness of the dry electrode. The results showed that the CNF concentration and the thickness of the dry electrodes were important variables for obtaining high-quality ECG signals without incidental distortions. A cytotoxicity test was conducted to verify biocompatibility, and a long-term wearing test showed no skin reactions such as itching or erythema. To minimize noise from motion artifacts and line interference, we built a customized wireless, lightweight data acquisition system. ECG signals measured with this system are stable and were successfully monitored simultaneously. In summary, the fabricated wearable patch devices can readily be used for real-time ECG monitoring.
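As a side illustration of the noise-minimization step mentioned above, the following is a minimal sketch (not the authors' acquisition firmware) of suppressing mains interference and baseline wander in a recorded ECG trace; the sampling rate, mains frequency and filter settings are assumptions.

```python
# Illustrative sketch only (not the authors' acquisition system): suppressing
# mains interference and baseline wander in a raw ECG trace.
# Assumptions: the ECG is a 1-D NumPy array sampled at fs = 250 Hz, mains at 60 Hz.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def clean_ecg(ecg, fs=250.0, mains_hz=60.0):
    # Notch filter to suppress power-line interference.
    b_notch, a_notch = iirnotch(w0=mains_hz, Q=30.0, fs=fs)
    ecg = filtfilt(b_notch, a_notch, ecg)
    # Band-pass (0.5-40 Hz) to remove baseline wander and high-frequency noise.
    b_bp, a_bp = butter(N=4, Wn=[0.5, 40.0], btype="bandpass", fs=fs)
    return filtfilt(b_bp, a_bp, ecg)

if __name__ == "__main__":
    fs = 250.0
    t = np.arange(0.0, 10.0, 1.0 / fs)
    # Synthetic stand-in for an ECG: a 1.2 Hz beat-like wave plus mains hum and drift.
    raw = (np.sin(2 * np.pi * 1.2 * t)
           + 0.3 * np.sin(2 * np.pi * 60.0 * t)
           + 0.5 * t / t[-1])
    print(clean_ecg(raw, fs).shape)
```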

Keywords: carbon nanofibers, ECG monitoring, flexible dry electrode, wearable patch

Procedia PDF Downloads 159
163 Rapid Fetal MRI Using SSFSE, FIESTA and FSPGR Techniques

Authors: Chen-Chang Lee, Po-Chou Chen, Jo-Chi Jao, Chun-Chung Lui, Leung-Chit Tsang, Lain-Chyr Hwang

Abstract:

Fetal Magnetic Resonance Imaging (MRI) is a challenging task because fetal movements can cause motion artifacts in MR images. The remedy for this problem is to use fast scanning pulse sequences. The Single-Shot Fast Spin-Echo (SSFSE) T2-weighted imaging technique is routinely performed and often used as a gold standard in clinical examinations. Fast spoiled gradient-echo (FSPGR) T1-Weighted Imaging (T1WI) is often used to identify fat, calcification and hemorrhage. Fast Imaging Employing Steady-State Acquisition (FIESTA) is commonly used to identify fetal structures as well as the heart and vessels. The contrast of the FIESTA image is related to T1/T2 and is different from that of SSFSE. The advantages and disadvantages of these two scanning sequences for fetal imaging have not been clearly demonstrated yet. This study aimed to compare these three rapid MRI techniques (SSFSE, FIESTA, and FSPGR) for fetal MRI examinations. The image qualities and influencing factors among the three techniques were explored. A 1.5T GE Discovery 450 clinical MR scanner with an eight-channel high-resolution abdominal coil was used in this study. Twenty-five pregnant women were recruited and underwent fetal MRI examinations with SSFSE, FIESTA and FSPGR scanning. Multi-oriented and multi-slice images were acquired. Afterwards, the MR images were interpreted and scored by two senior radiologists. The results showed that both SSFSE and T2W-FIESTA can provide good image quality among these three rapid imaging techniques. Vessel signals on FIESTA images are higher than those on SSFSE images. The Specific Absorption Rate (SAR) of FIESTA is lower than that of the other two techniques, but it is prone to banding artifacts. FSPGR-T1WI yields a lower Signal-to-Noise Ratio (SNR) because it suffers severely from maternal and fetal movements. The scan times for the three scanning sequences were 25 sec (T2W-SSFSE), 20 sec (FIESTA) and 18 sec (FSPGR). In conclusion, all three rapid MR scanning sequences can produce high-contrast and high-spatial-resolution images. The scan time can be shortened by incorporating parallel imaging techniques so that the motion artifacts caused by fetal movements can be reduced. A good understanding of the characteristics of these three rapid MRI techniques helps technologists obtain reproducible, high-quality fetal anatomy images for prenatal diagnosis.

Keywords: fetal MRI, FIESTA, FSPGR, motion artifact, SSFSE

Procedia PDF Downloads 493
162 Multiperson Drone Control with Seamless Pilot Switching Using Onboard Camera and Openpose Real-Time Keypoint Detection

Authors: Evan Lowhorn, Rocio Alba-Flores

Abstract:

Traditional classification Convolutional Neural Networks (CNN) attempt to classify an image in its entirety. This becomes problematic when trying to perform classification with a drone’s camera in real-time due to unpredictable backgrounds. Object detectors with bounding boxes can be used to isolate individuals and other items, but the original backgrounds remain within these boxes. These basic detectors have been regularly used to determine what type of object an item is, such as “person” or “dog.” A recent advancement in computer vision, particularly for human imaging, is keypoint detection. Human keypoint detection goes beyond bounding boxes to fully isolate humans and plot points, or Regions of Interest (ROI), on their bodies within an image. ROIs can include shoulders, elbows, knees, heads, etc. These points can then be related to each other and used in deep learning methods such as pose estimation. For drone control based on human motions, poses, or signals using the onboard camera, it is important to have a simple method for pilot identification among multiple individuals while also giving the pilot fine control options for the drone. To achieve this, the OpenPose keypoint detection network was used with body and hand keypoint detection enabled. OpenPose supports combining multiple keypoint detection methods in real-time within a single network. Body keypoint detection allows simple poses to act as the pilot identifier. Hand keypoint detection, with ROIs for each finger, can then offer a greater variety of signal options for the pilot once identified. For this work, the individual must raise their non-control arm to be identified as the operator and send commands with the hand on their other arm. The drone ignores all other individuals in the onboard camera feed until the current operator lowers their non-control arm. When another individual wishes to operate the drone, they simply raise their arm once the current operator relinquishes control, and then they can begin controlling the drone with their other hand. This is all performed mid-flight with no landing or script editing required. When using a desktop with a discrete NVIDIA GPU, the drone’s 2.4 GHz Wi-Fi connection, combined with restricting OpenPose to body and hand detection only, allows this control method to perform as intended while maintaining the responsiveness required for practical use.
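To make the pilot-selection rule concrete, here is a minimal sketch of the operator-identification logic, assuming body keypoints have already been produced by OpenPose in the BODY_25 layout; the keypoint indices and the confidence threshold are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the operator-selection rule, assuming body keypoints already
# produced by OpenPose in the BODY_25 layout: pose_keypoints has shape
# (n_people, 25, 3) with (x, y, confidence) per joint. Indices and the
# confidence threshold are assumptions, not the authors' exact implementation.
import numpy as np

R_SHOULDER, R_WRIST = 2, 4   # BODY_25 indices (assumed)
L_SHOULDER, L_WRIST = 5, 7
CONF_MIN = 0.3               # minimum keypoint confidence (assumed)

def arm_raised(person, wrist, shoulder):
    """True if the wrist sits above the shoulder (image y grows downward)."""
    _, wy, wc = person[wrist]
    _, sy, sc = person[shoulder]
    return wc > CONF_MIN and sc > CONF_MIN and wy < sy

def select_operator(pose_keypoints):
    """Return (person index, command hand) for the first person with one arm
    raised, or (None, None) if nobody is signalling; all others are ignored."""
    for idx, person in enumerate(pose_keypoints):
        if arm_raised(person, R_WRIST, R_SHOULDER):
            return idx, "left"   # right arm raised -> left hand sends commands
        if arm_raised(person, L_WRIST, L_SHOULDER):
            return idx, "right"
    return None, None

# Two synthetic "people": only the second raises the right arm.
people = np.zeros((2, 25, 3))
people[:, :, 2] = 1.0                            # full confidence everywhere
people[:, [R_SHOULDER, L_SHOULDER], 1] = 100.0   # shoulders at y = 100
people[:, [R_WRIST, L_WRIST], 1] = 150.0         # both wrists lowered
people[1, R_WRIST, 1] = 50.0                     # person 1 raises the right arm
print(select_operator(people))                   # -> (1, 'left')
```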

Keywords: computer vision, drone control, keypoint detection, openpose

Procedia PDF Downloads 156
161 Quantum Graph Approach for Energy and Information Transfer through Networks of Cables

Authors: Mubarack Ahmed, Gabriele Gradoni, Stephen C. Creagh, Gregor Tanner

Abstract:

High-frequency cables commonly connect modern devices and sensors. Interestingly, the proportion of electrical components is rising fast in an attempt to achieve lighter and greener devices. Modelling the propagation of signals through these cable networks in the presence of parameter uncertainty is a daunting task. In this work, we study the response of high-frequency cable networks using both Transmission Line (TL) and Quantum Graph (QG) theories. We have successfully compared the two theories in terms of reflection spectra using measurements on real, lossy cables. We have derived a generalisation of the vertex scattering matrix to include non-uniform networks – networks of cables with different characteristic impedances and propagation constants. The QG model implicitly takes into account the pseudo-chaotic behaviour, at the vertices, of the propagating electric signal. We have successfully compared the asymptotic growth of the eigenvalues of the Laplacian with the predictions of Weyl's law. We investigate the nearest-neighbour level-spacing distribution of the resonances and compare our results with the predictions of Random Matrix Theory (RMT). To achieve this, we will compare our graphs with the generalisation of the Wigner distribution for open systems. The problem of scattering from networks of cables can also provide an analogue model for wireless communication in highly reverberant environments. In this context, we provide a preliminary analysis of the statistics of the communication capacity across cable networks, whose eventual aim is to enable detailed laboratory testing of information transfer rates using software defined radio. We specialise this analysis in particular to the case of MIMO (Multiple-Input Multiple-Output) protocols. We have successfully validated our QG model against both the TL model and laboratory measurements. The growth of the eigenvalues compares well with Weyl’s law, and the level-spacing distribution agrees well with the RMT predictions. The results achieved in the MIMO application compare favourably with the predictions of parallel ongoing research (sponsored by NEMF21).
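As an illustration of the RMT comparison step, the following sketch computes a nearest-neighbour level-spacing distribution and checks it against the Wigner (GOE) surmise, using a random GOE matrix as a stand-in for the cable-network resonances rather than the paper's QG model.

```python
# Illustrative sketch of the nearest-neighbour level-spacing comparison: a random
# GOE matrix stands in for the cable-network resonances (the paper's QG model is
# not reproduced here).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
a = rng.normal(size=(n, n))
goe = (a + a.T) / np.sqrt(2 * n)              # Gaussian Orthogonal Ensemble matrix
levels = np.sort(np.linalg.eigvalsh(goe))

# Use the central part of the spectrum, where the level density is roughly
# uniform, and normalise the spacings to unit mean (a crude "unfolding").
central = levels[n // 4 : 3 * n // 4]
s = np.diff(central)
s /= s.mean()

# Wigner surmise for the GOE: p(s) = (pi/2) s exp(-pi s^2 / 4).
grid = np.linspace(0.0, 4.0, 200)
wigner = (np.pi / 2) * grid * np.exp(-np.pi * grid**2 / 4)

hist, edges = np.histogram(s, bins=40, range=(0.0, 4.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("max |empirical - surmise| over bins:",
      np.max(np.abs(hist - np.interp(centers, grid, wigner))))
```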

Keywords: eigenvalues, multiple-input multiple-output, quantum graph, random matrix theory, transmission line

Procedia PDF Downloads 138
160 Frequency Domain Decomposition, Stochastic Subspace Identification and Continuous Wavelet Transform for Operational Modal Analysis of Three Story Steel Frame

Authors: Ardalan Sabamehr, Ashutosh Bagchi

Abstract:

Recently, Structural Health Monitoring (SHM) based on the vibration of structures has attracted the attention of researchers in different fields such as civil, aeronautical and mechanical engineering. Operational Modal Analysis (OMA) has been developed to identify the modal properties of infrastructure such as bridges and buildings. Frequency Domain Decomposition (FDD), Stochastic Subspace Identification (SSI) and the Continuous Wavelet Transform (CWT) are the three most common methods in output-only modal identification. FDD, SSI, and CWT operate in the frequency domain, the time domain, and the time-frequency plane, respectively. Thus, FDD and SSI are not able to display time and frequency at the same time. Moreover, FDD and SSI have some difficulties in noisy environments and in finding closely spaced modes. The CWT technique works on the time-frequency plane and shows reasonable performance in such conditions. Another advantage of the wavelet transform over the other current techniques is that it can also be applied to non-stationary signals. The aim of this paper is to compare the three most common modal identification techniques in finding the modal properties (such as natural frequency, mode shape, and damping ratio) of a three-story steel frame, built in the Concordia University lab, by use of ambient vibration. The frame is made of galvanized steel, 60 cm long, 27 cm wide and 133 cm high, with no bracing along the long and short spans. Three wired uniaxial accelerometers (MicroStrain, 100 mV/g sensitivity) were attached to the middle of each floor, and a gateway received the data and sent it to the PC using the Node Commander software. Real-time monitoring was performed for 20 seconds at a 512 Hz sampling rate. The test was repeated 5 times in each direction with hand shaking and an impact hammer. CWT is able to detect the instantaneous frequency by use of a ridge detection method. In this paper, a partial-derivative ridge detection technique was applied to the local maxima of the time-frequency plane to detect the instantaneous frequency. The extracted results from all three methods were compared, and the comparison demonstrated that CWT performs better in terms of accuracy in a noisy environment. The modal parameters, such as natural frequency, damping ratio and mode shapes, were identified with all three methods.
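The ridge-detection idea can be illustrated with the following sketch, which extracts an instantaneous-frequency estimate from the CWT of a synthetic chirp; it assumes the PyWavelets package and the 512 Hz sampling rate mentioned above, and is not the authors' implementation.

```python
# Illustrative sketch of ridge-based instantaneous-frequency extraction from a
# CWT (not the authors' code). A linear chirp stands in for the measured
# acceleration signal; PyWavelets is assumed to be available.
import numpy as np
import pywt

fs = 512.0                                    # sampling rate used in the test (Hz)
t = np.arange(0.0, 4.0, 1.0 / fs)
signal = np.sin(2 * np.pi * (5.0 + 2.0 * t) * t)   # chirp with rising frequency

scales = np.arange(2, 128)
coef, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)

# Ridge detection: at each time instant keep the scale with the largest |CWT|
# magnitude; the corresponding frequency is the instantaneous-frequency estimate.
ridge_idx = np.argmax(np.abs(coef), axis=0)
inst_freq = freqs[ridge_idx]

print("estimated instantaneous frequency at t = 1 s and t = 3 s (Hz):",
      inst_freq[int(1 * fs)], inst_freq[int(3 * fs)])
```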

Keywords: ambient vibration, frequency domain decomposition, stochastic subspace identification, continuous wavelet transform

Procedia PDF Downloads 270
159 On the Optimality Assessment of Nano-Particle Size Spectrometry and Its Association to the Entropy Concept

Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani

Abstract:

The particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nano-particles under the influence of the electric field in an electrical mobility spectrometer (EMS) reveals the size distribution of these particles. The accuracy of this measurement is influenced by the flow conditions, the geometry, the electric field and the particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed, and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multi-channel EMS. The result, a cloud of particles with a non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using computational fluid dynamics (CFD) to obtain the particle trajectories in the device and therefore to calculate the signal reported by each electrometer. According to the output signals (resulting from the bombardment of particles and the transfer of their charges as currents), we proposed a modification to the size of the detecting rings (which are connected to the electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, according to the Shannon definition, is the 'average amount of information contained in an event, sample or character extracted from a data stream'. Evaluating the responses (signals) obtained via various configurations of detecting rings, the configuration that gave the best predictions of the size distributions of the injected particles was the modified configuration. It was also the one that had the maximum amount of entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, entropy is extracted from the transfer matrix of the instrument for each configuration. Ultimately, various clouds of particles were introduced to the simulations, and the predicted size distributions were compared to the exact size distributions.
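A hedged sketch of the entropy benchmark is given below: a candidate transfer matrix is turned into a unit-trace, density-matrix-like operator and its von Neumann entropy is computed. The normalisation used here (rho = K K^T / tr(K K^T)) and the toy matrices are assumptions for illustration only, not necessarily the exact construction used by the authors.

```python
# Hedged sketch of the entropy benchmark: the transfer matrix K of a candidate
# ring configuration is mapped to a unit-trace operator rho = K K^T / tr(K K^T)
# (an assumed normalisation, for illustration only) and its von Neumann entropy
# -tr(rho log rho) is computed.
import numpy as np

def von_neumann_entropy(transfer_matrix):
    k = np.asarray(transfer_matrix, dtype=float)
    rho = k @ k.T
    rho /= np.trace(rho)                   # unit trace, positive semi-definite
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]     # drop numerical zeros
    return float(-np.sum(eigvals * np.log(eigvals)))

# Two toy "configurations": a nearly diagonal transfer matrix (each size channel
# seen mostly by one electrometer) versus a strongly smeared one. With this
# normalisation the near-diagonal map, which retains more size information,
# yields the larger entropy.
rng = np.random.default_rng(1)
sharp = np.eye(8) + 0.05 * rng.random((8, 8))
smeared = np.ones((8, 8)) + 0.05 * rng.random((8, 8))
print("sharp  :", von_neumann_entropy(sharp))
print("smeared:", von_neumann_entropy(smeared))
```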

Keywords: aerosol nano-particle, CFD, electrical mobility spectrometer, von Neumann entropy

Procedia PDF Downloads 314
158 An Integrated Real-Time Hydrodynamic and Coastal Risk Assessment Model

Authors: M. Reza Hashemi, Chris Small, Scott Hayward

Abstract:

The Northeast Coast of the US faces damaging effects of coastal flooding and winds due to Atlantic tropical and extratropical storms each year. Historically, several large storm events have produced substantial levels of damage to the region; the most notable of these were the Great Atlantic Hurricane of 1938, Hurricane Carol, Hurricane Bob, and recently Hurricane Sandy (2012). The objective of this study was to develop an integrated modeling system that could be used as a forecasting/hindcasting tool to evaluate and communicate the risk coastal communities face from these coastal storms. This modeling system utilizes the ADvanced CIRCulation (ADCIRC) model for storm surge predictions and the Simulating Waves Nearshore (SWAN) model for the wave environment. These models were coupled, passing information to each other and computing over the same unstructured domain, allowing for the most accurate representation of the physical storm processes. The coupled SWAN-ADCIRC model was validated and has been set up to perform real-time forecast simulations (as well as hindcasts). Modeled storm parameters were then passed to a coastal risk assessment tool. This tool, which is generic and universally applicable, generates spatial structural damage estimate maps on an individual-structure basis for an area of interest. The required inputs for the coastal risk model included detailed information about the individual structures, inundation levels, and wave heights for the selected region. Additionally, calculation of wind damage to structures was incorporated. The integrated coastal risk assessment system was then tested and applied to Charlestown, a small vulnerable coastal town along the southern shore of Rhode Island. The modeling system was applied to Hurricane Sandy and a synthetic storm. In both storm cases, the effect of natural dunes on coastal risk was investigated. The resulting damage maps for the area (Charlestown) clearly showed that the dune-eroded scenarios affected more structures and increased the estimated damage. The system was also tested in forecast mode for a large Nor'easter: Stella (March 2017). The results showed a good performance of the coupled model in forecast mode when compared to observations. Finally, the nearshore model XBeach was nested within this regional grid (ADCIRC-SWAN) to simulate nearshore sediment transport processes and coastal erosion. Hurricane Irene (2011) was used to validate XBeach, on the basis of a unique beach profile dataset for the region. XBeach showed a relatively good performance, being able to estimate eroded volumes along the beach transects with a mean error of 16%. The validated model was then used to analyze the effectiveness of several erosion mitigation methods that were recommended in a recent study of coastal erosion in New England: beach nourishment, a coastal bank (engineered core), and a submerged breakwater, as well as an artificial surfing reef. It was shown that beach nourishment and coastal banks perform better in mitigating shoreline retreat and coastal erosion.

Keywords: ADCIRC, coastal flooding, storm surge, coastal risk assessment, living shorelines

Procedia PDF Downloads 84
157 Representational Issues in Learning Solution Chemistry at Secondary School

Authors: Lam Pham, Peter Hubber, Russell Tytler

Abstract:

Students’ conceptual understanding of chemistry concepts and phenomena involves the capability to coordinate across the three levels of Johnston’s triangle model. This triplet model is based on reasoning about chemical phenomena across the macro, sub-micro and symbolic levels. In chemistry education, there is a need for further examination of inquiry-based approaches that enhance students’ conceptual learning and problem-solving skills. This research adopted a directed inquiry pedagogy, based on students constructing and coordinating representations, to investigate senior school students’ capabilities to move flexibly across Johnston’s levels when learning dilution and molar concentration concepts. The participants comprised 50 grade 11 students, 20 grade 10 students and 4 chemistry teachers, selected from 4 secondary schools located in metropolitan Melbourne, Victoria. This research into classroom practices used an ethnographic methodology and involved teachers working collaboratively with the research team to develop representational activities and lesson sequences for the instruction of a unit on solution chemistry. The representational activities included challenges (Representational Challenges, RCs) that used ‘representational tools’ to assist students to move across Johnston’s three levels for dilution phenomena. In this report, the ‘representational tool’ called the ‘cross and portion’ model was developed and used in teaching and learning the molar concentration concept. Students’ conceptual understanding and problem-solving skills when learning with this model are analysed through group case studies of year 10 and year 11 chemistry students. In learning dilution concepts, students in both group case studies actively conducted a practical experiment and used their own language and visualisation skills to represent dilution phenomena at the macroscopic level (RC1). At the sub-microscopic level, students generated and negotiated representations of the chemical interactions between solute and solvent underpinning the dilution process. At the symbolic level, students demonstrated their understanding of dilution concepts by drawing chemical structures and performing mathematical calculations. When learning molar concentration with the ‘cross and portion’ model (RC2), students coordinated across visual and symbolic representational forms and Johnston’s levels to construct representations. The analysis showed that in RC1, year 10 students needed more ‘scaffolding’ when being introduced to representations in order to make explicit the form and function of sub-microscopic representations. In RC2, year 11 students showed clarity in using visual representations (drawings) and linking them to mathematics to solve representational challenges about molar concentration. In contrast, year 10 students struggled to match up the two systems: the symbolic system of moles per litre (‘cross and portion’) and the visual representation (drawing). These conceptual problems do not lie in the students’ mathematical calculation capability but rather in their capability to align visual representations with the symbolic mathematical formulations. This research also found that students in both group case studies were able to coordinate representations when probed about the use of the ‘cross and portion’ model (in RC2) to demonstrate the molar concentration of the diluted solutions (in RC1). Students mostly succeeded in constructing ‘cross and portion’ models to represent the reduction of molar concentration along the concentration gradients.
In conclusion, this research demonstrated how the strategic introduction and coordination of chemical representations across modes, and across the macro, sub-micro and symbolic levels, supported student reasoning and problem solving in chemistry.

Keywords: cross and portion, dilution, Johnston's triangle, molar concentration, representations

Procedia PDF Downloads 110
156 Occupational Exposure and Contamination to Antineoplastic Drugs of Healthcare Professionals in Mauritania

Authors: Antoine Villa, Moustapha Mohamedou, Florence Pilliere, Catherine Verdun-Esquer, Mathieu Molimard, Mohamed Sidatt Cheikh El Moustaph, Mireille Canal-Raffin

Abstract:

Context: In Mauritania, the activity of the National Center of Oncology (NCO) has steadily risen, leading to an increase in the handling of antineoplastic drugs (AD) by healthcare professionals. In this context, the AD contamination of those professionals is a major concern for occupational physicians. It has been evaluated using biological monitoring of occupational exposure (BMOE). Methods: The intervention took place in 2015, in 2 care units, and evaluated nurses preparing and/or infusing AD and agents in charge of hygiene. Participants provided a single urine sample, at the end of the week, at the end of their shift. Five molecules were sought using specific high-sensitivity methods (UHPLC-MS/MS) with very low limits of quantification (LOQ) (cyclophosphamide (CP), ifosfamide (IF), methotrexate (MTX): 2.5 ng/L; doxorubicin (Doxo): 10 ng/L; α-fluoro-β-alanine (FBAL, 5-FU metabolite): 20 ng/L). A healthcare worker was considered 'contaminated' when an AD was detected at a urine concentration equal to or greater than the LOQ of the analytical method, or at trace concentration. Results: Twelve persons participated (6 nurses, 6 agents in charge of hygiene). Twelve urine samples were collected and analyzed. The percentage of contamination was 66.6% for all participants (n=8/12), 100% for nurses (6/6) and 33% for agents in charge of hygiene (2/6). In 62.5% (n=5/8) of the contaminated workers, two to four of the ADs were detected in the urine. CP was found in the urine of all contaminated workers. FBAL was found in four, MTX in three and Doxo in one. Only IF was not detected. Urinary concentrations (all drugs combined) ranged from 3 to 844 ng/L for nurses and from 3 to 44 ng/L for agents in charge of hygiene. The median urinary concentrations were 87 ng/L, 15.1 ng/L and 4.4 ng/L for FBAL, CP and MTX, respectively. The Doxo urinary concentration was 218 ng/L. Discussion: There is currently no biological exposure index for the interpretation of AD contamination. The contamination of these healthcare professionals is therefore established by the detection of one or more ADs in urine. These urinary contaminations are higher than the LOQs of the analytical methods, which must be as low as possible. Given the danger of ADs, the implementation of corrective measures is essential for the staff. Biological monitoring of occupational exposure is the most reliable process for identifying groups at risk, tracing insufficiently controlled exposures and acting as an alarm signal. These results show the need to educate professionals about the risks of handling ADs and/or of caring for treated patients.

Keywords: antineoplastic drugs, Mauritania, biological monitoring of occupational exposure, contamination

Procedia PDF Downloads 283
155 Individual Cylinder Ignition Advance Control Algorithms of the Aircraft Piston Engine

Authors: G. Barański, P. Kacejko, M. Wendeker

Abstract:

The impact of the ignition advance control algorithms of the ASz-62IR-16X aircraft piston engine on the combustion process is presented in this paper. This aircraft engine is a nine-cylinder, 1000 hp engine with a special electronic ignition control system. The engine has two spark plugs per cylinder, with an ignition advance angle dependent on load and the rotational speed of the crankshaft. Accordingly, in most cases, these angles are not optimal for the power generated. The scope of this paper is focused on developing algorithms to control the ignition advance angle in the electronic ignition control system of the engine. For this type of engine, i.e. a radial engine, the ignition advance angle should be controlled independently for each cylinder because of the design of such an engine and its crankshaft system. The ignition advance angle is controlled in an open-loop way, which means that the control signal (i.e. the ignition advance angle) is determined according to previously developed maps, i.e. recorded tables of the correlation between the ignition advance angle and engine speed and load. Load can be measured by engine crankshaft speed or intake manifold pressure. Due to the limited memory of the controller, the impact of other independent variables (such as cylinder head temperature or knock) on the ignition advance angle is given as a series of one-dimensional arrays known as corrective characteristics. The specified value of the ignition advance angle combines the value calculated from the primary characteristics and several correction factors calculated from the correction characteristics. Individual cylinder control can proceed in line with certain indicators determined from the pressure registered in the combustion chamber. Control is assumed to be based on the following indicators: maximum pressure, maximum pressure angle, and indicated mean effective pressure. Additionally, a knocking combustion indicator was defined. Individual control can be applied to a single set of spark plugs only, which results from two fundamental ideas behind the design of the control system: the two ignition control systems operate independently if both operate simultaneously, and the entire individual control is performed for the front spark plug only, while the rear spark plug is controlled with a fixed (or specific) offset relative to the front one or from a reference map. The developed algorithms will be verified by simulation and engine test stand experiments. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
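To make the map-plus-corrections scheme concrete, the following is a hedged sketch of combining a primary speed-load map with a one-dimensional corrective characteristic and a simple knock retard; all axis values, table entries and offsets are made-up placeholders, not calibration data for the ASz-62IR-16X.

```python
# Hedged sketch of the map-plus-corrections computation: a primary 2-D map
# indexed by engine speed and load, a 1-D corrective characteristic for cylinder
# head temperature, and a simple knock retard. All numbers are made-up
# placeholders, not calibration data for the ASz-62IR-16X engine.
import numpy as np

# Primary characteristic: advance angle [deg BTDC] vs speed [rpm] x load [kPa MAP].
speed_axis = np.array([1000.0, 1500.0, 2000.0, 2200.0])
load_axis = np.array([40.0, 60.0, 80.0, 100.0])
primary_map = np.array([
    [28.0, 26.0, 24.0, 22.0],
    [30.0, 28.0, 26.0, 24.0],
    [32.0, 30.0, 27.0, 25.0],
    [33.0, 31.0, 28.0, 26.0],
])  # rows follow speed_axis, columns follow load_axis

# Corrective characteristic: correction [deg] vs cylinder head temperature [degC].
cht_axis = np.array([100.0, 150.0, 200.0, 250.0])
cht_corr = np.array([2.0, 0.0, -2.0, -4.0])

def bilinear(x, y, xs, ys, table):
    """Bilinear interpolation of table(xs, ys) at the point (x, y)."""
    i = int(np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2))
    j = int(np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2))
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    top = (1 - ty) * table[i, j] + ty * table[i, j + 1]
    bottom = (1 - ty) * table[i + 1, j] + ty * table[i + 1, j + 1]
    return (1 - tx) * top + tx * bottom

def ignition_advance(speed_rpm, load_kpa, cht_c, knock_detected=False):
    base = bilinear(speed_rpm, load_kpa, speed_axis, load_axis, primary_map)
    correction = np.interp(cht_c, cht_axis, cht_corr)
    knock_retard = -3.0 if knock_detected else 0.0   # assumed retard step
    return base + correction + knock_retard

front = ignition_advance(1800.0, 70.0, 180.0)   # one cylinder, front plug
rear = front - 2.0                              # rear plug: fixed offset (assumed)
print(front, rear)
```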

Keywords: algorithm, combustion process, radial engine, spark plug

Procedia PDF Downloads 270
154 Model Reference Adaptive Approach for Power System Stabilizer for Damping of Power Oscillations

Authors: Jožef Ritonja, Bojan Grčar, Boštjan Polajžer

Abstract:

In recent years, electricity trade between neighboring countries has become increasingly intense. Increasing power transmission over long distances has resulted in an increase in the oscillations of the transmitted power. The damping of these oscillations can be improved by reconfiguring the network or replacing generators, but such solutions are not economically reasonable. The only cost-effective solution for improving the damping of power oscillations is to use power system stabilizers. A power system stabilizer is part of the synchronous generator control system. It utilizes the semiconductor excitation system connected to the rotor field excitation winding to increase the damping of the power system. The majority of synchronous generators are equipped with conventional power system stabilizers with fixed parameters. The control structure of conventional power system stabilizers and their tuning procedure are based on linear control theory. Conventional power system stabilizers are simple to realize, but they show insufficient damping improvement across the entire range of operating conditions. This is the reason that advanced control theories are used for the development of better power system stabilizers. In this paper, adaptive control theory for power system stabilizer design and synthesis is studied. The presented work is focused on the model reference adaptive control approach. The control signal, which assures that the controlled plant output will follow the reference model output, is generated by the adaptive algorithm. The adaptive gains are obtained as a combination of a "proportional" term and an "integral" term extended with a σ-term. The σ-term is introduced to avoid divergence of the integral gains. The necessary condition for asymptotic tracking is derived by means of hyperstability theory. The benefits of the proposed model reference adaptive power system stabilizer were evaluated as objectively as possible by means of theoretical analysis, numerical simulations and laboratory realizations. The damping of the synchronous generator oscillations in the entire operating range was investigated. The obtained results show improved damping across the entire operating area and an increase in power system stability. The results of the presented work will help in the development of a model reference power system stabilizer that should be able to replace the conventional stabilizers in power systems.
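The adaptation mechanism (a proportional term plus a σ-extended integral term) can be illustrated with the following minimal simulation sketch on a scalar first-order plant; it is not the authors' power system stabilizer or a synchronous generator model, and all gains are illustrative.

```python
# Minimal simulation sketch of a sigma-modified model reference adaptive law on
# a scalar first-order plant, illustrating the "proportional + sigma-extended
# integral" adaptive gains discussed above. It is not the authors' power system
# stabilizer or a synchronous generator model; all parameters are illustrative.
import numpy as np

a, b = 1.0, 3.0            # unknown plant: x_dot = a*x + b*u (only sign(b) assumed known)
am, bm = -4.0, 4.0         # reference model: xm_dot = am*xm + bm*r (well damped)

gamma_i, gamma_p, sigma = 20.0, 2.0, 0.1   # integral gain, proportional gain, sigma-leakage
dt, t_end = 1e-3, 10.0

x = xm = 0.0
kx_i = kr_i = 0.0          # integral parts of the adaptive gains
for k in range(int(t_end / dt)):
    t = k * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0   # square-wave reference
    e = x - xm                             # tracking error

    # Adaptive gains: proportional part plus sigma-modified integral part.
    kx = kx_i - gamma_p * e * x
    kr = kr_i - gamma_p * e * r
    u = kx * x + kr * r                    # control signal

    # Plant, reference model and integral adaptive laws (Euler integration).
    x += dt * (a * x + b * u)
    xm += dt * (am * xm + bm * r)
    kx_i += dt * (-gamma_i * e * x - sigma * kx_i)   # sigma-term limits gain drift
    kr_i += dt * (-gamma_i * e * r - sigma * kr_i)

print("final tracking error:", abs(x - xm))
```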

Keywords: power system, stability, oscillations, power system stabilizer, model reference adaptive control

Procedia PDF Downloads 113
153 MOVIDA.polis: Physical Activity mHealth Based Platform

Authors: Rui Fonseca-Pinto, Emanuel Silva, Rui Rijo, Ricardo Martinho, Bruno Carreira

Abstract:

A sedentary lifestyle is associated with the development of chronic noncommunicable diseases (obesity, hypertension, Diabetes Mellitus Type 2), and the World Health Organization, given the evidence that physical activity is a determinant of individual and collective health, defined the Physical Activity Level (PAL) as a vital sign. Strategies for increasing the practice of physical activity in all age groups have emerged from various social organizations (municipalities, universities, health organizations, companies, social groups), which increasingly develop innovative ways to promote motivation and conditions for the practice of physical activity. The adaptation of cities to the new paradigms of sustainable mobility has led to the adaptation of urban training circuits and mobilized citizens to combat sedentarism. This adaptation has accompanied the technological evolution and makes it possible to use mobile technology to monitor outdoor training programs and also, through the network connection (IoT), to use the training data to make personalized recommendations. This work presents a physical activity counseling platform to be used in the physical maintenance circuits of urban centers, MOVIDA.polis. The platform consists of a back office for the management of circuits and training stations, and a mobile application for monitoring user performance during workouts. Using a QR code, each training station is recognized by the app, and based on the individual performance records (effort perception, heart rate variation), artificial intelligence algorithms are used to make a new personalized recommendation. The results presented in this work were obtained during the proof-of-concept phase, which was carried out on the PolisLeiria training circuit in the city of Leiria (Portugal). It was possible to verify an increase in adherence to the practice of physical activity, as well as a decrease in the interval between training days. Moreover, the AI-based recommendation acts as a partner in the training and as an additional challenging factor. The platform is ready to be used by other municipalities in order to reduce the levels of sedentarism and approach the weekly goal of 150 minutes of moderate physical activity. Acknowledgments: This work was supported by Fundação para a Ciência e Tecnologia FCT-Portugal and CENTRO2020 under the scope of the MOVIDA project: 02/SAICT/2016 – 23878.

Keywords: physical activity, mHealth, urban training circuits, health promotion

Procedia PDF Downloads 146
152 Gene Expression Meta-Analysis of Potential Shared and Unique Pathways Between Autoimmune Diseases Under anti-TNFα Therapy

Authors: Charalabos Antonatos, Mariza Panoutsopoulou, Georgios K. Georgakilas, Evangelos Evangelou, Yiannis Vasilopoulos

Abstract:

The extended tissue damage and severe clinical outcomes of autoimmune diseases, accompanied by the high annual costs to the overall health care system, highlight the need for efficient therapy. Increasing knowledge of the pathophysiology of specific chronic inflammatory diseases, namely Psoriasis (PsO), the Inflammatory Bowel Diseases (IBD) consisting of Crohn’s disease (CD) and Ulcerative colitis (UC), and Rheumatoid Arthritis (RA), has provided insights into the underlying mechanisms that lead to the maintenance of the inflammation, such as Tumor Necrosis Factor alpha (TNF-α). Hence, anti-TNFα biological agents represent an ideal therapeutic approach. Despite the efficacy of anti-TNFα agents, several clinical trials have shown that 20-40% of patients do not respond to treatment. Nowadays, high-throughput technologies have been recruited in order to elucidate the complex interactions in multifactorial phenotypes, the most ubiquitous of which refer to transcriptome quantification analyses. In this context, a random-effects meta-analysis of available gene expression cDNA microarray datasets was performed between responders and non-responders to anti-TNFα therapy in patients with IBD, PsO, and RA. Publicly available datasets were systematically searched from inception to the 10th of November 2020 and selected for further analysis if they assessed the response to anti-TNFα therapy with clinical score indexes from inflamed biopsies. Specifically, 4 IBD (79 responders/72 non-responders), 3 PsO (40 responders/11 non-responders) and 2 RA (16 responders/6 non-responders) datasets were selected. After the separate pre-processing of each dataset, 4 separate meta-analyses were conducted: three disease-specific and a single combined meta-analysis on the disease-specific results. The MetaVolcano R package (v.1.8.0) was utilized for a random-effects meta-analysis through the Restricted Maximum Likelihood (REML) method. The top 1% of the most consistently perturbed genes in the included datasets was highlighted through the TopConfects approach while maintaining a 5% False Discovery Rate (FDR). Genes were considered Differentially Expressed (DEGs) if P ≤ 0.05, |log2(FC)| ≥ log2(1.25), and they were perturbed in at least 75% of the included datasets. Over-representation analysis was performed using Gene Ontology and Reactome Pathways for both up- and down-regulated genes in all 4 performed meta-analyses. Protein-protein interaction networks were also incorporated in the subsequent analyses with STRING v11.5 and Cytoscape v3.9. The disease-specific meta-analyses detected multiple distinct pro-inflammatory and immune-related down-regulated genes for each disease, such as NFKBIA, IL36, and IRAK1, respectively. Pathway analyses revealed unique and shared pathways between the diseases, such as Neutrophil Degranulation and Signaling by Interleukins. The combined meta-analysis unveiled 436 DEGs, 86 of which were up- and 350 down-regulated, confirming the aforementioned shared pathways and genes, as well as uncovering genes that participate in anti-inflammatory pathways, namely IL-10 signaling. The identification of key biological pathways and regulatory elements is imperative for the accurate prediction of the patient’s response to biological drugs.
Meta-analysis of such gene expression data could aid the challenging approach to unravel the complex interactions implicated in the response to anti-TNFα therapy in patients with PsO, IBD, and RA, as well as distinguish gene clusters and pathways that are altered through this heterogeneous phenotype.
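As a rough illustration of the per-gene combination and the DEG filter described above, the following sketch uses the DerSimonian-Laird random-effects estimator as a simple stand-in for the REML method applied through MetaVolcano; the input arrays and the coding of the "perturbed in at least 75% of datasets" criterion are assumptions.

```python
# Hedged sketch of the per-gene random-effects combination and the DEG filter.
# The DerSimonian-Laird estimator is used as a simple stand-in for the REML
# method applied through MetaVolcano in the study; input arrays are synthetic
# placeholders, and the per-dataset "perturbed" criterion is an assumption.
import numpy as np
from scipy import stats

def random_effects(log2fc, se):
    """Combine one gene's log2 fold-changes across datasets (DerSimonian-Laird)."""
    w = 1.0 / se**2
    fixed = np.sum(w * log2fc) / np.sum(w)
    q = np.sum(w * (log2fc - fixed) ** 2)
    df = len(log2fc) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (se**2 + tau2)
    pooled = np.sum(w_star * log2fc) / np.sum(w_star)
    pooled_se = np.sqrt(1.0 / np.sum(w_star))
    p_value = 2.0 * stats.norm.sf(abs(pooled / pooled_se))
    return pooled, p_value

def is_deg(log2fc, se, per_dataset_p, fc_thr=np.log2(1.25), frac=0.75):
    pooled, p_value = random_effects(log2fc, se)
    perturbed = np.mean(per_dataset_p < 0.05) >= frac   # "in >= 75% of datasets"
    return (p_value <= 0.05) and (abs(pooled) >= fc_thr) and perturbed

# Synthetic example: one gene measured in 4 responder vs non-responder datasets.
lfc = np.array([-0.45, -0.60, -0.38, -0.52])
se = np.array([0.12, 0.15, 0.10, 0.14])
p_each = np.array([0.010, 0.004, 0.020, 0.008])
print(is_deg(lfc, se, p_each))
```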

Keywords: anti-TNFα, autoimmune, meta-analysis, microarrays

Procedia PDF Downloads 148
151 New Recombinant Netrin-a Protein of Lucilia Sericata Larvae by Bac to Bac Expression Vector System in Sf9 Insect Cell

Authors: Hamzeh Alipour, Masoumeh Bagheri, Abbasali Raz, Javad Dadgar Pakdel, Kourosh Azizi, Aboozar Soltani, Mohammad Djaefar Moemenbellah-Fard

Abstract:

Background: Maggot debridement therapy is an appropriate, effective, and controlled method that uses sterilized larvae of Lucilia sericata (L. sericata) to treat wounds. Netrin-A is an enzyme in the laminin family which is secreted from the salivary gland of L. sericata and has a central role in neural regeneration and angiogenesis. This study aimed to produce a new recombinant Netrin-A protein of L. sericata larvae with the baculovirus expression vector system (BEVS) in Sf9 insect cells. Material and methods: In the first step, the gene structure was subjected to in silico studies, which included determination of antibacterial activity, prion formation risk, homology modeling, molecular docking analysis, and optimization of the recombinant protein. In the second step, the Netrin-A gene was cloned and amplified in the pTG19 vector. After digestion with the BamHI and EcoRI restriction enzymes, it was cloned into the pFastBac HTA vector. It was then transformed into DH10Bac competent cells, and the recombinant bacmid was subsequently transfected into insect Sf9 cells. The expressed recombinant Netrin-A was then purified on Ni-NTA agarose. The protein was evaluated using SDS-PAGE and western blot. Finally, its concentration was calculated with the Bradford assay method. Results: The bacmid vector construct with Netrin-A was successfully built and then expressed as Netrin-A protein in the Sf9 cell line. The molecular weight of this protein is 52 kDa, with 404 amino acids. In the in silico studies, we predicted that recombinant LSNetrin-A has antibacterial activity and no prion formation risk. The molecule has a high binding affinity to Neogenin and a lower affinity to the DCC-specific receptors. The signal peptide is located between amino acids 24 and 25. The concentration of the Netrin-A recombinant protein was calculated to be 48.8 μg/ml. It was confirmed that the gene characterized in our previous study encodes the L. sericata Netrin-A enzyme. Conclusions: The recombinant Netrin-A, a protein secreted in L. sericata salivary glands, was successfully generated. Because L. sericata larvae are used in larval therapy, the findings of the present study could be useful to researchers in future studies on wound healing.

Keywords: blowfly, BEVS, gene, immature insect, recombinant protein, Sf9

Procedia PDF Downloads 67
150 Mapping Contested Sites - Permanence Of The Temporary Mouttalos Case Study

Authors: M. Hadjisoteriou, A. Kyriacou Petrou

Abstract:

This paper will discuss ideas of social sustainability in urban design and human behavior in multicultural contested sites. It will focus on the potential of re-reading the “site” through mapping, which acts as a research methodology, and will discuss the chosen site of Mouttalos, Cyprus, as a place of multiple identities. Through a methodology of mapping using a bottom-up approach, a process of disassembling emerges that acts as a mechanism to re-examine space and place by searching for the invisible and the non-measurable, understanding the site through its detailed inhabitation patterns. The significance of this study lies in the use of mapping as an active form of thinking rather than a passive process of representation, allowing a new site to be discovered and giving multiple opportunities for adaptive urban strategies and socially engaged design approaches. We will discuss the above themes based on the chosen contested site of Mouttalos, a small Turkish Cypriot neighbourhood in the old centre of Paphos (Ktima), in the southwest of Cyprus. During the political unrest between the Greek and Turkish Cypriot communities in 1963, the area became an enclave for the Turkish Cypriots, excluding any contact with the rest of the area. Following the Turkish invasion of 1974, the residents left their homes, plots and workplaces, resettling in the north of Cyprus. Greek Cypriot refugees moved into the area. The presence of the Greek Cypriot refugees is still considered a temporary resettlement. The buildings and the residents themselves exist in a state of uncertainty. The site is documented through a series of parallel investigations into its physical conditions and history. The research methodology uses the process of mapping to expose the complex and often invisible layers of information that coexist. By registering the site through the subjective experiences and everyday stories of inhabitants, a series of cartographic recordings reveals the space between happening and narrative, and especially the space between different cultures and religions. The research put specific emphasis on engaging the public, promoting social interaction, and identifying spatial patterns of occupation by previous inhabitants through social media. The findings exposed three main areas of interest. Firstly, we identified inter-dependent relationships between permanence and temporality, characterised by elements such as signage through layers of time, past events and periodic street festivals, unfolding memory and belonging. Secondly, issues of co-ownership and occupation were found through particular narratives of exchange between the two communities and through the appropriation of space. Finally, formal and informal inhabitation of space was revealed through the presence of informal shared backyards, alternative paths, porous street edges, and formal and informal landmarks. The importance of the above findings was in achieving a shift of focus from the built infrastructure to the soft network of multiple and complex relations of dependence and autonomy. The proposed interventions for this contested site were informed and led by a new multicultural identity in which invisible qualities were revealed through the process of mapping, taking on issues of layers of time, formal and informal inhabitation, and the “permanence of the temporary”.

Keywords: contested sites, mapping, social sustainability, temporary urban strategies

Procedia PDF Downloads 393
149 Time Lag Analysis for Readiness Potential by a Firing Pattern Controller Model of a Motor Nerve System Considered Innervation and Jitter

Authors: Yuko Ishiwaka, Tomohiro Yoshida, Tadateru Itoh

Abstract:

Humans unconsciously make a preparation, known as the readiness potential (RP), before becoming aware of their own decision. For example, when recognizing a button and pressing it, the RP peaks are observed 200 ms before the initiation of the movement. It has been known that preparatory movements are acquired before actual movements, but it is still not well understood how humans obtain the RP during their growth. Regarding the question of why the brain must respond earlier, we assume that humans have to adapt to a dangerous environment to survive and therefore acquire behavior that covers the various time lags distributed in the body. Without the RP, humans cannot act quickly enough to avoid dangerous situations. In taking action, the brain makes decisions, and signals are transmitted through the spinal cord to the muscles so that the body moves according to the laws of physics. Our research focuses on the time lag of the neural signal transmitted from the brain to the muscle via the spinal cord. This time lag is one of the essential factors for the readiness potential. We propose a firing pattern controller model of a motor nerve system that considers innervation and jitter, which produce time lag. In our simulation, we adopt innervation and jitter in our proposed muscle-skeleton model because these two factors can create infinitesimal time lags. A Q10 Hodgkin-Huxley model is adopted to calculate action potentials because the refractory period produces a more significant time lag for continuous firing. Keeping the muscle power constant requires cooperative firing of motor neurons, because the refractory period stifles the continuous firing of a single neuron. One more factor producing time lag is slow or fast twitch. The expanded Hill-type model is adopted to calculate power and time lag. We simulate our muscle-skeleton model by controlling the firing pattern and discuss the relationship between the physical and the neural time lags. For our discussion, we analyze the time lag with our simulation of knee bending. The law of inertia caused the most influential time lag. The next most crucial time lag was the time to generate the action potential, induced by innervation and jitter. In our simulation, the time lag at the beginning of the knee movement is 202 ms to 203.5 ms. This means that the readiness potential should be prepared more than 200 ms before decision making.
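A minimal sketch of the Q10-scaled Hodgkin-Huxley step is shown below, using classic squid-axon parameters to illustrate how the timing of the first action potential (one component of the neural time lag) can be computed; it is not the authors' full motor-nerve or muscle model, and the injected current and temperatures are assumptions.

```python
# Minimal Hodgkin-Huxley sketch with a Q10 factor on the gating kinetics,
# illustrating how the timing of the first action potential (one component of
# the neural time lag) can be computed. Classic squid-axon parameters are used;
# this is an illustration only, not the authors' motor-nerve/muscle model.
import numpy as np

def rates(v):
    an = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(v + 65.0) / 80.0)
    am = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(v + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(v + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    return an, bn, am, bm, ah, bh

def first_spike_time(i_inj=15.0, temp_c=6.3, dt=0.01, t_end=50.0):
    q10 = 3.0 ** ((temp_c - 6.3) / 10.0)       # Q10 scaling of the rate constants
    g_na, g_k, g_l = 120.0, 36.0, 0.3          # mS/cm^2
    e_na, e_k, e_l, cm = 50.0, -77.0, -54.4, 1.0
    v, n, m, h = -65.0, 0.317, 0.053, 0.596    # resting state
    for step in range(int(t_end / dt)):
        an, bn, am, bm, ah, bh = rates(v)
        n += dt * q10 * (an * (1.0 - n) - bn * n)
        m += dt * q10 * (am * (1.0 - m) - bm * m)
        h += dt * q10 * (ah * (1.0 - h) - bh * h)
        i_ion = (g_na * m**3 * h * (v - e_na)
                 + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_inj - i_ion) / cm
        if v > 0.0:                            # crude spike threshold
            return step * dt                   # time of the first spike (ms)
    return None

for temp in (6.3, 15.0):
    print(f"T = {temp} C -> first spike at {first_spike_time(temp_c=temp)} ms")
```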

Keywords: firing patterns, innervation, jitter, motor nerve system, readiness potential

Procedia PDF Downloads 803
148 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects

Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm

Abstract:

Soft robots have drawn great interest recently due to the rich range of possible shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to enable the integration of a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot based on Euler-Bernoulli beam theory. Nominally (unpowered), the robot lies flat on the ground; when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, which is necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing its analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to exactly determine which parts of the soft robot are lifted off the ground, and the exact shape of these sections, for an arbitrary array of piezoelectric voltages and configurations. (ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to our model and analyzing shear forces, the model can determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot. From this, the spatial distribution of friction forces between the ground and the robot can be determined. (iii) By combining this spatial friction distribution with the shape of the soft robot, as a function of time as the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm thick steel foil. Piezoelectric properties of commercially available thin-film piezoelectric actuators were assumed. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modelled the system using PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each of which consisted of two rigid arms with appropriate mass connected by a 'motor' whose torque was set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown both for the full deformation shape and for the motion of the robot.
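The gravity-free case mentioned above can be illustrated with the following sketch, which integrates the Euler-Bernoulli curvature twice to obtain the static shape of a thin foil under a linear array of piezoelectric actuators, each assumed to impose a constant bending moment proportional to its voltage; the dimensions, voltages and moment-per-volt coefficient are placeholders, not the authors' device parameters.

```python
# Hedged sketch of the gravity-free case: the static shape of a thin steel foil
# with a linear array of piezoelectric actuators, each assumed to impose a
# constant bending moment proportional to its voltage over its segment. The
# curvature w''(x) = M(x)/(EI) is integrated twice (Euler-Bernoulli beam theory)
# with a clamped left end. Dimensions, voltages and the moment-per-volt
# coefficient are placeholders, not the authors' device parameters.
import numpy as np

L = 0.10                          # foil length (m)
width, thickness = 0.02, 50e-6    # cross-section (m)
E = 200e9                         # Young's modulus of steel (Pa)
I = width * thickness**3 / 12.0   # second moment of area (m^4)

n_actuators = 5
voltages = np.array([40.0, -20.0, 0.0, 30.0, -40.0])   # applied voltages (V)
moment_per_volt = 2e-7            # assumed actuator coefficient (N*m per V)

x = np.linspace(0.0, L, 1001)
dx = x[1] - x[0]
# Piecewise-constant moment: each actuator covers an equal segment of the foil.
segment = np.minimum((x / (L / n_actuators)).astype(int), n_actuators - 1)
M = moment_per_volt * voltages[segment]

# Integrate the curvature twice, with w(0) = w'(0) = 0 at the clamped end.
curvature = M / (E * I)
slope = np.cumsum(curvature) * dx
w = np.cumsum(slope) * dx

print("tip deflection (m):", w[-1])
```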

Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology

Procedia PDF Downloads 144
147 Experimental Analysis of Supersonic Combustion Induced by Shock Wave at the Combustion Chamber of the 14-X Scramjet Model

Authors: Ronaldo de Lima Cardoso, Thiago V. C. Marcos, Felipe J. da Costa, Antonio C. da Oliveira, Paulo G. P. Toro

Abstract:

The 14-X is a strategic project of the Brazilian Air Force Command to develop a technological demonstrator of a hypersonic air-breathing propulsion system based on supersonic combustion, designed to fly in the Earth's atmosphere at an altitude of 30 km and Mach number 10. The 14-X is under development at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute of Advanced Studies. The program began in 2007 and was planned to have three stages: development of the waverider configuration, development of the scramjet configuration, and finally the ground tests in the hypersonic shock tunnel T3. The installation configuration of the model, based on the scramjet of the 14-X, in the test section of the hypersonic shock tunnel was designed to reproduce and test the flight conditions at the inlet of the combustion chamber. Experimental studies with a hypersonic shock tunnel require special data acquisition techniques. To measure the pressure along the geometry of the experimental model tested, we used 30 PCB® model 122A22 pressure transducers. The piezoelectric crystals of a pressure transducer produce an electric signal when subjected to a pressure variation (PCB® PIEZOTRONIC, 2016). The signals from the pressure transducers were read with an oscilloscope. After the studies began, we observed that the pressure inside the combustion chamber was lower than expected. One solution to improve the pressure inside the combustion chamber was to install an obstacle to provide higher temperature and pressure. To confirm whether combustion occurs, the emission spectroscopy technique was selected. The region analyzed by the emission spectroscopy system is the edge of the obstacle installed inside the combustion chamber. The emission spectroscopy technique was used to observe the emission of OH*, confirming (or not) the combustion of the mixture of atmospheric air at supersonic speed and the hydrogen fuel inside the combustion chamber of the model. This paper shows the results of experimental studies of supersonic combustion induced by a shock wave, performed at the Hypersonic Shock Tunnel T3 using the 14-X scramjet model. This paper also provides important data about the combustion studies using the model based on the engine of the 14-X (second stage of the 14-X Program), indicating possible corrections to be made in the next stages of the program or in other models for experimental study.

Keywords: 14-X, experimental study, ground tests, scramjet, supersonic combustion

Procedia PDF Downloads 356
146 Hawaii, Colorado, and Netherlands: A Comparative Analysis of the Respective Space Sectors

Authors: Mclee Kerolle

Abstract:

For more than 50 years, the state of Hawaii has had the beginnings of a burgeoning commercial aerospace presence statewide. While Hawaii provides the aerospace industry with unique assets concerning geographic location, lack of range safety issues, and other factors critical to aerospace development, Hawaii’s strategy and commitment to aerospace have been unclear. For this reason, this paper presents a comparative analysis of Hawaii’s space sector with two of the world’s leading space sectors, Colorado and the Netherlands, in order to provide a strategic plan that establishes a firm position going forward to support Hawaii’s aerospace development statewide. This plan will include financial and other economic incentives, legislatively supported by the State, to help grow and diversify Hawaii’s aerospace sector. The first part of this paper will examine the business model adopted by the Colorado Space Coalition (CSC), a group of industry stakeholders working to make Colorado a center of excellence for aerospace, as a blueprint for growth in Hawaii’s space sector. The second section will examine the business model adopted by the Netherlands Space Business Incubation Centre (NSBIC), a European Space Agency (ESA) affiliated program that offers business support for entrepreneurs to turn space-connected business ideas into commercial companies. This will serve as a blueprint for incentivizing space businesses to launch and develop in Hawaii. The third section will analyze the current policies that both the CSC and NSBIC employ to promote industry expansion and legislative advocacy. The final section takes the findings from both space sectors and applies their most adaptable features to a Hawaii-specific space business model that takes into consideration the unique advantages and disadvantages found in developing Hawaii’s space sector. The findings of this analysis will show that the development of a strategic plan based on a comparative analysis that creates high-technology jobs and new pathways for a trained workforce in the space sector, as well as eliciting state support and direction, will achieve the goal of establishing Hawaii as a center of space excellence. This analysis will also serve as a signal to the federal government, the private sector, and the international community that Hawaii is indeed serious about developing its aerospace industry. Ultimately, this analysis and the subsequent aerospace development plan will serve as a blueprint for the benefit of all space-faring nations seeking to develop their space sectors.

Keywords: Colorado, Hawaii, Netherlands, space policy

Procedia PDF Downloads 142
145 Rebuilding Beyond Bricks: The Environmental Psychological Foundations of Community Healing After the Lytton Creek Fire

Authors: Tugba Altin

Abstract:

In a time characterized by escalating climate change impacts, communities globally face extreme events with deep-reaching tangible and intangible consequences. At the intersection of these phenomena lies the profound impact on the cultural and emotional connections that individuals forge with their environments. This study casts a spotlight on the Lytton Creek Fire of 2021, showcasing it as an exemplar of both the visible destruction brought by such events and the more covert yet deeply impactful disturbances to place attachment (PA). Defined as the emotional and cognitive bond individuals form with their surroundings, PA is critical in comprehending how such catastrophic events reshape cultural identity and the bond with the land. Against the stark backdrop of the Lytton Creek Fire's devastation, the research seeks to unpack the multilayered dynamics of PA amidst the tangible wreckage and the intangible repercussions such as emotional distress and disrupted cultural landscapes. Delving deeper, it examines how affected populations renegotiate their affiliations with these drastically altered environments, grappling with both the tangible loss of their homes and the intangible challenges to solace, identity, and community cohesion. This exploration is instrumental in the broader climate change narrative, as it offers crucial insights into how these personal-place relationships can influence and shape climate adaptation and recovery strategies. Departing from traditional data collection methodologies, this study adopts an interpretive phenomenological approach enriched by hermeneutic insights and places the experiences of the Lytton community and its co-researchers at its core. Instead of conventional interviews, innovative methods like walking audio sessions and photo elicitation are employed. These techniques allow participants to immerse themselves back into the environment, reviving and voicing their memories and emotions in real-time. Walking audio captures reflections on spatial narratives after the trauma, whereas photo voices encapsulate the intangible emotions, presenting a visual representation of place-based experiences. Key findings emphasize the indispensability of addressing both the tangible and intangible traumas in community recovery efforts post-disaster. The profound changes to the cultural landscape and the subsequent shifts in PA underscore the need for holistic, culturally attuned, and emotionally insightful adaptation strategies. These strategies, rooted in the lived experiences and testimonies of the affected individuals, promise more resonant and effective recovery efforts. The research further contributes to climate change discourse, highlighting the intertwined pathways of tangible reconstruction and the essentiality of emotional and cultural rejuvenation. Furthermore, the use of participatory methodologies in this inquiry challenges traditional research paradigms, pointing to potential evolutionary shifts in qualitative research norms. Ultimately, this study underscores the need for a more integrative approach in addressing the aftermath of environmental disasters, ensuring that both physical and emotional rebuilding are given equal emphasis.

Keywords: place attachment, community recovery, disaster response, sensory responses, intangible traumas, visual methodologies

Procedia PDF Downloads 34
144 Hydrogen Production Using an Anion-Exchange Membrane Water Electrolyzer: Mathematical and Bond Graph Modeling

Authors: Hugo Daneluzzo, Christelle Rabbat, Alan Jean-Marie

Abstract:

Water electrolysis is one of the most advanced technologies for producing hydrogen and can be easily combined with electricity from different sources. Under the influence of electric current, water molecules can be split into oxygen and hydrogen. The production of hydrogen by water electrolysis favors the integration of renewable energy sources into the energy mix by compensating for their intermittence through the storage of the energy produced when production exceeds demand and its release during off-peak production periods. Among the various electrolysis technologies, anion exchange membrane (AEM) electrolyzer cells are emerging as a reliable technology for water electrolysis. Modeling and simulation are effective tools to save time, money, and effort during the optimization of operating conditions and the investigation of the design. Modeling and simulation become even more important when dealing with multiphysics dynamic systems. One such system is the AEM electrolysis cell, which involves complex physico-chemical reactions. Once developed, models may be used to understand the underlying mechanisms and to control the system and detect flaws in it. Several modeling methods have been proposed. These methods can be separated into two main approaches, namely equation-based modeling and graph-based modeling. The former approach is less user-friendly and difficult to update, as it represents the system with ordinary or partial differential equations. The latter approach is more user-friendly and allows a clear representation of physical phenomena: the system is depicted by connecting subsystems, so-called blocks, through ports based on their physical interactions, hence being suitable for multiphysics systems. Among the graphical modeling methods, the bond graph is receiving increasing attention as it is domain-independent and relies on the energy exchange between the components of the system. At present, few studies have investigated the modeling of AEM systems. A mathematical model and a bond graph model were used in previous studies to model the electrolysis cell performance. In this study, experimental data from the literature were simulated in OpenModelica using both the bond graph and the mathematical approach. The polarization curves at different operating conditions obtained by both approaches were compared with experimental ones. Both models predicted the polarization curves satisfactorily, with error margins lower than 2% for the equation-based model and lower than 5% for the bond graph model. The activation polarization of the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER) was behind the voltage loss in the AEM electrolyzer, whereas ion conduction through the membrane resulted in the ohmic loss. Therefore, highly active electro-catalysts are required for both HER and OER, while high-conductivity AEMs are needed to effectively lower the ohmic losses. The bond graph simulation of the polarization curve at various operating temperatures illustrated that voltage increases with temperature owing to the technology of the membrane. Simulating the polarization curve allows designs to be tested virtually, reducing the cost and time of experimental testing and improving design optimization. Further improvements can be made by implementing the bond graph model in a real power-to-gas-to-power scenario.
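For readers unfamiliar with the quantity being compared, the sketch below computes a generic equation-based polarization curve (reversible voltage plus HER/OER activation overpotentials plus ohmic loss) in Python; the exchange current densities, transfer coefficients, and area-specific resistance are illustrative placeholders, not parameters fitted to the AEM cell of this study, and the sketch does not reproduce the bond graph model.

```python
import numpy as np

# Minimal equation-based sketch of an electrolyzer polarization curve:
# cell voltage = reversible voltage + activation overpotentials (hyperbolic
# Butler-Volmer form) + ohmic loss. All parameter values are placeholders.
F = 96485.0          # Faraday constant [C/mol]
R = 8.314            # gas constant [J/(mol K)]

def cell_voltage(j, T=333.15, E_rev=1.23, j0_her=1e-3, j0_oer=1e-5,
                 alpha_her=0.5, alpha_oer=0.5, r_ohm=0.15):
    """j: current density [A/cm^2]; r_ohm: area-specific resistance [ohm cm^2]."""
    eta_her = (R * T) / (alpha_her * F) * np.arcsinh(j / (2.0 * j0_her))
    eta_oer = (R * T) / (alpha_oer * F) * np.arcsinh(j / (2.0 * j0_oer))
    return E_rev + eta_her + eta_oer + r_ohm * j

j = np.linspace(0.01, 2.0, 50)               # current density sweep [A/cm^2]
for ji, vi in zip(j[::10], cell_voltage(j)[::10]):
    print(f"{ji:5.2f} A/cm^2 -> {vi:.3f} V")
```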

Keywords: hydrogen production, anion-exchange membrane, electrolyzer, mathematical modeling, multiphysics modeling

Procedia PDF Downloads 57
143 Innovative Preparation Techniques: Boosting Oral Bioavailability of Phenylbutyric Acid Through Choline Salt-Based API-Ionic Liquids and Therapeutic Deep Eutectic Systems

Authors: Lin Po-Hsi, Sheu Ming-Thau

Abstract:

Urea cycle disorders (UCD) are rare genetic metabolic disorders that compromise the body's urea cycle. Sodium phenylbutyrate (SPB) is a medication commonly administered in tablet or powder form to lower ammonia levels. Nonetheless, its high sodium content poses risks to sodium-sensitive UCD patients. This necessitates an alternative drug formulation to mitigate sodium load and optimize drug delivery for UCD patients. This study focused on crafting a novel oral drug formulation for UCD, leveraging choline bicarbonate and phenylbutyric acid. Active pharmaceutical ingredient-ionic liquids (API-ILs) and therapeutic deep eutectic systems (THEDES) were formed by combining these with choline chloride. These systems display characteristics such as maintaining a liquid state at room temperature and exhibiting enhanced solubility, which in turn amplifies the drug dissolution rate, permeability, and ultimately oral bioavailability. Incorporating choline-based phenylbutyric acid as a substitute for traditional SPB can effectively curtail the sodium load in UCD patients. Our in vitro dissolution experiments revealed that the ILs and DESs synthesized from choline bicarbonate and choline chloride with phenylbutyric acid surpassed commercial tablets in dissolution speed. Pharmacokinetic evaluations in SD rats indicated a notable increase in the oral bioavailability of phenylbutyric acid, underscoring the efficacy of choline salt ILs in augmenting its bioavailability. Additional in vitro intestinal permeability tests in SD rats confirmed that the ILs formulated with choline bicarbonate and phenylbutyric acid demonstrate superior permeability compared to their sodium and acid counterparts. To conclude, choline salt ILs developed from choline bicarbonate and phenylbutyric acid present a promising avenue for UCD treatment, with the added benefit of reduced sodium load. The sustained-release capabilities of DESs position them favorably for drug delivery, while the low toxicity and cost-effectiveness of choline chloride signal further potential in formulation engineering. Overall, this drug formulation heralds a prospective therapeutic avenue for UCD patients.
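As a minimal illustration of how relative oral bioavailability is typically estimated from pharmacokinetic profiles, the Python sketch below computes trapezoidal AUC values and their ratio; the concentration-time data are synthetic and only stand in for the SD rat profiles reported in the study.

```python
import numpy as np

# Minimal sketch: relative oral bioavailability from plasma concentration-time
# profiles, F_rel = AUC_test / AUC_reference at equal doses. Synthetic data only.
t = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0])            # time [h]
c_reference = np.array([0.0, 2.1, 4.8, 6.0, 4.2, 1.9, 0.8, 0.3])   # e.g. SPB tablet [ug/mL]
c_test = np.array([0.0, 3.5, 7.2, 8.1, 5.0, 2.2, 0.9, 0.3])        # e.g. choline-based IL [ug/mL]

def auc_trapezoidal(conc, time):
    """Linear trapezoidal area under the concentration-time curve."""
    return float(np.sum((conc[1:] + conc[:-1]) / 2.0 * np.diff(time)))

auc_ref = auc_trapezoidal(c_reference, t)
auc_test = auc_trapezoidal(c_test, t)
print(f"AUC ref = {auc_ref:.2f}, AUC test = {auc_test:.2f}, "
      f"relative bioavailability = {auc_test / auc_ref:.2f}")
```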

Keywords: phenylbutyric acid, sodium phenylbutyrate, choline salt, ionic liquids, deep eutectic systems, oral bioavailability

Procedia PDF Downloads 78
142 Understanding the Role of Concussions as a Risk Factor for Multiple Sclerosis

Authors: Alvin Han, Reema Shafi, Alishba Afaq, Jennifer Gommerman, Valeria Ramaglia, Shannon E. Dunn

Abstract:

Adolescents engaged in contact sports can suffer recurrent brain concussions with no loss of consciousness and no need for hospitalization, yet they face the possibility of long-term neurocognitive problems. Recent studies suggest that head concussive injuries during adolescence can also predispose individuals to multiple sclerosis (MS). The underlying mechanisms of how brain concussions predispose to MS are not understood. Here, we hypothesize that: (1) recurrent brain concussions prime microglial cells, the tissue-resident myeloid cells of the brain, setting them up for exacerbated responses when exposed to additional challenges later in life; and (2) brain concussions lead to the sensitization of myelin-specific T cells in the peripheral lymphoid organs. Towards addressing these hypotheses, we implemented a mouse model of closed head injury that uses a weight-drop device. First, we calibrated the model in male 12-week-old mice and established that a weight drop from a 3 cm height induced mild neurological symptoms (mean neurological score of 1.6 ± 0.4 at 1 hour post-injury) from which the mice fully recovered by 72 hours post-trauma. Then, we performed immunohistochemistry on the brains of concussed mice at 72 hours post-trauma. Although the mice had recovered from all neurological symptoms, immunostaining for leukocytes (CD45) and IBA-1 revealed no peripheral immune infiltration but an increase in the intensity of IBA-1+ staining compared to uninjured controls, suggesting that resident microglia had acquired a more active phenotype. This microglial activation was most apparent in the white matter tracts of the brain and in the olfactory bulb. Immunostaining for the microglia-specific homeostatic marker TMEM119 showed a reduction in the TMEM119+ area in the brains of concussed mice compared to uninjured controls, confirming a loss of this homeostatic signal by microglia after injury. Future studies will test whether single or repetitive concussive injury can worsen or accelerate autoimmunity in male and female mice. Understanding these mechanisms will guide the development of timed and targeted therapies to prevent MS from getting started in people at risk.

Keywords: concussion, microglia, microglial priming, multiple sclerosis

Procedia PDF Downloads 75
141 Significant Factor of Magnetic Resonance for Survival Outcome in Rectal Cancer Patients Following Neoadjuvant Combined Chemotherapy and Radiation Therapy: Stratification of Lateral Pelvic Lymph Node

Authors: Min Ju Kim, Beom Jin Park, Deuk Jae Sung, Na Yeon Han, Kichoon Sim

Abstract:

Purpose: The purpose of this study is to determine the significant magnetic resonance (MR) imaging factors of lateral pelvic lymph nodes (LPLN) for the assessment of survival outcomes of neoadjuvant combined chemotherapy and radiation therapy (CRT) in patients with mid/low rectal cancer. Materials and Methods: The institutional review board approved this retrospective study of 63 patients with mid/low rectal cancer who underwent MR before and after CRT; patient consent was not required. Surgery was performed within 4 weeks after CRT. The locations of LPLNs were divided into four groups: 1) common iliac, 2) external iliac, 3) obturator, and 4) internal iliac lymph nodes. The short- and long-axis diameters, number, shape (ovoid vs. round), signal intensity (homogeneous vs. heterogeneous), margin (smooth vs. irregular), and diffusion-weighted restriction of LPLNs were analyzed on pre- and post-CRT images. For treatment response based on size, lymph node groups were defined as group 1, short-axis diameter ≤ 5 mm on both MR examinations; group 2, > 5 mm before CRT changing to ≤ 5 mm after CRT; and group 3, persistent size > 5 mm before and after CRT. Clinical findings were also evaluated. Disease-free survival and overall survival rates were evaluated, and the risk factors for survival outcomes were analyzed using Cox regression analysis. Results: Patients in group 3 (persistent size > 5 mm) showed significantly lower survival rates than those in groups 1 and 2 (disease-free survival rates of 36.1% vs. 78.8% and 88.8%, p < 0.001). The size response (groups 1-3), multiplicity of LPLNs, carcinoembryonic antigen (CEA) level, patient age, T and N stage, vessel invasion, and perineural invasion were significant factors affecting the disease-free or overall survival rate on univariate analysis (p < 0.05). Persistent size (group 3) and multiplicity of LPLNs were independent risk factors among MR imaging features influencing the disease-free survival rate (HR = 10.087, p < 0.05; HR = 4.808, p < 0.05). Perineural invasion and T stage were independent histologic risk factors (HR = 16.594, p < 0.05; HR = 15.891, p < 0.05). Conclusion: Persistent size greater than 5 mm and multiplicity of LPLNs on both pre- and post-CRT MR were significant MR factors affecting survival outcomes in patients with mid/low rectal cancer.
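A minimal sketch of the Cox proportional-hazards step used for the survival analysis is given below, assuming the lifelines Python package is available; the data frame is synthetic, and the variable names merely echo the study's candidate risk factors.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter   # assumes the lifelines package is installed

# Minimal sketch of a Cox regression for disease-free survival. Data are synthetic.
rng = np.random.default_rng(0)
n = 63
persistent_lpln = rng.integers(0, 2, n)   # LPLN > 5 mm both before and after CRT
multiple_lpln = rng.integers(0, 2, n)
perineural_inv = rng.integers(0, 2, n)

# Synthetic event times: higher hazard when the risk factors are present
hazard = 0.02 * np.exp(1.5 * persistent_lpln + 0.8 * multiple_lpln + 1.0 * perineural_inv)
time = rng.exponential(1.0 / hazard)
event = (time < 60).astype(int)           # administrative censoring at 60 months
time = np.minimum(time, 60)

df = pd.DataFrame({"dfs_months": time, "recurrence": event,
                   "persistent_lpln": persistent_lpln,
                   "multiple_lpln": multiple_lpln,
                   "perineural_inv": perineural_inv})

cph = CoxPHFitter()
cph.fit(df, duration_col="dfs_months", event_col="recurrence")
cph.print_summary()   # hazard ratios (exp(coef)) with confidence intervals and p-values
```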

Keywords: rectal cancer, MRI, lymph node, combined chemoradiotherapy

Procedia PDF Downloads 120
140 Comparison of Spiking Neuron Models in Terms of Biological Neuron Behaviours

Authors: Fikret Yalcinkaya, Hamza Unsal

Abstract:

To understand how neurons work, experimental studies in neural science must be combined with numerical simulations of neuron models in a computer environment. In this regard, the simplicity and applicability of spiking neuron model functions have been of great interest in computational and numerical neuroscience in recent years. Spiking neuron models can be classified by the various neuronal behaviours they exhibit, such as spiking and bursting. These classifications are important for researchers working in theoretical neuroscience. In this paper, three different spiking neuron models, Izhikevich, Adaptive Exponential Integrate-and-Fire (AEIF), and Hindmarsh-Rose (HR), all based on systems of first-order differential equations, are discussed and compared. First, the physical meaning, derivation, and differential equations of each model are provided and simulated in the MATLAB environment. Then, by selecting appropriate parameters, the models were visually examined in MATLAB with the aim of demonstrating which model can simulate well-known biological neuron behaviours such as tonic spiking, tonic bursting, mixed-mode firing, spike frequency adaptation, resonator, and integrator. As a result, the Izhikevich model was shown to reproduce regular spiking, continuous bursting (chattering), intrinsically bursting, thalamo-cortical, low-threshold spiking, and resonator behaviours. The Adaptive Exponential Integrate-and-Fire model was able to produce firing patterns such as regular spiking, adaptation, initial bursting, regular bursting, delayed spiking, delayed regular bursting, transient spiking, and irregular spiking. The Hindmarsh-Rose model showed three different dynamic neuron behaviours: spiking, bursting, and chaotic. From these results, the Izhikevich cell model may be preferred due to its ability to reflect the true behaviour of the nerve cell, its ability to produce different types of spikes, and its suitability for use in larger-scale brain models. The most important reason for choosing the Adaptive Exponential Integrate-and-Fire model is that it can create rich firing patterns with fewer parameters. The chaotic behaviour of the Hindmarsh-Rose neuron model, like that of some other chaotic systems, is thought to be useful in many scientific and engineering applications, such as physics, secure communication, and signal processing.
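As an illustration of the simplest of these models, the sketch below integrates the Izhikevich equations with a forward Euler step in Python (the original study used MATLAB); the regular-spiking parameter set and input current are standard published values, not settings taken from this paper.

```python
import numpy as np

# Minimal sketch of the Izhikevich model (Izhikevich, 2003):
#   v' = 0.04 v^2 + 5 v + 140 - u + I
#   u' = a (b v - u),  with reset v <- c, u <- u + d when v >= 30 mV.
# The parameter set (a, b, c, d) below gives regular spiking; other well-known
# sets produce chattering, intrinsically bursting, etc.
def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T=500.0, dt=0.25):
    n = int(T / dt)
    v, u = -65.0, b * -65.0
    spikes, v_trace = [], np.empty(n)
    for k in range(n):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)   # forward Euler step
        u += dt * a * (b * v - u)
        if v >= 30.0:                                        # spike: record and reset
            spikes.append(k * dt)
            v, u = c, u + d
        v_trace[k] = v
    return np.array(spikes), v_trace

spike_times, _ = izhikevich()   # regular spiking with the default parameters
print(f"{len(spike_times)} spikes in 500 ms, first at {spike_times[0]:.1f} ms")
```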

Keywords: Izhikevich, adaptive exponential integrate fire, Hindmarsh Rose, biological neuron behaviours, spiking neuron models

Procedia PDF Downloads 147
139 A Multimodal Discourse Analysis of Gender Representation on Health and Fitness Magazine Cover Pages

Authors: Nashwa Elyamany

Abstract:

In visual cultures, namely that of the United States, media representations are such influential and pervasive reflections of societal norms and expectations that they affect the manner in which both genders view themselves. Health and fitness magazines fall within the realm of visual culture. Since the main goal of communication is to ensure proper dissemination of information so that the target audience grasps the intended messages, it becomes imperative that magazine publishers, editors, advertisers, and image producers use the different modes of communication within their reach to convey messages to their readers and viewers. A rapidly growing flow of multimodality floods popular discourse, particularly health and fitness magazine cover pages. The use of well-crafted cover lines and visual images is imbued with agendas, consumerist ideologies, and properties capable of effectively conveying implicit and explicit meaning to potential readers and viewers. In essence, the primary goal of this thesis is to interrogate the multi-semiotic operations and manifestations of hegemonic masculinity and femininity in male and female body culture, particularly on the cover pages of the twin American magazines Men's Health and Women's Health, using corpora spanning from 2011 to mid-2016. The researcher explores the semiotic resources that contribute to shaping and legitimizing a new form of postmodern, consumerist, gendered discourse that positions the reader-viewer ideologically. Methodologically, the researcher carries out analysis on the macro and micro levels. On the macro level, the researcher takes a critical stance to illuminate the ideological nature of the multimodal ensemble of the cover pages, and, on the micro level, seeks to put forward new theoretical and methodological routes through which the semiotic choices invested in the media texts can be more objectively scrutinized. On the macro level, a 'themes' analysis is initially conducted to isolate the overarching themes that dominate the fitness discourse on the cover pages under study. It is argued that variation in the frequencies of such themes indicates, broadly speaking, which facets of hegemonic masculinity and femininity are infused in the fitness discourse on the cover pages. On the micro level, this research work encompasses three sub-levels of analysis. The researcher follows an SF-MMDA approach, drawing on a trio of analytical frameworks: Halliday's SFG for the verbal analysis; Kress & van Leeuwen's VG for the visual analysis; and CMT in relation to Sperber & Wilson's RT for the pragma-cognitive analysis of multimodal metaphors and metonymies. The data are presented as detailed descriptions in conjunction with frequency tables, ANOVA with alpha = 0.05, and MANOVA in the multiple phases of analysis. Insights and findings from this multi-faceted, social-semiotic analysis are interpreted in light of Cultivation Theory, Self-objectification Theory, and the literature to date. Implications for future research include the implementation of a multi-dimensional approach whereby linguistic and visual analytical models are deployed with special regard to cultural variation.
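A minimal sketch of the one-way ANOVA step (alpha = 0.05) is given below using SciPy; the theme-frequency counts are invented purely for illustration and are not the corpus data of this thesis.

```python
from scipy import stats

# Minimal sketch of a one-way ANOVA comparing theme frequencies across cover-page
# groups. The counts below are invented for illustration.
mens_health   = [12, 15, 11, 14, 13, 16]   # hypothetical theme counts per year
womens_health = [9, 8, 10, 7, 11, 9]

f_stat, p_value = stats.f_oneway(mens_health, womens_health)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, "
      f"{'reject' if p_value < 0.05 else 'fail to reject'} H0 at alpha = 0.05")
```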

Keywords: gender, hegemony, magazine cover page, multimodal discourse analysis, multimodal metaphor, multimodal metonymy, systemic functional grammar, visual grammar

Procedia PDF Downloads 317
138 Palynological Investigation and Quality Determination of Honeys from Some Apiaries in Northern Nigeria

Authors: Alebiosu Olugbenga Shadrak, Victor Victoria

Abstract:

Honey bees exhibit preferences in their foraging behaviour on pollen and nectar for food and honey production, respectively. Melissopalynology is the study of pollen in honey and other honey products. Several studies have been conducted on the palynology of honeys from the southern parts of Nigeria, but records from the northern region of the country are relatively scant. The present study aimed to reveal the plants favourably visited by honey bees, Apis mellifera var. adansonii, at some apiaries in Northern Nigeria, as well as to determine the quality of the honeys produced. Honeys were harvested and collected from four apiaries of the region, namely: Sarkin Dawa missionary bee farm, Taraba State; Eleeshuwa Bee Farm, Keffi, Nassarawa State; Bulus Beekeeper Apiaries, Kagarko, Kaduna State; and Mai Gwava Bee Farm, Kano State. These honeys were acetolysed for palynological microscopic analysis and subjected to standard treatment methods for the determination of their proximate composition and sugar profile. Fresh anthers of two dominantly represented plants in the honeys were then collected for the quantification of their pollen protein contents, using the micro-Kjeldahl procedure. A total of 30 pollen types were identified in the four honeys, and some of them were common to the honeys. A classification method for expressing pollen frequency class was employed: Senna cf. siamea, Terminalia cf. catappa, Mangifera indica, Parinari curatelifolia, Vitellaria paradoxa, Elaeis guineensis, Parkia biglobosa, Phyllantus muellerianus and Berlina grandiflora were classed as "Frequent" (16-45%), while the others were either "Rare" (3-15%) or "Sporadic" (less than 3%). Pollen protein levels of the two abundantly represented plants, Senna siamea (15.90 mg/ml) and Terminalia catappa (17.33 mg/ml), were found to be considerably low. The biochemical analyses revealed varying amounts of proximate composition, non-reducing sugar, and total sugar in the honeys. The results of this study indicate that pollen and nectar of the "Frequent" plants were preferentially foraged by honeybees in the apiaries. The estimated pollen protein contents of Senna siamea and Terminalia catappa were considerably low and not likely to have influenced their favourable visitation by honeybees. However, the relatively higher representation of Senna cf. siamea in the pollen spectrum might have resulted from its characteristic brightly coloured and well-scented flowers, aiding greater entomophily. Terminalia catappa, Mangifera indica, Elaeis guineensis, Vitellaria paradoxa, and Parkia biglobosa are typical food crops; hence they probably attracted the honeybees owing to the rich nutritional value of their fruits and seeds. Another possible reason for the greater entomophily of the favourably visited plants is certain nutritional constituents of their pollen and nectar, which were not investigated in this study. The nutritional composition of the honeys was observed to fall within the safe limits of international norms, as prescribed by the Codex Alimentarius Commission; thus, they are good honeys for human consumption. It is therefore imperative to adopt strategic conservation steps to ensure that these favourably visited plants are protected from indiscriminate anthropogenic activities, and to encourage apiarists in the country to establish their bee farms closer to these plants for optimal honey yield.
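A minimal sketch of the pollen frequency classification described above is given below in Python; the pollen counts are illustrative, and only the three classes named in the abstract (Frequent, Rare, Sporadic) are encoded.

```python
# Minimal sketch of the pollen frequency classes used in the abstract:
# "Frequent" (16-45%), "Rare" (3-15%), "Sporadic" (< 3%). Counts are illustrative.
def frequency_class(percentage: float) -> str:
    if percentage >= 16.0:
        return "Frequent"   # 16-45% in the abstract's scheme (higher values also map here)
    if percentage >= 3.0:
        return "Rare"       # 3-15%
    return "Sporadic"       # below 3%

pollen_counts = {"Senna cf. siamea": 182, "Mangifera indica": 95, "Parkia biglobosa": 12}
total = sum(pollen_counts.values())
for taxon, count in pollen_counts.items():
    pct = 100.0 * count / total
    print(f"{taxon:22s} {pct:5.1f}%  {frequency_class(pct)}")
```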

Keywords: honeybees, melissopalynology, preferentially foraged, nutritional, bee farms, proximally

Procedia PDF Downloads 249
137 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data are dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the trade-off between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, determined by the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large datasets faces computational and memory-related difficulties, which makes it necessary to carry out such operations on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. This operation is a fundamental building block of many science and engineering fields, such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we derive the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We address the problem of secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential, while the matrix Y is selected privately from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
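To make the coded-computation idea concrete, the sketch below implements a basic polynomial-coded distributed matrix multiplication with straggler tolerance in Python; it illustrates only the general recovery-threshold mechanism, not the secure/private PSGPD scheme proposed in the paper, and the block count, number of workers, and evaluation points are arbitrary choices.

```python
import numpy as np

# Minimal sketch of straggler-resilient coded matrix multiplication: X is split
# row-wise into m blocks, each worker receives a polynomial-coded combination of
# the blocks, and W = X @ Y can be recovered from ANY m of the n worker results
# (recovery threshold m).
rng = np.random.default_rng(1)
m, n_workers = 3, 5                      # m blocks, n_workers >= m servers
X = rng.standard_normal((6, 4))          # 6 rows -> three 2-row blocks
Y = rng.standard_normal((4, 4))
blocks = np.split(X, m, axis=0)

eval_points = np.arange(1, n_workers + 1, dtype=float)
# Worker i stores sum_j blocks[j] * x_i**j (evaluation of a matrix polynomial)
encoded = [sum(blocks[j] * x**j for j in range(m)) for x in eval_points]

# Each worker multiplies its coded block by Y; suppose only workers {0, 2, 4} finish.
results = {i: encoded[i] @ Y for i in (0, 2, 4)}

# Decode: interpolate the matrix polynomial's coefficients from m evaluations.
idx = sorted(results)
V = np.vander(eval_points[idx], N=m, increasing=True)    # Vandermonde system
stacked = np.stack([results[i] for i in idx])             # shape (m, 2, 4)
coeffs = np.einsum("ij,jkl->ikl", np.linalg.inv(V), stacked)
W_decoded = np.vstack(list(coeffs))                        # coefficient j equals blocks[j] @ Y

assert np.allclose(W_decoded, X @ Y)
print("recovered W = X @ Y from 3 of 5 workers")
```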

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 94
136 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel

Authors: Hamed Kalhori, Lin Ye

Abstract:

In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel remotely from the impact locations were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g. magnitude) of a stochastic force at a defined location, is extended here to identify both the location and the magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations, but that the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of the responses resulting from impact at each potential location. The problem can be categorized as under-determined (the number of sensors is less than the number of impact locations), even-determined (the number of sensors equals the number of impact locations), or over-determined (the number of sensors is greater than the number of impact locations). The under-determined case considered here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. Truncated singular value decomposition (TSVD) and Tikhonov regularization are applied independently to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and generalized cross validation (GCV) methods. In addition, the effect of different widths of the signal windows on the reconstructed force is examined. It is observed that the impact force generated by the instrumented impact hammer is sensitive to the impact location on the structure, with shapes ranging from a simple half-sine to a more complicated profile. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match well with the actual forces in terms of magnitude and duration.
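A minimal sketch of the core Tikhonov-regularized deconvolution step is given below in Python; it is a simplified single-location version with a synthetic impulse response, half-sine force, noise level, and regularization parameter, not the paper's extended multi-location formulation.

```python
import numpy as np
from scipy.linalg import toeplitz

# Minimal sketch of force reconstruction by Tikhonov-regularized deconvolution:
# the sensor signal is modeled as y = A f, where A is the convolution matrix
# built from an assumed impulse response h, and the force f is recovered from
# the regularized normal equations (A^T A + lambda I) f = A^T y.
rng = np.random.default_rng(0)
n, dt = 400, 1e-4
t = np.arange(n) * dt

h = np.exp(-t / 2e-3) * np.sin(2 * np.pi * 800 * t)            # assumed impulse response
f_true = np.where((t > 5e-3) & (t < 7e-3),                     # half-sine impact force
                  np.sin(np.pi * (t - 5e-3) / 2e-3), 0.0)

A = toeplitz(h, np.r_[h[0], np.zeros(n - 1)])                  # discrete convolution matrix
y_clean = A @ f_true
y = y_clean + 0.02 * np.std(y_clean) * rng.standard_normal(n)  # 2% measurement noise

lam = 1e-2          # regularization parameter; chosen via L-curve or GCV in practice
f_rec = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)    # Tikhonov solution

corr = np.corrcoef(f_rec, f_true)[0, 1]   # accuracy criterion used in the paper
print(f"correlation coefficient (reconstructed vs. actual force): {corr:.3f}")
```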

Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction

Procedia PDF Downloads 511