Search results for: vibration signals
245 Comparative Study of sLASER and PRESS Techniques in Magnetic Resonance Spectroscopy of Normal Brain
Authors: Shin Ku Kim, Yun Ah Oh, Eun Hee Seo, Chang Min Dae, Yun Jung Bae
Abstract:
Objectives: The commonly used PRESS technique in magnetic resonance spectroscopy (MRS) has the limitation of incomplete water suppression. The recently developed sLASER technique is known to suppress the water signal more effectively. However, no prior study has compared the two sequences in the normal human brain. In this study, we aimed to compare the performance of both techniques in brain MRS for the first time. Materials and methods: From January 2023 to July 2023, thirty healthy participants (mean age 38 years; 17 male, 13 female) without underlying neurological disease were enrolled in this study. All participants underwent single-voxel MRS using both the PRESS and sLASER techniques on 3T MRI. Two regions of interest were placed in the left medial thalamus and left parietal white matter (WM) by a single reader. SpectroView Analysis (SW5, Philips, Netherlands) provided automatic measurements, including the signal-to-noise ratio (SNR) and peak height of water, the N-acetylaspartate (NAA)-water, choline (Cho)-water, and creatine (Cr)-water ratios, and the NAA-Cr and Cho-Cr ratios. The measurements from the PRESS and sLASER techniques were compared using paired t-tests and Bland-Altman methods, and variability was assessed using coefficients of variation (CV). Results: The SNR and peak height of water were significantly lower with sLASER than with PRESS (left medial thalamus, sLASER SNR/peak height 2092±475/328±85 vs. PRESS 2811±549/440±105; left parietal WM, 5422±1016/872±196 vs. 7152±1305/1150±278; all P<0.001). Accordingly, the NAA-water, Cho-water, and Cr-water ratios and the NAA-Cr and Cho-Cr ratios were significantly higher with sLASER than with PRESS (all P<0.001). The variability of the NAA-water, Cho-water, and Cr-water ratios and of the Cho-Cr ratio in the left medial thalamus was lower with sLASER than with PRESS (CV, sLASER vs. PRESS: 19.9 vs. 58.1, 19.8 vs. 54.7, 20.5 vs. 43.9, and 11.5 vs. 16.2). Conclusion: The sLASER technique demonstrated enhanced background water suppression, resulting in higher metabolite signal ratios and reduced variability in brain metabolite measurements by MRS. Therefore, sLASER could offer a more precise and stable method for identifying brain metabolites.
Keywords: Magnetic resonance spectroscopy, Brain, sLASER, PRESS
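The coefficient of variation used above to compare measurement variability is simply the sample standard deviation normalized by the mean. A minimal sketch (the ratio values below are hypothetical illustrations, not the study's data):

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Hypothetical metabolite-ratio measurements from two sequences
press_ratios  = [0.9, 1.6, 0.7, 2.1, 1.2]     # widely scattered
slaser_ratios = [1.18, 1.25, 1.10, 1.30, 1.22]  # tightly clustered

cv_press  = coefficient_of_variation(press_ratios)
cv_slaser = coefficient_of_variation(slaser_ratios)
# A lower CV indicates less variability, i.e., a more stable measurement
```

A lower CV for sLASER, as reported above, is what "reduced variability" means quantitatively.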
Procedia PDF Downloads 46
244 Tracking the Effect of Ibutilide on Amplitude and Frequency of Fibrillatory Intracardiac Electrograms Using the Regression Analysis
Authors: H. Hajimolahoseini, J. Hashemi, D. Redfearn
Abstract:
Background: Catheter ablation is an effective therapy for symptomatic atrial fibrillation (AF). The intracardiac electrogram (IEGM) collected during this procedure contains valuable information that has not been explored to its full capacity. Novel processing techniques allow looking at these recordings from different perspectives, which can lead to improved therapeutic approaches. In our previous study, we showed that variation in amplitude, measured through Shannon entropy, could be used as an AF recurrence risk stratification factor in patients who received Ibutilide before the electrograms were recorded. The aim of this study is to further investigate the effect of Ibutilide on the characteristics of signals recorded from the left atrium (LA) of patients with persistent AF before and after administration of the drug. Methods: The IEGMs collected from different intra-atrial sites of 12 patients were studied and compared before and after Ibutilide administration. First, the before- and after-Ibutilide IEGMs recorded within a Euclidean distance of 3 mm in the LA were selected as pairs for comparison. For every selected pair of IEGMs, the probability distribution function (PDF) of the amplitude in the time domain and of the magnitude in the frequency domain was estimated using regression analysis. The PDF represents the relative likelihood of a variable falling within a specific range of values. Results: Our observations showed that in the time domain the PDF of amplitudes was fitted to a Gaussian distribution, while in the frequency domain it was fitted to a Rayleigh distribution. Our observations also revealed that after Ibutilide administration, the IEGMs had significantly narrower, short-tailed PDFs in both the time and frequency domains. Conclusion: This study shows that the PDFs of the IEGMs before and after administration of Ibutilide exhibit significantly different properties in both the time and frequency domains. Hence, by fitting the PDF of IEGMs in the time domain to a Gaussian distribution or in the frequency domain to a Rayleigh distribution, the effect of Ibutilide can easily be tracked using the statistics of the PDF (e.g., standard deviation), whereas this is difficult from the IEGM waveform itself.
Keywords: atrial fibrillation, catheter ablation, probability distribution function, time-frequency characteristics
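The Gaussian and Rayleigh fits described above can be sketched as follows. For brevity, this illustration estimates the distribution parameters from sample moments rather than by the regression analysis used in the study, and the synthetic samples merely stand in for IEGM amplitudes and spectral magnitudes; a narrower post-drug PDF would show up as a smaller fitted sd or sigma:

```python
import math
import random

def fit_gaussian(samples):
    """Moment estimates (mean, standard deviation) of a Gaussian PDF."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / (n - 1)
    return mu, math.sqrt(var)

def fit_rayleigh(samples):
    """Maximum-likelihood Rayleigh scale parameter: sigma^2 = mean(x^2) / 2."""
    return math.sqrt(sum(x * x for x in samples) / (2.0 * len(samples)))

random.seed(0)
# Synthetic stand-ins: zero-mean amplitudes (time domain) and Rayleigh
# magnitudes (frequency domain), sampled by inverse-transform sampling
amplitudes = [random.gauss(0.0, 1.5) for _ in range(20000)]
magnitudes = [2.0 * math.sqrt(-2.0 * math.log(1.0 - random.random()))
              for _ in range(20000)]

mu, sd = fit_gaussian(amplitudes)      # near (0.0, 1.5)
sigma = fit_rayleigh(magnitudes)       # near 2.0
```

Tracking the drug effect then reduces to comparing these fitted parameters before and after administration.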
Procedia PDF Downloads 159
243 Development of an Atmospheric Radioxenon Detection System for Nuclear Explosion Monitoring
Authors: V. Thomas, O. Delaune, W. Hennig, S. Hoover
Abstract:
Measurement of radioactive isotopes of atmospheric xenon is used to detect, locate, and identify confined nuclear tests as part of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In this context, the French Alternative Energies and Atomic Energy Commission (CEA) has developed a fixed device, the SPALAX process, to continuously measure the concentration of these fission products. During atmospheric transport, the radioactive xenon undergoes significant dilution between the source point and the measurement station. Given the distances between the fixed stations located all over the globe, the typical activity concentrations measured are near 1 mBq m⁻³. To overcome the constraints induced by atmospheric dilution, the development of a mobile detection system is in progress; this system will allow on-site measurements in order to confirm or refute a suspicious measurement detected by a fixed station. Furthermore, this system will use the beta/gamma coincidence measurement technique to drastically reduce the environmental background (which masks such activities). The detector prototype consists of a gas cell surrounded by two large silicon wafers, coupled with two square NaI(Tl) detectors. The gas cell has a sample volume of 30 cm³, and the silicon wafers are 500 µm thick with an active surface area of 3600 mm². To minimize leakage current, each wafer has been segmented into four independent silicon pixels. This cell is sandwiched between two low-background NaI(Tl) detectors (70 × 70 × 40 mm³ crystals). The expected minimum detectable concentration (MDC) for each radioxenon isotope is on the order of 1-10 mBq m⁻³. Three 4-channel digital acquisition modules (Pixie-NET) are used to process all the signals. Time synchronization is ensured by a dedicated PTP network using the IEEE 1588 Precision Time Protocol. We present this system from its simulation through to the laboratory tests.
Keywords: beta/gamma coincidence technique, low level measurement, radioxenon, silicon pixels
Procedia PDF Downloads 126
242 Seismic Response of Structure Using a Three Degree of Freedom Shake Table
Authors: Ketan N. Bajad, Manisha V. Waghmare
Abstract:
Earthquakes are among the biggest threats to civil engineering structures, costing billions of dollars and thousands of lives around the world every year. Various experimental techniques, such as pseudo-dynamic tests (a nonlinear structural dynamics technique), real-time pseudo-dynamic tests, and shaking table tests, can be employed to verify the seismic performance of structures. A shake table is a device used for shaking structural models or building components mounted on it; it simulates a seismic event using existing seismic data, closely reproducing earthquake inputs. This paper deals with the use of the shaking table test method to check the response of a structure subjected to an earthquake. Shake tables include vertical, horizontal, servo-hydraulic, and servo-electric types. The goal of this experiment is to perform seismic analysis of a civil engineering structure with the help of a three-degree-of-freedom (i.e., X, Y, and Z directions) shake table. A three-DOF shaking table is a useful experimental apparatus, as it reproduces a desired real-time acceleration signal for evaluating and assessing the seismic performance of a structure. This study proceeds with the design and erection of a 3-DOF shake table by a trial-and-error method. The table is designed for a capacity of up to 981 newtons. Further, to study the seismic response of a steel industrial building, a proportionately scaled-down model is fabricated and tested on the shake table. An accelerometer mounted on the model records the data. The experimental results are validated against results obtained from software. It is found that the model can be used to determine how the structure behaves in response to an applied earthquake motion, but it cannot be used to draw direct numerical conclusions (such as stiffness or deflection values) because of the many uncertainties involved in scaling a small-scale model. The model shows modal forms and gives rough deflection values. The experimental results demonstrate the shake table to be a highly effective method for the seismic assessment of structures.
Keywords: accelerometer, three degree of freedom shake table, seismic analysis, steel industrial shed
Procedia PDF Downloads 140
241 Seismic Assessment of Non-Structural Component Using Floor Design Spectrum
Authors: Amin Asgarian, Ghyslaine McClure
Abstract:
Experience in past earthquakes has clearly demonstrated the necessity of seismic design and assessment of Non-Structural Components (NSCs), particularly in post-disaster structures such as hospitals and power plants, which have to remain functional and operational. Meeting this objective is contingent upon proper seismic performance of both structural and non-structural components. Proper seismic design, analysis, and assessment of NSCs can be attained through the generation of a Floor Design Spectrum (FDS), in a similar fashion to the target spectrum for structural components. This paper presents a methodology to generate the FDS directly from the corresponding Uniform Hazard Spectrum (UHS) (i.e., the design spectrum for structural components). The methodology is based on experimental and numerical analysis of a database of 27 real Reinforced Concrete (RC) buildings located in Montreal, Canada. The buildings were tested by Ambient Vibration Measurements (AVM), and their dynamic properties were extracted and used as part of the approach. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, mostly designated as post-disaster/emergency shelters by the city of Montreal. The buildings are subjected to 20 seismic records compatible with the UHS of Montreal, and Floor Response Spectra (FRS) are developed for every floor in two horizontal directions, considering four damping ratios of NSCs (2, 5, 10, and 20% viscous damping). The generated FRS (approximately 132,000 curves) are studied statistically, and the methodology is proposed to generate the FDS directly from the corresponding UHS. The approach can generate the FDS for any floor level and NSC damping ratio. It captures the effects of dynamic interaction between the primary (structural) and secondary (NSC) systems and of higher and torsional modes of the primary structure. These are important improvements of this approach compared to conventional methods and code recommendations. The application of the proposed approach is demonstrated through two real case-study buildings: one low-rise and one medium-rise. The proposed approach can be used as a practical and robust tool for the seismic assessment and design of NSCs, especially in existing post-disaster structures.
Keywords: earthquake engineering, operational and functional components, operational modal analysis, seismic assessment and design
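Each ordinate of a floor response spectrum like those above is the peak response of a damped single-degree-of-freedom oscillator, swept over NSC frequencies, to the (floor or ground) acceleration history. A minimal sketch of one such ordinate, using the Newmark average-acceleration integrator as an assumed time-stepping scheme (the study's exact procedure is not specified here), with an illustrative harmonic base motion rather than the Montreal records:

```python
import math

def sdof_peak_abs_accel(ag, dt, f_n, zeta):
    """Peak absolute acceleration of a unit-mass linear SDOF oscillator
    under base acceleration history ag (Newmark average-acceleration method)."""
    wn = 2.0 * math.pi * f_n
    c, k = 2.0 * zeta * wn, wn * wn                  # unit mass: m = 1
    beta, gamma = 0.25, 0.5                          # average acceleration
    a1 = 1.0 / (beta * dt * dt) + gamma * c / (beta * dt)
    a2 = 1.0 / (beta * dt) + (gamma / beta - 1.0) * c
    a3 = (1.0 / (2.0 * beta) - 1.0) + dt * c * (gamma / (2.0 * beta) - 1.0)
    keff = k + a1
    u, v, acc = 0.0, 0.0, -ag[0]                     # relative motion state
    peak = 0.0
    for g in ag[1:]:
        p_eff = -g + a1 * u + a2 * v + a3 * acc      # effective load
        u_new = p_eff / keff
        acc_new = ((u_new - u) / (beta * dt * dt) - v / (beta * dt)
                   - (1.0 / (2.0 * beta) - 1.0) * acc)
        v += dt * ((1.0 - gamma) * acc + gamma * acc_new)
        u, acc = u_new, acc_new
        peak = max(peak, abs(acc + g))               # absolute = relative + ground
    return peak

# One spectral ordinate: 1 Hz harmonic base motion, 5% damping
dt = 0.002
ag = [0.3 * math.sin(2.0 * math.pi * 1.0 * i * dt) for i in range(10000)]
sa_resonant = sdof_peak_abs_accel(ag, dt, f_n=1.0, zeta=0.05)   # strongly amplified
sa_rigid = sdof_peak_abs_accel(ag, dt, f_n=50.0, zeta=0.05)     # ~ peak base accel
```

Repeating this over a grid of oscillator frequencies, floors, and damping ratios yields the family of FRS curves that the study then processes statistically.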
Procedia PDF Downloads 213
240 Measurement of Ionospheric Plasma Distribution over Myanmar Using Single Frequency Global Positioning System Receiver
Authors: Win Zaw Hein, Khin Sandar Linn, Su Su Yi Mon, Yoshitaka Goto
Abstract:
The Earth's ionosphere is located at altitudes of about 70 km to several hundred kilometers above the ground and is composed of ions and electrons, i.e., plasma. This plasma delays GPS (Global Positioning System) signals and reflects radio waves. The delay along the signal path from the satellite to the receiver is directly proportional to the total electron content (TEC) of the plasma, and this delay is the largest error factor in satellite positioning and navigation. Sounding observations from the top and bottom of the ionosphere have long been used to investigate ionospheric plasma. Recently, continuous monitoring of the TEC using networks of GNSS (Global Navigation Satellite System) observation stations, which were basically built for land surveying, has been conducted in several countries. However, these stations are equipped with multi-frequency receivers to estimate the plasma delay from its frequency dependence, and the cost of multi-frequency receivers is much higher than that of single-frequency GPS receivers. In this research, a single-frequency GPS receiver was used instead of expensive multi-frequency GNSS receivers to measure ionospheric plasma variations such as the vertical TEC distribution. In this measurement, a single-frequency u-blox GPS receiver was used to probe the ionospheric TEC. The observation site was at Mandalay Technological University in Myanmar. In the method, the ionospheric TEC distribution is represented by polynomial functions of latitude and longitude, and the parameters of the functions are determined by least-squares fitting on pseudorange data obtained at a known location, under a thin-layer ionosphere assumption. The validity of the method was evaluated against measurements obtained by the Japanese GNSS observation network GEONET. The measurement results using the single-frequency GPS receiver were compared with the results of dual-frequency measurements.
Keywords: ionosphere, global positioning system, GPS, ionospheric delay, total electron content, TEC
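The polynomial-plus-least-squares step described above can be sketched in miniature. This illustration fits only a first-order polynomial TEC(lat, lon) = c0 + c1·lat + c2·lon to noiseless synthetic vertical-TEC observations via the normal equations; the coordinates near Mandalay and the coefficient values are hypothetical, and the real method works from pseudorange data with a thin-layer mapping:

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_tec_plane(obs):
    """Least-squares fit of TEC(lat, lon) = c0 + c1*lat + c2*lon
    from (lat, lon, tec) observations via the normal equations."""
    rows = [[1.0, lat, lon] for lat, lon, _ in obs]
    y = [tec for _, _, tec in obs]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Atb = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    return solve_linear(AtA, Atb)

# Synthetic TEC field on a grid around Mandalay (~21.9 N, 96.1 E);
# hypothetical coefficients in TEC units
truth = (40.0, -0.8, 0.3)
obs = [(lat, lon, truth[0] + truth[1] * lat + truth[2] * lon)
       for lat in (20.0, 21.0, 22.0, 23.0) for lon in (95.0, 96.0, 97.0)]
c0, c1, c2 = fit_tec_plane(obs)        # recovers truth exactly (noiseless)
```

Higher polynomial orders just add columns to the design matrix; the fitting machinery is unchanged.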
Procedia PDF Downloads 137
239 Analysis of Cell Cycle Status in Radiation Non-Targeted Hepatoma Cells Using Flow Cytometry: Evidence of Dose Dependent Response
Authors: Sharmi Mukherjee, Anindita Chakraborty
Abstract:
Cellular irradiation incites complex responses, including arrest of cell cycle progression. This article accentuates the effects of radiation on the cell cycle status of radiation non-targeted cells. Human hepatoma HepG2 cells were exposed to increasing doses of γ radiation (1, 2, 4, 6 Gy), and their cell culture media was transferred to non-targeted HepG2 cells cultured in other Petri plates. These radiation non-targeted cells, cultured in the ICCM (irradiated cell conditioned media), were the bystander cells on which cell cycle analysis was performed using flow cytometry. An apparent decrease in the distribution of bystander cells in the G0/G1 phase was observed with increasing radiation doses up to 4 Gy, representing a linear relationship. This was accompanied by a gradual increase in the cellular distribution in the G2/M phase. Interestingly, the numbers of cells in the G2/M phase at 1 and 2 Gy irradiation were not significantly different from each other. However, the percentages of G2-phase cells at the 4 and 6 Gy doses were significantly higher than at the 2 Gy dose, indicating the IC50 dose to be between 2 and 4 Gy. Cell cycle arrest is an indirect indicator of genotoxic damage in cells. In this study, bystander stress signals transmitted through the cell culture media of irradiated cells disseminated radiation-induced DNA damage to the non-targeted cells, which resulted in arrest of cell cycle progression at the G2/M checkpoint. This implies that actual radiation biological effects represent a penumbra, with effects encompassing a larger area than the actual beam. This article highlights the existence of genotoxic damage as a bystander effect of γ rays in human hepatoma cells, shown by cell cycle analysis, and opens up avenues for the appraisal of bystander stress communication between tumor cells. An understanding of the underlying signaling mechanisms could be exploited to maximize the damaging effects of radiation with a minimum dose and thus has therapeutic applications.
Keywords: bystander effect, cell cycle, genotoxic damage, hepatoma
Procedia PDF Downloads 184
238 Structural Health Monitoring-Integrated Structural Reliability Based Decision Making
Authors: Caglayan Hizal, Kutay Yuceturk, Ertugrul Turker Uzun, Hasan Ceylan, Engin Aktas, Gursoy Turan
Abstract:
Monitoring concepts for structural systems have been investigated by researchers for decades, since such tools are convenient for planning interventions on structures. Despite considerable development in this regard, the efficient use of monitoring data in reliability assessment and prediction models still needs improvement. More specifically, reliability-based seismic risk assessment of engineering structures may play a crucial role in the post-earthquake decision-making process. After an earthquake, professionals can identify heavily damaged structures based on visual observations. Among the rest, it is hard to identify those with minimal visible signs of damage, even if they have experienced considerable structural degradation. Besides, visual observations are open to human interpretation, which makes the decision process controversial and thus less reliable. In this context, when a continuous monitoring system has previously been installed on the structure in question, this decision process can be completed rapidly and with higher confidence by means of the observed data. At this stage, the Structural Health Monitoring (SHM) procedure has an important role, since it makes it possible to estimate the system reliability based on a recursively updated mathematical model. Therefore, integrating an SHM procedure into the reliability assessment process is an important challenge because of the uncertainties that arise in the updated model from environmental, material, and earthquake-induced changes. In this context, this study presents a case study on SHM-integrated reliability assessment of continuously monitored, progressively damaged systems. The objective of this study is to obtain instant feedback on the current state of the structure after an extreme event, such as an earthquake, from the observed data rather than from visual inspections. Thus, the decision-making process after such an event can be carried out on a rational basis. In the near future, this could pave the way for the design of self-reporting structures that warn about their own condition after an extreme event.
Keywords: condition assessment, vibration-based SHM, reliability analysis, seismic risk assessment
Procedia PDF Downloads 143
237 Realizing Teleportation Using Black-White Hole Capsule Constructed by Space-Time Microstrip Circuit Control
Authors: Mapatsakon Sarapat, Mongkol Ketwongsa, Somchat Sonasang, Preecha Yupapin
Abstract:
We have designed and performed preliminary tests on a space-time control circuit using a two-level system circuit with a 4-5 cm diameter microstrip for realistic teleportation. The work begins by calculating the parameters that allow a circuit to use an alternating current (AC) at a specified frequency as the input signal. A method that causes electrons to move along the circuit perimeter starting at the speed of light was found satisfactory on the basis of wave-particle duality. It is able to establish a superluminal (faster-than-light) speed for the electron cloud in the middle of the circuit, creating a timeline and a propulsive force as well. The timeline is formed by the cancellation of time stretching and shrinking in the relativistic regime, in which absolute time has vanished. In fact, both black holes and white holes are created from the time signals at the beginning, where the speed of the electrons approaches the speed of light. They entangle together like a capsule until they reach the point where they collapse and cancel each other out, which is controlled by the frequency of the circuit. Therefore, we can apply this method to larger-scale circuits, such as potassium, from which the same method can be applied to form a system to teleport living things. In fact, the black hole is a hibernation environment that allows living things to live and travel to the teleportation destination, which can be controlled in position and time relative to the speed of light. When the capsule reaches its destination, the frequency is increased so that the black holes and white holes cancel each other out into a balanced environment. Therefore, life can safely teleport to the destination. The same system must exist at the origin and the destination, which could form a network. Moreover, the method can also be applied to space travel. The designed system will be tested on a small scale using a microstrip circuit that we can create in the laboratory on a limited budget and that can be used in both wired and wireless systems.
Keywords: quantum teleportation, black-white hole, time, timeline, relativistic electronics
Procedia PDF Downloads 75
236 A Cooperative Signaling Scheme for Global Navigation Satellite Systems
Authors: Keunhong Chae, Seokho Yoon
Abstract:
Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for location services, calling for a more efficient signaling scheme among the satellites in the overall GNSS network. Spatial diversity is one efficient signaling scheme in that it improves the network throughput; however, it requires multiple antennas, which could significantly increase the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, in which virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmitting satellite of interest and modeling the neighboring satellites as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate asynchronously, and thus the overall performance of the GNSS network can degrade severely. To tackle this problem, several modified cooperative signaling schemes have been proposed; however, all of them are difficult to implement due to the signal decoding required at the relay nodes. Although the implementation at the relay nodes could be made somewhat simpler by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient to implement the operations of the relay nodes at the source node, which has more resources than the relay nodes. So, in this paper, we propose a novel cooperative signaling scheme in which the data signals are combined in a unique way at the source node, obviating the need for complex operations such as signal decoding, time reversal, and conjugation at the relay nodes. The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while reducing the complexity at the relay nodes significantly. Acknowledgment: This work was supported by the National GNSS Research Center program of the Defense Acquisition Program Administration and the Agency for Defense Development.
Keywords: global navigation satellite network, cooperative signaling, data combining, nodes
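The virtual-MIMO idea that cooperative signaling emulates is usually built on the classical Alamouti space-time block code. The paper's source-side combining is its own contribution and is not reproduced here; the sketch below shows only the standard two-antenna Alamouti baseline (noiseless, flat fading), where one "antenna" would be the source satellite and the other a relay:

```python
def alamouti_encode(s1, s2):
    """Alamouti STBC: two symbols over two slots from two (virtual) antennas.
    Slot 1 sends (s1, s2); slot 2 sends (-s2*, s1*)."""
    return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

def alamouti_combine(r1, r2, h1, h2):
    """Linear receiver combining; recovers both symbols with diversity
    gain |h1|^2 + |h2|^2 (exact here because no noise is added)."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    est1 = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
    est2 = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g
    return est1, est2

# Flat-fading gains from the two "antennas" to the user receiver
h1, h2 = 0.8 - 0.3j, -0.5 + 1.1j
s1, s2 = 1 - 1j, -1 + 1j                      # QPSK-like symbols
(x11, x12), (x21, x22) = alamouti_encode(s1, s2)
r1 = h1 * x11 + h2 * x12                      # slot-1 observation
r2 = h1 * x21 + h2 * x22                      # slot-2 observation
est1, est2 = alamouti_combine(r1, r2, h1, h2)  # equals (s1, s2)
```

The asynchrony problem described above arises precisely because the two columns of this code are assumed to arrive time-aligned, which relays cannot guarantee.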
Procedia PDF Downloads 280
235 Examining Statistical Monitoring Approach against Traditional Monitoring Techniques in Detecting Data Anomalies during Conduct of Clinical Trials
Authors: Sheikh Omar Sillah
Abstract:
Introduction: Monitoring is an important means of ensuring the smooth implementation and quality of clinical trials. For many years, traditional site monitoring approaches have been critical in detecting data errors, but they are not optimal for identifying fabricated and implanted data or non-random data distributions that may significantly invalidate study results. The objective of this paper was to provide recommendations, based on best statistical monitoring practices, for detecting data-integrity issues suggestive of fabrication and implantation early in study conduct, to allow the implementation of meaningful corrective and preventive actions. Methodology: Electronic bibliographic databases (Medline, Embase, PubMed, Scopus, and Web of Science) were used for the literature search, and both qualitative and quantitative studies were sought. Search results were uploaded into the EPPI-Reviewer software, and only publications written in English from 2012 onward were included in the review. Gray literature not considered to present reproducible methods was excluded. Results: A total of 18 peer-reviewed publications were included in the review. The publications demonstrated that traditional site monitoring techniques are not efficient in detecting data anomalies. By specifying project-specific parameters such as laboratory reference range values, visit schedules, etc., with appropriate interactive data monitoring, statistical monitoring can offer study teams early signals of data anomalies. The review further revealed that statistical monitoring is useful for identifying unusual data patterns that might reveal issues affecting data integrity or potentially impacting study participants' safety. However, subjective measures may not be good candidates for statistical monitoring. Conclusion: The statistical monitoring approach requires a combination of education, training, and experience sufficient to implement its principles in detecting data anomalies in the statistical aspects of a clinical trial.
Keywords: statistical monitoring, data anomalies, clinical trials, traditional monitoring
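A minimal sketch of the central statistical monitoring idea discussed above: compare a per-site summary statistic against the distribution across sites and flag sites that deviate. The variable, values, and threshold are hypothetical, and a production system would use robust estimators (median/MAD) and multiplicity control rather than this plain z-score:

```python
import statistics

def flag_outlier_sites(site_means, z_threshold):
    """Flag sites whose summary statistic deviates from the across-site
    mean by more than z_threshold standard deviations; returns the
    z-score of each flagged site."""
    values = list(site_means.values())
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return {site: (value - mu) / sd
            for site, value in site_means.items()
            if abs(value - mu) > z_threshold * sd}

# Hypothetical per-site mean systolic blood pressure (mmHg); the shifted
# values at site "E" are the kind of pattern central statistical
# monitoring would surface for follow-up.
bp_by_site = {"A": 128.1, "B": 127.5, "C": 129.0, "D": 128.4, "E": 120.0}
flags = flag_outlier_sites(bp_by_site, z_threshold=1.5)
```

As the review notes, this works for objective, parameterizable measures; subjective assessments do not lend themselves to such thresholds.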
Procedia PDF Downloads 77
234 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning
Authors: T. Bryan, V. Kepuska, I. Kostnaic
Abstract:
A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using "basis vectors" that are learned from the audio data itself. The basis vectors are shown to give higher data compression and better signal-to-noise enhancement than the Gabor and gammatone "seed atoms" used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using "envelope samples" of windowed segments of the audio data. The envelope samples are extracted by identifying segments of the audio data that are locally coherent with the Gabor or gammatone seed atoms, found by matching pursuit, and are formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from the Gabor and gammatone seed atoms. SNR-data compression curves are generated for speech signals as well as for early American music recordings. The basis vectors are shown to have higher denoising capability at data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors; this display is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the best denoising basis vectors.
Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit
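The matching pursuit step that identifies locally coherent segments can be sketched as follows. For brevity, the dictionary here is a handful of generic unit-norm sinusoidal atoms standing in for the Gabor/gammatone seed atoms, and the signal is built from two of them so the greedy decomposition recovers it exactly:

```python
import math

def matching_pursuit(signal, atoms, n_iter):
    """Greedy atomic decomposition: repeatedly subtract the unit-norm atom
    with the largest inner product with the current residual."""
    residual = list(signal)
    decomposition = []
    for _ in range(n_iter):
        scores = [sum(a * r for a, r in zip(atom, residual)) for atom in atoms]
        best = max(range(len(atoms)), key=lambda i: abs(scores[i]))
        coeff = scores[best]
        residual = [r - coeff * a for r, a in zip(residual, atoms[best])]
        decomposition.append((best, coeff))
    return decomposition, residual

# Tiny dictionary of unit-norm sinusoidal "seed atoms"
n = 64
atoms = []
for f in (2, 5, 9):
    raw = [math.sin(2 * math.pi * f * i / n) for i in range(n)]
    norm = math.sqrt(sum(x * x for x in raw))
    atoms.append([x / norm for x in raw])

# A signal composed of atoms 0 and 2; MP recovers both index and coefficient
signal = [3.0 * a0 - 1.5 * a2 for a0, a2 in zip(atoms[0], atoms[2])]
decomp, residual = matching_pursuit(signal, atoms, n_iter=2)
```

In the paper's pipeline, the atom-coefficient pairs found this way are what drive the envelope-sample extraction that trains the SAE.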
Procedia PDF Downloads 253
233 Rapid Formation of Ortho-Boronoimines and Derivatives for Reversible and Dynamic Bioconjugation Under Physiological Conditions
Authors: Nicholas C. Rose, Christopher D. Spicer
Abstract:
The regeneration of damaged or diseased tissues would provide an invaluable therapeutic tool in biological research and medicine. Cells must be provided with a number of different biochemical signals, delivered through complex signaling networks, in order to form mature tissue, and these networks are difficult to recreate in synthetic materials. The ability to attach and detach bioactive proteins from a material in an iterative and dynamic manner would therefore be a powerful way to mimic natural biochemical signaling cascades for tissue growth. We propose to reversibly attach these bioactive proteins using ortho-boronoimine (oBI) linkages and related derivatives, formed by the reaction of an ortho-boronobenzaldehyde with a nucleophilic amine derivative. To enable the use of oBIs for biomaterial modification, we have studied the binding and cleavage processes in precise detail in small-molecule model systems. A panel of oBI complexes has been synthesized and screened using a novel Förster resonance energy transfer (FRET) assay with a cyanine dye FRET pair (Cy3 and Cy5) to identify the most reactive boronoaldehyde/amine nucleophile pairs. Upon conjugation of the dyes, FRET occurs under Cy3 excitation, and the resulting ratio of Cy3:Cy5 emission correlates directly with conversion. Reaction kinetics and equilibria can be accurately quantified for reactive pairs, with the dissociation constants (KD) of oBI derivatives in water spanning nine orders of magnitude (10⁻² to 10⁻¹¹ M). These studies have provided a better understanding of oBI linkages, which we hope to exploit to reversibly attach bioconjugates to materials. The long-term aim of the project is to develop a modular biomaterial platform that can help combat chronic diseases such as osteoarthritis, heart disease, and chronic wounds by providing cells with potent biological stimuli for tissue engineering.
Keywords: dynamic, bioconjugation, boronoimine, rapid, physiological
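The dissociation constants quoted above follow from the measured conversion through the standard equilibrium expression K_D = [A][B]/[AB]. A sketch with hypothetical concentrations (in the actual assay, the conversion would be read off the Cy3:Cy5 emission ratio rather than assumed):

```python
def dissociation_constant(total_a, total_b, conversion):
    """K_D = [A][B] / [AB] at equilibrium, from total concentrations (M)
    and the measured fraction of the limiting partner A bound in complex AB."""
    ab = conversion * total_a            # complex concentration
    free_a = total_a - ab
    free_b = total_b - ab
    return free_a * free_b / ab

# Hypothetical pair: 10 uM aldehyde + 10 uM amine reaching 90% conversion
kd = dissociation_constant(10e-6, 10e-6, 0.90)   # ~1.1e-7 M
```

A K_D near 10⁻⁷ M sits comfortably inside the 10⁻² to 10⁻¹¹ M span reported for the oBI panel; tighter binders give smaller K_D at the same concentrations.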
Procedia PDF Downloads 96
232 Photocatalytic Active Surface of LWSCC Architectural Concretes
Authors: P. Novosad, L. Osuska, M. Tazky, T. Tazky
Abstract:
Current trends in the building industry are oriented towards the reduction of maintenance costs and the ecological benefits of buildings or building materials. Surface treatment of building materials with photocatalytic active titanium dioxide added into concrete can offer a good solution in this context. Architectural concrete has one disadvantage – dust and fouling keep settling on its surface, diminishing its aesthetic value and increasing maintenance e costs. Concrete surface – silicate material with open porosity – fulfils the conditions of effective photocatalysis, in particular, the self-cleaning properties of surfaces. This modern material is advantageous in particular for direct finishing and architectural concrete applications. If photoactive titanium dioxide is part of the top layers of road concrete on busy roads and the facades of the buildings surrounding these roads, exhaust fumes can be degraded with the aid of sunshine; hence, environmental load will decrease. It is clear that options for removing pollutants like nitrogen oxides (NOx) must be found. Not only do these gases present a health risk, they also cause the degradation of the surfaces of concrete structures. The photocatalytic properties of titanium dioxide can in the long term contribute to the enhanced appearance of surface layers and eliminate harmful pollutants dispersed in the air, and facilitate the conversion of pollutants into less toxic forms (e.g., NOx to HNO3). This paper describes verification of the photocatalytic properties of titanium dioxide and presents the results of mechanical and physical tests on samples of architectural lightweight self-compacting concretes (LWSCC). The very essence of the use of LWSCC is their rheological ability to seep into otherwise extremely hard accessible or inaccessible construction areas, or sections thereof where concrete compacting will be a problem, or where vibration is completely excluded. 
They are also able to create a solid monolithic element with a large variety of shapes; at the same time, the concrete will meet the requirements imposed by both chemical aggression and the influences of the surrounding environment. Due to their viscosity, LWSCCs are able to imprint the formwork elements into their structure and thus create high-quality lightweight architectural concretes.
Keywords: photocatalytic concretes, titanium dioxide, architectural concretes, Lightweight Self-Compacting Concretes (LWSCC)
Procedia PDF Downloads 295
231 Personalizing Human Physical Life Routines Recognition over Cloud-based Sensor Data via AI and Machine Learning
Authors: Kaushik Sathupadi, Sandesh Achar
Abstract:
Pervasive computing is a growing research field that aims to recognize human physical life routines (HPLR) based on body-worn sensors such as MEMS sensor-based technologies. The use of these technologies for human activity recognition is progressively increasing. On the other hand, personalizing human life routines using numerous machine-learning techniques has always been an intriguing topic. Various methods have demonstrated the ability to recognize basic movement patterns; however, they still need improvement to anticipate the dynamics of human living patterns. This study introduces state-of-the-art techniques for recognizing static and dynamic patterns and forecasting those challenging activities from multi-fused sensors. Furthermore, numerous MEMS signals are extracted from one self-annotated IM-WSHA dataset and two benchmarked datasets. First, the acquired raw data are filtered with z-normalization and denoising methods. Then, statistical, local binary pattern, auto-regressive model, and intrinsic time-scale decomposition features are adopted for feature extraction from different domains. Next, the acquired features are optimized using maximum relevance and minimum redundancy (mRMR). Finally, an artificial neural network is applied to analyze the whole system's performance. As a result, we attained a 90.27% recognition rate for the self-annotated dataset, while HARTH and KU-HAR achieved 83% on nine living activities and 90.94% on 18 static and dynamic routines. Thus, the proposed HPLR system outperformed other state-of-the-art systems when evaluated against other methods in the literature.
Keywords: artificial intelligence, machine learning, gait analysis, local binary pattern (LBP), statistical features, micro-electro-mechanical systems (MEMS), maximum relevance and minimum redundancy (mRMR)
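The preprocessing and feature-selection steps described above (z-normalization followed by greedy mRMR ranking) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses absolute Pearson correlation as a stand-in for the mutual-information relevance and redundancy terms of mRMR.

```python
import numpy as np

def z_normalize(signal):
    """Z-normalize a 1-D sensor signal (zero mean, unit variance)."""
    s = np.asarray(signal, dtype=float)
    std = s.std()
    return (s - s.mean()) / std if std > 0 else s - s.mean()

def mrmr_rank(features, labels, k):
    """Greedy mRMR: relevance = |corr(feature, label)|,
    redundancy = mean |corr| with already-selected features."""
    n_feat = features.shape[1]
    relevance = np.array([abs(np.corrcoef(features[:, j], labels)[0, 1])
                          for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]       # most relevant feature first
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(features[:, j],
                                                  features[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy    # reward relevance, punish overlap
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

With a feature that duplicates an already-selected one, the redundancy penalty causes mRMR to prefer a less correlated feature instead.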
Procedia PDF Downloads 22
230 Relation of Optimal Pilot Offsets in the Shifted Constellation-Based Method for the Detection of Pilot Contamination Attacks
Authors: Dimitriya A. Mihaylova, Zlatka V. Valkova-Jarvis, Georgi L. Iliev
Abstract:
One possible approach for maintaining the security of communication systems relies on Physical Layer Security mechanisms. However, in wireless time division duplex systems, where uplink and downlink channels are reciprocal, the channel estimation procedure is exposed to attacks known as pilot contamination, whose aim is to have an enhanced data signal sent to the malicious user. The Shifted 2-N-PSK method involves two random legitimate pilots in the training phase, each belonging to a constellation shifted from the original N-PSK symbols by a certain angle. In this paper, the legitimate pilots’ offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends on the relation between the shift angles rather than their specific values, the optimal interconnection between the two legitimate constellations is investigated. The results show that no regularity exists in the relation between the pilot contamination attack (PCA) detection probability and the choice of offset values. Therefore, an adversary who aims to obtain the exact offset values can only employ a brute-force attack, but the large number of possible combinations for the shifted constellations makes such an attack difficult to mount successfully. For this reason, the number of optimal shift value pairs is also studied for both 100% and 98% probabilities of detecting pilot contamination attacks. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from the signals in other cells should also be taken into account. Therefore, the inter-cell interference impact on the performance of the method is investigated by means of a large number of simulations.
The results show that the detection probability of the Shifted 2-N-PSK method decreases as the signal-to-interference-plus-noise ratio decreases.
Keywords: channel estimation, inter-cell interference, pilot contamination attacks, wireless communications
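A toy sketch of the shifted-constellation idea discussed above (the helper names are hypothetical; the paper's actual detector is not reproduced here): each legitimate pilot is drawn from an N-PSK constellation rotated by its own offset, and what matters for detection is the relation between the two offsets, here measured as their angular gap reduced modulo the constellation step.

```python
import numpy as np

def shifted_npsk(n, offset_deg):
    """N-PSK constellation rotated by offset_deg from the standard symbols."""
    angles = 2 * np.pi * np.arange(n) / n + np.deg2rad(offset_deg)
    return np.exp(1j * angles)

def offset_gap_deg(offset_a, offset_b, n):
    """Angular gap between two shifted N-PSK constellations, reduced
    modulo the 360/n symbol step; equal offsets give a gap of zero."""
    step = 360.0 / n
    diff = (offset_a - offset_b) % step
    return min(diff, step - diff)
```

Because only the gap modulo the symbol step matters, an attacker brute-forcing the offsets must search the full continuum of offset pairs, which illustrates why the abstract describes such an attack as hard to mount.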
Procedia PDF Downloads 217
229 Analysis of Biomarkers Intractable Epileptogenic Brain Networks with Independent Component Analysis and Deep Learning Algorithms: A Comprehensive Framework for Scalable Seizure Prediction with Unimodal Neuroimaging Data in Pediatric Patients
Authors: Bliss Singhal
Abstract:
Epilepsy is a prevalent neurological disorder affecting approximately 50 million individuals worldwide and 1.2 million Americans. There exist millions of pediatric patients with intractable epilepsy, a condition in which seizures fail to come under control. The occurrence of seizures can result in physical injury, disorientation, unconsciousness, and additional symptoms that could impede children's ability to participate in everyday tasks. Predicting seizures can help parents and healthcare providers take precautions, prevent risky situations, and mentally prepare children to minimize the anxiety and nervousness associated with the uncertainty of a seizure. This research proposes a comprehensive framework to predict seizures in pediatric patients by evaluating machine learning algorithms on unimodal neuroimaging data consisting of electroencephalogram signals. Bandpass filtering and independent component analysis proved to be effective in reducing the noise and artifacts in the dataset. The performance of various machine learning algorithms is evaluated on important metrics such as accuracy, precision, specificity, sensitivity, F1 score and MCC. The results show that the deep learning algorithms are more successful in predicting seizures than logistic regression and k-nearest neighbors. The recurrent neural network (RNN) gave the highest precision and F1 score, long short-term memory (LSTM) outperformed the RNN in accuracy, and the convolutional neural network (CNN) resulted in the highest specificity. This research has significant implications for healthcare providers in proactively managing seizure occurrence in pediatric patients, potentially transforming clinical practices and improving pediatric care.
Keywords: intractable epilepsy, seizure, deep learning, prediction, electroencephalogram channels
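The evaluation metrics listed above (accuracy, precision, specificity, sensitivity, F1 score, MCC) all follow directly from the binary confusion matrix; a minimal reference implementation:

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Seizure-prediction metrics from a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)          # recall / true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = ((tp * tn - fp * fn) /
           math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "specificity": specificity,
            "f1": f1, "mcc": mcc}
```

MCC is the most informative single number here for imbalanced seizure data, since it only rewards models that do well on both classes.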
Procedia PDF Downloads 84
228 Quantifying the Impact of Intermittent Signal Priority given to BRT on Ridership and Climate-A Case Study of Ahmadabad
Authors: Smita Chaudhary
Abstract:
Traffic in India is largely uncontrolled and characterized by chaotic conditions in which lane discipline is not followed. Bus Rapid Transit (BRT) has emerged as a viable option to enhance transportation capacity and provide increased levels of mobility and accessibility. At present, many signalized intersections in Ahmadabad face congestion and delay due to the transit (BRT) lanes. Most of these intersections, despite being signalized, are operated manually because of the conflict between BRT buses and heterogeneous traffic. Though the BRTS in Ahmadabad has an exclusive lane of its own, this brings certain limitations that Ahmadabad is facing right now. At many intersections, due to these conflicts, interference, and congestion, both heterogeneous traffic and transit buses suffer remarkable delays of 3-4 minutes at each intersection, which has become an issue of great concern. There is no provision for BRT bus priority, so the existing signals play a minimal role in managing the traffic, which ultimately calls for manual operation. Daily ridership of the BRTS has fallen immensely because commuters no longer find this transit mode time-saving in their routines; the fall in ridership ultimately leads to an increased number of private vehicles, and the idling of vehicles at intersections causes air and noise pollution. In order to bring back these commuters, transit facilities need to be improved. A classified volume count survey and a travel time and delay survey were conducted, and a revised signal design was prepared for the whole study stretch, comprising three intersections and one roundabout; one intersection was then simulated in order to see the effect of giving priority to BRT on side-street queue length and travel time for heterogeneous traffic.
This paper aims to suggest recommendations for the signal cycle, the introduction of intermittent priority for transit buses, and the simulation of an intersection in the study stretch with the proposed signal cycle using VISSIM, in order to make this transit amenity feasible and attractive for commuters in Ahmadabad.
Keywords: BRT, priority, ridership, signal, VISSIM
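For context, signal cycle redesign of the kind proposed here is commonly based on Webster's optimum cycle formula, C0 = (1.5L + 5) / (1 - Y), where L is the total lost time per cycle (s) and Y the sum of critical flow ratios. The abstract does not state which method was used, so this is only an illustrative sketch of that standard formula:

```python
def webster_cycle(lost_time_s, critical_flow_ratios):
    """Webster's optimum cycle length C0 = (1.5L + 5) / (1 - Y) in seconds.

    lost_time_s: total lost time L per cycle.
    critical_flow_ratios: critical v/s ratio of each signal phase.
    """
    y = sum(critical_flow_ratios)
    if y >= 1.0:
        # demand exceeds capacity; no finite cycle clears the queues
        raise ValueError("intersection oversaturated (Y >= 1)")
    return (1.5 * lost_time_s + 5.0) / (1.0 - y)
```

For example, 12 s of lost time with critical flow ratios 0.30, 0.25 and 0.20 gives a 92 s optimum cycle.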
Procedia PDF Downloads 441
227 A General Framework for Measuring the Internal Fraud Risk of an Enterprise Resource Planning System
Authors: Imran Dayan, Ashiqul Khan
Abstract:
Internal corporate fraud, which is fraud carried out by internal stakeholders of a company, affects the well-being of the organisation just like its external counterpart. Even if such an act is carried out for the short-term benefit of a corporation, the act is ultimately harmful to the entity in the long run. Internal fraud is often carried out by relying upon aberrations from usual business processes. Business processes are the lifeblood of a company in modern managerial context. Such processes are developed and fine-tuned over time as a corporation grows through its life stages. Modern corporations have embraced technological innovations into their business processes, and Enterprise Resource Planning (ERP) systems being at the heart of such business processes is a testimony to that. Since ERP systems record a huge amount of data in their event logs, the logs are a treasure trove for anyone trying to detect any sort of fraudulent activities hidden within the day-to-day business operations and processes. This research utilises the ERP systems in place within corporations to assess the likelihood of prospective internal fraud through developing a framework for measuring the risks of fraud through Process Mining techniques and hence finds risky designs and loose ends within these business processes. This framework helps not only in identifying existing cases of fraud in the records of the event log, but also signals the overall riskiness of certain business processes, and hence draws attention for carrying out a redesign of such processes to reduce the chance of future internal fraud while improving internal control within the organisation. 
The research adds value by applying the concepts of Process Mining to the analysis of data from modern business process records, namely ERP event logs, and develops a framework that should be useful to internal stakeholders for strengthening internal control, as well as providing external auditors with a tool of use in cases of suspicion. The research proves its usefulness through case studies conducted on large corporations with complex business processes and an ERP in place.
Keywords: enterprise resource planning, fraud risk framework, internal corporate fraud, process mining
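The core of such a framework, checking event-log traces against the expected business process, can be illustrated with a toy conformance check. The process map and activity names below are hypothetical examples, not taken from the paper:

```python
# Hypothetical normative purchase-to-pay process: each activity maps to the
# set of activities allowed to follow it. Any other transition in a trace
# is a deviation worth a fraud-risk review.
ALLOWED = {
    "create_po": {"approve_po"},
    "approve_po": {"receive_goods"},
    "receive_goods": {"receive_invoice"},
    "receive_invoice": {"pay_invoice"},
}

def deviations(trace):
    """Return the (from, to) steps in an event-log trace that break the model."""
    return [(a, b) for a, b in zip(trace, trace[1:])
            if b not in ALLOWED.get(a, set())]
```

A trace that jumps straight from purchase-order creation to payment would be flagged, which is exactly the kind of loose end the framework is meant to surface.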
Procedia PDF Downloads 335
226 Jointly Optimal Statistical Process Control and Maintenance Policy for Deteriorating Processes
Authors: Lucas Paganin, Viliam Makis
Abstract:
With the advent of globalization, the market competition has become a major issue for most companies. One of the main strategies to overcome this situation is the quality improvement of the product at a lower cost to meet customers’ expectations. In order to achieve the desired quality of products, it is important to control the process to meet the specifications, and to implement the optimal maintenance policy for the machines and the production lines. Thus, the overall objective is to reduce process variation and the production and maintenance costs. In this paper, an integrated model involving Statistical Process Control (SPC) and maintenance is developed to achieve this goal. Therefore, the main focus of this paper is to develop the jointly optimal maintenance and statistical process control policy minimizing the total long run expected average cost per unit time. In our model, the production process can go out of control due to either the deterioration of equipment or other assignable causes. The equipment is also subject to failures in any of the operating states due to deterioration and aging. Hence, the process mean is controlled by an Xbar control chart using equidistant sampling epochs. We assume that the machine inspection epochs are the times when the control chart signals an out-of-control condition, considering both true and false alarms. At these times, the production process will be stopped, and an investigation will be conducted not only to determine whether it is a true or false alarm, but also to identify the causes of the true alarm, whether it was caused by the change in the machine setting, by other assignable causes, or by both. If the system is out of control, the proper actions will be taken to bring it back to the in-control state. At these epochs, a maintenance action can be taken, which can be no action, or preventive replacement of the unit. 
When the equipment is in the failure state, a corrective maintenance action is performed, which can be minimal repair or replacement of the machine, and the process is brought back to the in-control state. A semi-Markov decision process (SMDP) framework is used to formulate and solve the joint control problem. A numerical example is developed to demonstrate the effectiveness of the control policy.
Keywords: maintenance, semi-Markov decision process, statistical process control, Xbar control chart
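The Xbar chart signaling logic used in the model can be sketched as follows; this is a minimal 3-sigma version with the target mean and the standard deviation of the subgroup mean assumed known:

```python
def xbar_signals(subgroup_means, target, sigma_xbar):
    """3-sigma Xbar chart: control limits plus the indices of subgroups
    whose mean falls outside them (the chart's out-of-control signals)."""
    ucl = target + 3.0 * sigma_xbar   # upper control limit
    lcl = target - 3.0 * sigma_xbar   # lower control limit
    out = [i for i, m in enumerate(subgroup_means) if m > ucl or m < lcl]
    return lcl, ucl, out
```

In the paper's policy, the sampling epochs at which this list is non-empty are exactly the inspection epochs, where a true/false-alarm investigation and possibly maintenance follow.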
Procedia PDF Downloads 91
225 The Role of Middle Managers SBU's in Context of Change: Sense-Making Approach
Authors: Hala Alioua, Alberic Tellier
Abstract:
This paper is designed to spotlight research on corporate strategic planning by emphasizing the role of middle managers of SBUs and related issues such as the context of vision change. Previous research on strategic vision has focused principally on the SME, with relatively limited consideration given to the role of middle managers of SBUs in the context of change. This research project uses a single case study, built through 2.5 years of immersion in the field, a qualitative method, and an abductive approach. The entity analyzed is a subsidiary of a multinational company headquartered in Germany, specialized in manufacturing automotive equipment. The "Delta Company" is a French manufacturing plant that has undergone numerous changes over the past three years. The two major strategic changes that have had a significant impact on the Delta plant are the strengthening of its core business through a « lead plant strategy » in 2011 and the implementation of a new strategic vision in 2014. These consecutive changes affect the purpose and mission of the middle managers. The plant managers ask the following questions: How do the middle managers make sense of the corporate strategic planning imposed by the parent company? How do they appropriate the new vision and translate it into actions on the ground? We chose the individual interview technique with open-ended questions as the source of data collection. We first carried out an exploratory approach by interviewing 8 members of the management committee and 19 heads of services. The first findings and results show that a divergence of opinions and interpretations of the corporate strategic planning exists among organization members, and that there are difficulties in making sense of and interpreting the signals of the environment. The lead plant strategy enables new projects which ensure the workload of the Delta Company.
Nevertheless, it creates tension and stress among the middle managers because it provokes a lack of resources to the detriment of their main jobs in the manufacturing plant. The middle managers do not have a clear vision, and they wonder whether the new strategic vision means more autonomy and less support from the group.
Keywords: change, middle managers, vision, sensemaking
Procedia PDF Downloads 401
224 Chemotrophic Signal Exchange between the Host Plant Helianthemum sessiliflorum and Terfezia boudieri
Authors: S. Ben-Shabat, T. Turgeman, O. Leubinski, N. Roth-Bejerano, V. Kagan-Zur, Y. Sitrit
Abstract:
The ectomycorrhizal (ECM) desert truffle Terfezia boudieri produces edible fruit bodies and forms a symbiosis with its host plant Helianthemum sessiliflorum (Cistaceae) in the Negev desert of Israel. The symbiosis is vital for both partners' survival under desert conditions. Under desert habitat conditions, ECM fungi must form the symbiosis before entering the dry season. To secure a successful encounter, both partners have, in the course of evolution, evolved a special signal exchange that facilitates recognition. Members of the Cistaceae family serve as host plants for many important truffles. Conceivably, during evolution a common molecule present in Cistaceae plants was recruited to facilitate a successful encounter with ectomycorrhizas. Arbuscular mycorrhizal (AM) fungi are promiscuous in their host preferences; in contrast, ECM fungi show specificity to host plants. Accordingly, we hypothesize that H. sessiliflorum secretes a chemotrophic signaling molecule that is common to plants hosting ECM fungi belonging to the Pezizales. However, thus far no signaling molecules have been identified in ECM fungi. We developed a bioassay for chemotrophic activity. Fractionation of root exudates revealed a substance with chemotrophic activity and a molecular mass of 534. Following the above concept, screening the transcriptome of Terfezia grown under chemoattraction revealed genes showing high homology to G protein-coupled receptors of plant pathogens involved in positive chemotaxis and chemotaxis suppression. This study aimed to identify the active molecule using analytical methods (LC-MS, NMR, etc.). This should contribute to our understanding of how ECM fungi communicate with their hosts in the rhizosphere. In line with the ability of Terfezia to also form an endomycorrhizal symbiosis like AM fungi, analysis of the mechanisms may likewise be applicable to AM fungi.
Developing methods to manipulate fungal growth by the chemoattractant can open new ways to improve the inoculation of plants.
Keywords: chemotrophic signal, Helianthemum sessiliflorum, Terfezia boudieri, ECM
Procedia PDF Downloads 409
223 Proportional and Integral Controller-Based Direct Current Servo Motor Speed Characterization
Authors: Adel Salem Bahakeem, Ahmad Jamal, Mir Md. Maruf Morshed, Elwaleed Awad Khidir
Abstract:
Direct Current (DC) servo motors, or simply DC motors, play an important role in many industrial applications such as the manufacturing of plastics, precise positioning of equipment, and operating computer-controlled systems where the speed of feed control, maintaining the position, and ensuring a consistently desired output are critical. These parameters can be controlled with the help of control systems such as the Proportional Integral Derivative (PID) controller. The aim of the current work is to investigate the effects of Proportional (P) and Integral (I) controllers on the steady-state and transient response of the DC motor. The controller gains are varied to observe their effects on the error, damping, and stability of the steady and transient motor response. The current investigation is conducted experimentally on a servo trainer CE 110 using the analog PI controller CE 120, and theoretically using Simulink in MATLAB. Both experimental and theoretical work involve varying the integral controller gain to obtain the response to a steady-state input; varying, individually, the proportional and integral controller gains to obtain the response to a step input function at a certain frequency; and theoretically obtaining the proportional and integral controller gains for desired values of damping ratio and response frequency. Results reveal that a proportional controller helps reduce the steady-state and transient error between the input signal and output response and makes the system more stable. In addition, it also speeds up the response of the system. On the other hand, the integral controller eliminates the error but tends to make the system unstable, with induced oscillations and a slow response to eliminate the error.
From the current work, it is desired to achieve a stable response of the servo motor in terms of its angular velocity subjected to steady-state and transient input signals by utilizing the strengths of both P and I controllers.
Keywords: DC servo motor, proportional controller, integral controller, controller gain optimization, Simulink
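The reported behaviour, that a P controller leaves a steady-state error which the I term removes, can be reproduced with a simple Euler simulation of a first-order motor speed model. The gain and time constant below are made-up illustration values, not the CE 110 trainer's parameters:

```python
def simulate_pi(kp, ki, setpoint=100.0, dt=0.001, steps=20000):
    """Euler simulation of a first-order DC motor speed model
    tau * domega/dt = -omega + k * u, under PI control.
    Hypothetical plant: gain k = 2, time constant tau = 0.5 s."""
    k, tau = 2.0, 0.5
    omega, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - omega
        integral += error * dt            # I term accumulates the error
        u = kp * error + ki * integral    # PI control law
        omega += dt * (-omega + k * u) / tau
    return omega
```

With kp = 1 and ki = 0 the speed settles at k*kp/(1 + k*kp) of the setpoint (about 66.7 here), the classic proportional offset; adding ki = 2 drives the speed to the setpoint exactly, at the cost of slower, potentially oscillatory dynamics.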
Procedia PDF Downloads 110
222 Neurofeedback for Anorexia-RelaxNeuron-Aimed in Dissolving the Root Neuronal Cause
Authors: Kana Matsuyanagi
Abstract:
Anorexia Nervosa (AN) is a psychiatric disorder characterized by a relentless pursuit of thinness and strict restriction of food. Current therapeutic approaches for AN predominantly revolve around outpatient psychotherapies, which create significant financial barriers for the majority of affected patients, hindering their access to treatment. Nonetheless, AN exhibits one of the highest mortality and relapse rates among psychological disorders, underscoring the urgent need to provide patients with an affordable self-treatment tool, enabling those unable to access conventional medical intervention to address their condition autonomously. To this end, a neurofeedback software application, termed RelaxNeuron, was developed with the objective of providing an economical and portable means to aid individuals in self-managing AN. Electroencephalography (EEG) was chosen as the preferred modality for RelaxNeuron, as it aligns with the study's goal of supplying a cost-effective and convenient solution for addressing AN. The primary aim of the software is to ameliorate the negative emotional responses towards food stimuli and the accompanying aberrant eye-tracking patterns observed in AN patients, ultimately alleviating the profound fear of food, an elemental symptom and conceivably the fundamental etiology of AN. The core functionality of RelaxNeuron hinges on the acquisition and analysis of EEG signals, alongside an electrocardiogram (ECG) signal, to infer the user's emotional state while viewing dynamic food-related imagery on the screen. Moreover, the software quantifies the user's performance in accurately tracking the moving food image. Subsequently, these two parameters undergo further processing in the subsequent algorithm, informing the delivery of either negative or positive feedback to the user.
Preliminary test results have exhibited promising outcomes, suggesting the potential advantages of employing RelaxNeuron in the treatment of AN, as evidenced by its capacity to enhance emotional regulation and attentional processing through repetitive and persistent therapeutic interventions.
Keywords: Anorexia Nervosa, fear conditioning, neurofeedback, BCI
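The EEG side of such a feedback loop typically reduces to band-power estimation. The abstract does not specify RelaxNeuron's internal algorithm, so the following is only a generic sketch of how a band-power feature could be computed from one EEG channel:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power of a single-channel signal inside [f_lo, f_hi] Hz,
    from a one-sided FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].sum()
```

A neurofeedback loop would compare such band powers (e.g. alpha vs. beta) against a per-user baseline to decide whether to deliver positive or negative feedback.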
Procedia PDF Downloads 44
221 Pathologies in the Left Atrium Reproduced Using a Low-Order Synergistic Numerical Model of the Cardiovascular System
Authors: Nicholas Pearce, Eun-jin Kim
Abstract:
Pathologies of the cardiovascular (CV) system remain a serious and deadly health problem for human society. Computational modelling provides a relatively accessible tool for diagnosis, treatment, and research into CV disorders. However, numerical models of the CV system have largely focused on the function of the ventricles, frequently overlooking the behaviour of the atria. Furthermore, in the study of the pressure-volume relationship of the heart, a key diagnostic of cardiovascular pathologies, previous works often evoke the popular yet questionable time-varying elastance (TVE) method, which imposes the pressure-volume relationship instead of calculating it consistently. Despite the convenience of the TVE method, there have been various indications of its limitations and of the need to check its validity in different scenarios. A model of the combined left ventricle (LV) and left atrium (LA) is presented, which consistently considers various feedback mechanisms in the heart without having to use the TVE method. Specifically, a synergistic model of the left ventricle is extended and modified to include the function of the LA. The synergy of the original model is preserved by modelling the electro-mechanical and chemical functions of the micro-scale myofiber for the LA and integrating it with the microscale and macro-organ-scale heart dynamics of the left ventricle and CV circulation. The atrioventricular node function is included and forms the conduction pathway for electrical signals between the atria and ventricle. The model reproduces the essential features of LA behaviour, such as the two-phase pressure-volume relationship and the classic figure-of-eight pressure-volume loops. Using this model, disorders in the internal cardiac electrical signalling are investigated by recreating the mechano-electric feedback (MEF), which is impossible where the time-varying elastance method is used.
The effects of AV node block and slow conduction are then investigated in the presence of an atrial arrhythmia. It is found that electrical disorders and arrhythmia in the LA degrade the CV system by reducing the cardiac output, power, and heart rate.
Keywords: cardiovascular system, left atrium, numerical model, MEF
Procedia PDF Downloads 115
220 Using Multiomic Plasma Profiling From Liquid Biopsies to Identify Potential Signatures for Disease Diagnostics in Late-Stage Non-small Cell Lung Cancer (NSCLC) in Trinidad and Tobago
Authors: Nicole Ramlachan, Samuel Mark West
Abstract:
Lung cancer is the leading cause of cancer-associated deaths in North America, with the vast majority being non-small cell lung cancer (NSCLC), which has a five-year survival rate of only 24%. Non-invasive discovery of biomarkers associated with early diagnosis of NSCLC can enable precision oncology efforts using liquid biopsy-based multiomics profiling of plasma. Although tissue biopsies are currently the gold standard for tumor profiling, this method presents many limitations: it is invasive, risky, sometimes hard to perform, and yields only a limited tumor profile. Blood-based tests provide a less invasive, more robust approach to interrogate both tumor- and non-tumor-derived signals. We intend to examine 30 stage III-IV NSCLC patients pre-surgery and collect plasma samples. Cell-free DNA (cfDNA) will be extracted from plasma, and next-generation sequencing (NGS) performed. Through the analysis of tumor-specific alterations, including single nucleotide variants (SNVs), insertions, deletions, copy number variations (CNVs), and methylation alterations, we intend to identify tumor-derived DNA (ctDNA) among the total pool of cfDNA. This would generate data to be used as an accurate form of cancer genotyping for diagnostic purposes. Liquid biopsies offer opportunities to improve the surveillance of cancer patients during treatment and would supplement current diagnosis and tumor profiling strategies previously not readily available in Trinidad and Tobago. It would be useful and advantageous to use this approach in diagnosis and tumor profiling as well as to monitor cancer patients, providing early information regarding disease evolution and treatment efficacy, and to reorient treatment strategies in time, thereby improving clinical oncology outcomes.
Keywords: genomics, multiomics, clinical genetics, genotyping, oncology, diagnostics
Procedia PDF Downloads 161
219 Heterogeneity of Genes Encoding the Structural Proteins of Avian Infectious Bronchitis Virus
Authors: Shahid Hussain Abro, Siamak Zohari, Lena H. M. Renström, Désirée S. Jansson, Faruk Otman, Karin Ullman, Claudia Baule
Abstract:
Infectious bronchitis is an acute, highly contagious respiratory, nephropathogenic and reproductive disease of poultry that is caused by infectious bronchitis virus (IBV). The present study used a large data set of structural gene sequences, including newly generated ones and sequences available in the GenBank database, to further analyze the diversity and to identify selective pressures and recombination spots. There were some deletions or insertions in the analyzed regions in isolates of the Italy-02 and D274 genotypes. In contrast, no insertions or deletions were observed in the isolates of the Massachusetts and 4/91 genotypes. The hypervariable nucleotide sequence regions spanned positions 152–239, 554–582, 686–737 and 802–912 in the S1 sub-unit of all analyzed genotypes. The nucleotide sequence data of the E gene showed that this gene was comparatively unstable and subject to a high frequency of mutations. The M gene showed substitutions consistently distributed, except for a region between nucleotide positions 250–680 that remained conserved. The lowest variation in the nucleotide sequences of ORF5a was observed in the isolates of the D274 genotype, while ORF5b and N gene sequences showed highly conserved regions and were less subject to variation. Genes ORF3a, ORF3b, M, ORF5a, ORF5b and N presented negative selective pressure among the analyzed isolates; however, some regions of the ORFs showed favorable selective pressure(s). The S1 and E proteins were subject to a high rate of mutational substitutions and non-synonymous amino acid changes. Strong signals of recombination beginning and ending breakpoints were observed in the S and N genes. Overall, the results of this study reveal that the strong selective pressures on the E and M genes and the high frequency of substitutions in the S gene can likely be considered the main determinants in the evolution of IBV.
Keywords: IBV, avian infectious bronchitis, structural genes, genotypes, genetic diversity
Procedia PDF Downloads 435
218 The Role of Group Dynamics in Creativity: A Study Case from Italy
Authors: Sofya Komarova, Frashia Ndungu, Alessia Gavazzoli, Roberta Mineo
Abstract:
Modern society requires people to be flexible and to develop innovative solutions to unexpected situations. Creativity refers to the “interaction among aptitude, process, and the environment by which an individual or group produces a perceptible product that is both novel and useful as defined within a social context”. It allows humans to produce novel ideas, generate new solutions, and express themselves uniquely. Only a few scientific studies have examined the influence of group dynamics on individual creativity. Gaps remain in the research on creative thinking, despite the fact that collaborative effort frequently results in the enhanced production of new information and knowledge; it is therefore critical to evaluate creativity in social settings. The study aimed to explore the group dynamics of young adults in small group settings and the influence of these dynamics on their creativity. The study included 30 participants aged 20 to 25 who were attending university after completing a bachelor's degree. The participants were divided into groups of three, in gender-homogeneous and gender-heterogeneous groups. The groups’ creative task was tied to the Lego mosaic created for the Scintillae laboratory at the Reggio Children Foundation. Group dynamics were operationalized into patterns of behaviors classified into three major categories: 1) social interactions, 2) play, and 3) distraction. Data were collected through audio and video recording and observation. The qualitative data were converted into quantitative data using an observational coding system; they were then analyzed, revealing correlations between behaviors using median points and averages. For each participant and group, the percentages of represented behavior signals were computed. The findings revealed a link between social interaction, creative thinking, and creative activities. They also revealed that the more intense the social interaction, the lower the amount of creativity demonstrated.
This study bridges the research gap between group dynamics and creativity. The approach calls for further research on the relationship between creativity and social interaction.
Keywords: group dynamics, creative thinking, creative action, social interactions, group play
Procedia PDF Downloads 127
217 The Role of Hypothalamus Mediators in Energy Imbalance
Authors: Maftunakhon Latipova, Feruza Khaydarova
Abstract:
Obesity is considered a chronic metabolic disease that can occur at any age. Body weight is regulated through the complex interaction of interrelated systems that control the body's energy balance. Energy imbalance, in which the energy supplied by food exceeds the body's energy needs, is the cause of obesity and overweight. Obesity is closely related to impaired appetite regulation, and the hypothalamus is the key site for the neural regulation of food consumption. The hypothalamic nuclei are interconnected and interdependent in receiving, integrating, and sending hunger signals to regulate appetite. Purpose of the study: to identify markers of eating behavior. Materials and methods: Screening was carried out to identify eating disorders in 200 men and women aged 18 to 35 years with overweight or obesity and to examine the markers Orexin A and Neuropeptide Y. The participants completed questionnaires on eating disorders and hidden depression (on the Zung scale). Anthropometry included waist circumference, hip circumference, BMI, weight, and height. Based on the collected data, the participants were divided into three groups: people with obesity, people with overweight, and a control group of healthy people. Results: Of the 200 persons analysed, 86% had eating disorders; in 60% of these, the eating disorder was associated with childhood. According to the Zung test, about 37% were in a normal condition, 20% had mild depressive disorder, 25% had moderate depressive disorder, and 18% suffered from severe depressive disorder without knowing it. Group 1, people with obesity, had eating disorders and moderate or severe depressive disorder; group 2, people with overweight, had mild depressive disorder. According to the laboratory data, the first group had the lowest serum concentrations of Orexin A and Neuropeptide Y.
Conclusions: Overweight and obesity are the first signals of many diseases, and the prevention and detection of these disorders will prevent various diseases, including type 2 diabetes. The etiology of obesity is associated with eating disorders and with signal transmission in the orexinergic system of the hypothalamus.
Keywords: obesity, endocrinology, hypothalamus, overweight
216 Peer Bullying and Mentalization from the Perspective of Pupils
Authors: Anna Siegler
Abstract:
Bullying among peers is not uncommon; however, adults notice only a fraction of the cases of harassment in everyday life. Systemic approaches to bullying research put the whole school community at the focus of attention and propose that the solution should emerge from the culture of the school. Bystanders are essential to prevention and intervention as active agents rather than passive ones. To combat exclusion, stigmatization, and harassment, it is important that bystanders realize they have the power to take action. To prevent the escalation of violence, victims must believe that students and teachers will help them and that their environment is able to provide safety. The study is based on a scientific narrative psychological approach and focuses on examining students' different perspectives and how peers mentalize with each other in cases of bullying. The data comprised responses of students (N = 138) from three schools in Hungary, in three different areas of the country (Budapest, Martfű, and Barcs). The test battery included the Bullying Prevalence Questionnaire, the Interpersonal Reactivity Index, and an instruction to elicit narratives about bullying, whose effectiveness was tested in a pilot study. The results are in line with the findings of previous bullying research: victims mentalize less with their peers and experience greater personal distress in identity-threatening situations, focusing on their own difficulties rather than on social signals. This isolation is an adaptive response in the short term, although it seems to lead to a deficit in social skills later in life and makes it difficult for students to become socially integrated into society. In addition, the results show that students use more mental-state attribution when they report verbal bullying than in cases of physical abuse.
Those who witness physical harassment also witness concrete responses to the problem from teachers; in contrast, verbal abuse often goes without consequences. According to the results, students mentalize more in these stories because they have fewer normative explanations for what happened. By expanding the bullying literature, this research helps to find ways to reduce school violence through community development.
Keywords: bullying, mentalization, narrative, school culture