Search results for: error detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5094

414 A Simulated Evaluation of Model Predictive Control

Authors: Ahmed AlNouss, Salim Ahmed

Abstract:

Process control refers to the techniques used to control the variables in a process in order to maintain them at their desired values. Advanced process control (APC) is a broad term within the control domain that covers several kinds of process control and control-related tools, for example, model predictive control (MPC), statistical process control (SPC), fault detection and classification (FDC), and performance assessment. APC is often used to solve multivariable control problems, and MPC is one of only a few advanced control methods used successfully in industrial control applications. Advanced control is expected to bring many benefits to plant operation; however, the extent of the benefits is plant specific, and the application requires a large investment. This calls for an analysis of the expected benefits before the control is implemented. In a real plant, simulation studies are carried out along with some experimentation to determine the improvement in plant performance due to advanced control. In this research, such an exercise is undertaken to assess the need for APC. The main objectives of the paper are as follows: (1) to apply MPC to a number of simulations in order to establish the need for MPC by comparing its performance with that of proportional-integral-derivative (PID) controllers; (2) to study the effect of controller parameters on control performance; (3) to develop appropriate performance indices (PI) to compare the performance of different controllers and to develop a novel way to present a controller's tuning map. These objectives were achieved by applying a PID controller and a special type of MPC, dynamic matrix control (DMC), to the multi-tank process simulated in Loop-Pro. The controller performance was then evaluated while varying the controller parameters.
This evaluation was based on indices derived from the difference between the set point and the process variable, in order to compare the two controllers. The same approach was applied to the continuous stirred tank heater (CSTH) and continuous stirred tank reactor (CSTR) processes simulated in MATLAB, where custom programs were written to evaluate the performance of the PID and MPC controllers. Finally, these performance indices, along with their controller parameters, were plotted using SigmaPlot. As a result, the improvement in the performance of the control loops was quantified using the relevant indices to justify the need for and importance of advanced process control. It was also shown that, judged by appropriate indices, a predictive controller can improve the performance of a control loop significantly.
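The error-based indices the abstract refers to can be illustrated with a short sketch. The function below is not from the paper; it is a minimal, hypothetical implementation of three standard loop-performance indices (IAE, ISE, ITAE) computed from the set point and the process variable, assuming uniformly sampled data.

```python
import numpy as np

def performance_indices(t, setpoint, pv):
    """Standard control-loop performance indices from the error
    e(t) = set point - process variable (uniform sampling assumed)."""
    e = np.asarray(setpoint, float) - np.asarray(pv, float)
    dt = t[1] - t[0]
    return {
        "IAE": np.sum(np.abs(e)) * dt,        # integral of absolute error
        "ISE": np.sum(e ** 2) * dt,           # integral of squared error
        "ITAE": np.sum(t * np.abs(e)) * dt,   # time-weighted absolute error
    }

# Two first-order responses to a unit step: a sluggish loop vs a faster one
t = np.linspace(0.0, 10.0, 1001)
sp = np.ones_like(t)
slow = performance_indices(t, sp, 1 - np.exp(-t / 2.0))
fast = performance_indices(t, sp, 1 - np.exp(-t / 0.5))
```

Plotting such indices over a grid of controller parameters yields exactly the kind of tuning map the abstract describes.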

Keywords: advanced process control (APC), control loop, model predictive control (MPC), proportional integral derivatives (PID), performance indices (PI)

Procedia PDF Downloads 393
413 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models

Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg

Abstract:

Storm surge is an abnormal rise in water level caused by a storm. Accurate prediction of a storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques that combine several individual forecasts to produce an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature. For instance, Model Output Statistics (MOS) and running mean-bias removal are widely used techniques in the storm surge prediction domain. However, these methods have drawbacks; for instance, MOS is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced ones. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that creates a better forecast of sea level using several instances of Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether ensemble models perform better than any single forecast; to do so, we need to identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights when combining different forecast models. Third, we use these ensembles to forecast storm surge levels and compare them with several existing models in the literature. We then investigate whether developing a complex ensemble model is indeed needed; to this end, we use the simple average, one of the simplest and most widely used ensemble models, as a benchmark.
Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial, so we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared with the actual peak and its time. In this work, we analyze four hurricanes: Hurricanes Irene and Lee in 2011, Hurricane Sandy in 2012, and Hurricane Joaquin in 2015. Since Hurricane Irene developed at the end of August 2011 and Hurricane Lee started just after Irene at the beginning of September 2011, we treat them as a single contiguous hurricane event. The data set used for this study was generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.
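The skill-based weighting schemes described above can be sketched generically. The snippet below is an illustration, not the authors' implementation: it weights members by inverse RMSE or by correlation over a training period, with the simple average as the benchmark.

```python
import numpy as np

def skill_weighted_ensemble(train_obs, train_members, test_members, scheme="inv_rmse"):
    """Combine member forecasts using weights derived from each member's
    skill over a training period. Member arrays: shape (n_members, n_times)."""
    train_members = np.asarray(train_members, float)
    if scheme == "inv_rmse":
        rmse = np.sqrt(np.mean((train_members - train_obs) ** 2, axis=1))
        w = 1.0 / rmse
    elif scheme == "corr":
        w = np.array([np.corrcoef(m, train_obs)[0, 1] for m in train_members])
        w = np.clip(w, 0.0, None)   # drop negatively correlated members
    else:                            # simple-average benchmark
        w = np.ones(train_members.shape[0])
    w = w / w.sum()
    return w @ np.asarray(test_members, float)

# Synthetic example: one accurate and one poor member forecasting a sine signal
t = np.linspace(0.0, 6.0, 200)
obs = np.sin(t)
members = np.stack([obs + 0.05 * np.cos(t), obs + 0.5 * np.cos(t)])
weighted = skill_weighted_ensemble(obs, members, members, "inv_rmse")
average = skill_weighted_ensemble(obs, members, members, "mean")
```

With these synthetic members, the inverse-RMSE weighting down-weights the poor member and beats the simple average, matching the paper's qualitative finding.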

Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction

Procedia PDF Downloads 292
412 Clinical Value of 18F-FDG-PET Compared with CT Scan in the Detection of Nodal and Distant Metastasis in Urothelial Carcinoma or Bladder Cancer

Authors: Mohammed Al-Zubaidi, Katherine Ong, Pravin Viswambaram, Steve McCombie, Oliver Oey, Jeremy Ong, Richard Gauci, Ronny Low, Dickon Hayne

Abstract:

Objective: Lymph node involvement, along with distant metastasis, determines disease survival in patients with invasive bladder cancer; therefore, it is an essential determinant of therapeutic management and outcome. This retrospective study aims to determine the accuracy of 18F-FDG-PET in detecting lymphatic involvement and distant metastatic urothelial cancer compared with conventional CT staging. Method: A retrospective review of 76 patients with UC or BC who underwent surgery or confirmatory biopsy and were staged with both CT and 18F-FDG-PET (up to 8 weeks apart) between 2015 and 2020. Fifty-seven patients (75%) had formal pelvic LN dissection or biopsy of a suspicious metastasis. 18F-FDG-PET reports of positive sites were qualitative, depending on SUVmax. In CT scans, LNs enlarged by RECIST 1.1 criteria (>10 mm) and other qualitative findings suggesting metastasis were considered positive. Histopathological findings from surgical specimens or image-guided biopsies were considered the gold standard against which imaging reports were compared. 18F-FDG-avid or enlarged pelvic LNs with surgically proven nodal metastasis were considered true positives. Performance characteristics of 18F-FDG-PET and CT, including sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), were calculated. Results: Pelvic LN involvement was confirmed histologically in 10/57 (17.5%) patients. Sensitivity, specificity, PPV and NPV of CT for detecting pelvic LN metastases were 41.17% (95% CI: 18-67%), 100% (95% CI: 90-100%), 100% (95% CI: 59-100%) and 78.26% (95% CI: 64-89%), respectively. Sensitivity, specificity, PPV and NPV of 18F-FDG-PET for detecting pelvic LN metastases were 62.5% (95% CI: 35-85%), 83.78% (95% CI: 68-94%), 62.5% (95% CI: 35-85%), and 83.78% (95% CI: 68-94%), respectively. Pre-operative staging with 18F-FDG-PET identified distant metastatic disease in 9/76 (11.8%) patients that was occult on CT.
This retrospective study suggests that 18F-FDG-PET may be more sensitive than CT for detecting pelvic LN metastases. 7/76 (9.2%) patients avoided cystectomy because of 18F-FDG-PET-diagnosed metastases that were not reported on CT. Conclusion: 18F-FDG-PET is more sensitive than CT for pelvic LN metastases and could serve as a standard modality for bladder cancer staging, as it may change treatment by detecting lymph node metastases that are occult on CT. Further research involving randomised controlled trials comparing the diagnostic yield of 18F-FDG-PET and CT in detecting nodal and distant metastasis in UC or BC is warranted to confirm our findings.
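The performance characteristics reported above follow directly from a 2×2 confusion table. The sketch below is not part of the study and the counts are hypothetical; it simply shows the standard definitions.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table
    (imaging result vs. histopathological gold standard)."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate among diseased
        "specificity": tn / (tn + fp),  # true-negative rate among healthy
        "ppv": tp / (tp + fp),          # reliability of a positive report
        "npv": tn / (tn + fn),          # reliability of a negative report
    }

# Hypothetical counts for illustration only (not the study's data)
m = diagnostic_metrics(tp=5, fp=2, fn=3, tn=47)
```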

Keywords: FDG PET, CT scan, urothelial cancer, bladder cancer

Procedia PDF Downloads 104
411 Determination of Circulating Tumor Cells in Breast Cancer Patients by Electrochemical Biosensor

Authors: Gökçe Erdemir, İlhan Yaylım, Serap Erdem-Kuruca, Musa Mutlu Can

Abstract:

It has been established that most cancer deaths are caused by metastases rather than by the primary tumor. Cells that leave the primary tumor, enter the circulation, and cause metastasis in secondary organs are called "circulating tumor cells" (CTCs). The presence and number of circulating tumor cells have been associated with poor prognosis in many major types of cancer, including breast, prostate, and colorectal cancer. Knowledge of circulating tumor cells, which are seen as the main cause of cancer-related deaths due to metastasis, is thought to play a key role in the diagnosis and treatment of cancer. The fact that tissue biopsies used in cancer diagnosis and follow-up are invasive and insufficient for understanding the risk of metastasis and the progression of the disease has led to a search for new approaches. Liquid biopsy tests, performed on a small blood sample taken from the patient for the detection of CTCs, are easy and reliable, and they allow more than one sample to be taken over time to follow the prognosis. However, since these cells are found in very small numbers in the blood, they are very difficult to capture, and specially designed analytical techniques and devices are required. Methods based on the biological and physical properties of the cells are used to capture them in the blood. Early diagnosis is very important in following the prognosis of tumors of epithelial origin, such as breast, lung, colon and prostate tumors. Molecules such as EpCAM, vimentin, and cytokeratins are expressed on the surface of the few cells that pass from the primary tumor into the circulation and reach secondary organs, and they are used in the early-stage diagnosis of cancer. For example, increased EpCAM expression in breast and prostate cancer has been associated with prognosis. These molecules can be determined in blood or other body fluids taken from patients.
However, more sensitive methods are required to detect them when they are present at low levels over the course of the disease. The aim is to detect these molecules, found in very few cancer cells, with the help of sensitive, fast-responding biosensors, first in breast cancer cells cultured in vitro and then in blood samples taken from breast cancer patients. In this way, cancer cells can be diagnosed early and treated easily and effectively.

Keywords: electrochemical biosensors, breast cancer, circulating tumor cells, EpCAM, vimentin, cytokeratins

Procedia PDF Downloads 243
410 A POX Controller Module to Collect Web Traffic Statistics in SDN Environment

Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin

Abstract:

Software Defined Networking (SDN) is a new networking paradigm designed to facilitate managing, measuring, debugging and controlling the network dynamically, and to make it suitable for modern applications. Generally, measurement methods can be divided into two categories: active and passive. Active measurement injects test packets into the network in order to monitor their behaviour (the ping tool, for example), while passive measurement monitors existing traffic for the purpose of deriving measurement values. Both kinds of method are useful for collecting traffic statistics and for monitoring network traffic. Although there has been work on measuring traffic statistics in an SDN environment, it was only meant for measuring packet and byte rates of non-web traffic. In this study, a feasible method is designed to measure the number of packets and bytes over a given time and to obtain statistics for both web and non-web traffic. Web traffic refers to HTTP requests at the application layer, while non-web traffic refers to ICMP and TCP requests; this work is therefore more comprehensive than previous ones. With a module developed on the POX OpenFlow controller, information is collected from each active flow in the OpenFlow switch and presented on the Command Line Interface (CLI) and in Wireshark. The statistics displayed on the CLI and in Wireshark include the protocol type and the numbers of bytes and packets, among others. The module also shows, in the same statistics list, the number of flows added to the switch whenever traffic is generated between hosts.
To carry out this work effectively, our Python module sends a statistics request message to the switch every five seconds, asking for its current port and flow statistics; the switch replies with the required information in a statistics reply message. The POX controller is thus notified of, and updated with, any change in the network within a very short time. The aim of this study is therefore to prepare a list of the important statistics elements collected from the whole network, to be used in further research, particularly research dealing with the detection of network attacks that cause a sudden rise in the number of packets and bytes, such as Distributed Denial of Service (DDoS).
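The per-flow bookkeeping described above can be sketched independently of POX. The snippet below is illustrative only: the field names (`protocol`, `tp_src`, `tp_dst`, `packet_count`, `byte_count`) are assumptions standing in for the match fields and counters that a real OpenFlow flow-stats reply would carry.

```python
def classify_flows(flow_stats):
    """Aggregate flow statistics into web traffic (TCP port 80, i.e. HTTP)
    and non-web traffic (other TCP, ICMP), totalling flows/packets/bytes."""
    totals = {"web": {"flows": 0, "packets": 0, "bytes": 0},
              "non_web": {"flows": 0, "packets": 0, "bytes": 0}}
    for flow in flow_stats:
        is_web = flow["protocol"] == "tcp" and 80 in (
            flow.get("tp_src"), flow.get("tp_dst"))
        bucket = totals["web" if is_web else "non_web"]
        bucket["flows"] += 1
        bucket["packets"] += flow["packet_count"]
        bucket["bytes"] += flow["byte_count"]
    return totals

# One hypothetical statistics-reply message from the switch
reply = [
    {"protocol": "tcp", "tp_src": 52100, "tp_dst": 80,
     "packet_count": 120, "byte_count": 91000},
    {"protocol": "tcp", "tp_src": 52101, "tp_dst": 22,
     "packet_count": 40, "byte_count": 6100},
    {"protocol": "icmp", "packet_count": 10, "byte_count": 980},
]
totals = classify_flows(reply)
```

In an actual POX module, such a reply would typically arrive in a flow-stats event handler after the periodic statistics request is sent to the switch.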

Keywords: mininet, OpenFlow, POX controller, SDN

Procedia PDF Downloads 212
409 Digital Phase Shifting Holography in a Non-Linear Interferometer using Undetected Photons

Authors: Sebastian Töpfer, Marta Gilaberte Basset, Jorge Fuenzalida, Fabian Steinlechner, Juan P. Torres, Markus Gräfe

Abstract:

This work introduces a combination of digital phase-shifting holography with a non-linear interferometer using undetected photons. Non-linear interferometers can be used in combination with a measurement scheme called quantum imaging with undetected photons, which allows the wavelength used for sampling an object to be separated from the wavelength detected at the imaging sensor. This method has recently attracted increasing attention, as it allows exotic wavelengths (e.g., mid-infrared, ultraviolet) to be used for object interaction while keeping detection in spectral regions with highly developed, comparatively low-cost imaging sensors. The object information, including its transmission and phase influence, is recorded in the form of an interferometric pattern. To record these patterns, this work combines quantum imaging with undetected photons with digital phase-shifting holography using a minimal sampling of the interference. This extends the measurement capabilities of the quantum imaging scheme and brings it one step closer to application. Quantum imaging with undetected photons uses correlated photons generated by spontaneous parametric down-conversion in a non-linear interferometer to create indistinguishable photon pairs, which leads to an effect called induced coherence without induced emission. Placing an object inside the interferometer changes the interferometric pattern depending on the object's properties. Digital phase-shifting holography records multiple images of the interference at predetermined phase shifts to reconstruct the complete interference shape, which can afterwards be used to analyze the changes introduced by the object and infer its properties. An extensive characterization of this method was performed with a proof-of-principle setup. The measured spatial resolution, phase accuracy, and transmission accuracy are compared for different combinations of camera exposure time and number of interference sampling steps.
The current limits of the method are identified, pointing to further improvements. In summary, this work presents an alternative holographic measurement method using non-linear interferometers in combination with quantum imaging, enabling new ways of measuring and motivating continuing research.

Keywords: digital holography, quantum imaging, quantum holography, quantum metrology

Procedia PDF Downloads 79
408 Application Research of Stilbene Crystal for the Measurement of Accelerator Neutron Sources

Authors: Zhao Kuo, Chen Liang, Zhang Zhongbing, Ruan Jinlu, He Shiyi, Xu Mengxuan

Abstract:

Stilbene (C₁₄H₁₂) is well known as one of the most useful organic scintillators for the pulse shape discrimination (PSD) technique because of its good scintillation properties. An on-line acquisition system and an off-line acquisition system were developed for accelerator neutron source measurements carried out at the China Institute of Atomic Energy, using stilbene crystals coupled to photomultiplier tubes (PMTs) as detectors; the two systems were built, respectively, from several CAMAC standard plug-ins, NIM plug-ins and a neutron/γ discrimination plug-in (model 2160A), and from a digital oscilloscope with a high sampling rate. Pulse amplitude spectra and charge amplitude spectra were recorded in real time after neutron/γ discrimination, with best PSD figures of merit (FoMs) of 1.756 for the D-D accelerator neutron source and 1.393 for the D-T accelerator neutron source. With the on-line acquisition system, after subtracting the scattering background, the probability of neutron events among total events was 80% and the neutron detection efficiency was 5.21% for the D-D source; the corresponding values were 50% and 1.44% for the D-T source. Pulse waveforms were acquired randomly by the off-line acquisition system while the on-line system was running. The PSD FoMs obtained by the off-line system were 2.158 for the D-D source and 1.802 for the D-T source after off-line waveform-digitization processing with the charge integration method on just 1000 pulses. In addition, the probabilities of neutron events among total events obtained by the off-line system matched those of the on-line system very well. The pulse information recorded by the off-line system can be reused to adjust PSD parameters or methods and to obtain neutron charge amplitude spectra or pulse amplitude spectra by digital analysis of a limited number of pulses.
With a limited number of pulses, the off-line acquisition system showed measurement performance equivalent to or better than that of the on-line system, indicating a feasible method, based on stilbene crystal detectors, for measuring pulsed neutron sources such as accelerator sources that emit a large number of neutrons in a short time.
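The charge integration method and the figure of merit used above can be sketched generically. This is an illustrative reconstruction with synthetic pulses, not the authors' processing code; the decay constants, gate position, and noise level are hypothetical.

```python
import numpy as np

def psd_ratio(pulse, tail_start):
    """Charge-integration PSD parameter: fraction of the total pulse
    charge contained in the slow tail of the scintillation pulse."""
    return np.sum(pulse[tail_start:]) / np.sum(pulse)

def figure_of_merit(ratios_n, ratios_g):
    """FoM = peak separation / (FWHM_n + FWHM_g), taking FWHM as
    2.355 * sigma for roughly Gaussian PSD-parameter distributions."""
    sep = abs(np.mean(ratios_n) - np.mean(ratios_g))
    return sep / (2.355 * (np.std(ratios_n) + np.std(ratios_g)))

# Synthetic pulses: gammas decay fast; neutrons carry an extra slow component
t = np.arange(200.0)
gamma = np.exp(-t / 5.0)
neutron = 0.8 * np.exp(-t / 5.0) + 0.2 * np.exp(-t / 50.0)

rng = np.random.default_rng(0)
ratios_g = [psd_ratio(gamma * rng.normal(1.0, 0.02, t.size), 20) for _ in range(500)]
ratios_n = [psd_ratio(neutron * rng.normal(1.0, 0.02, t.size), 20) for _ in range(500)]
fom = figure_of_merit(ratios_n, ratios_g)
```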

Keywords: stilbene crystal, accelerator neutron source, neutron/γ discrimination, figure of merit, CAMAC, waveform digitization

Procedia PDF Downloads 172
407 Update on Epithelial Ovarian Cancer (EOC), Types, Origin, Molecular Pathogenesis, and Biomarkers

Authors: Salina Yahya Saddick

Abstract:

Ovarian cancer remains the most lethal gynecological malignancy due to the lack of highly sensitive and specific screening tools for the detection of early-stage disease. The ovarian surface epithelium (OSE) provides the progenitor cells for 90% of human ovarian cancers. Recent morphologic, immunohistochemical and molecular genetic studies have led to a new paradigm for the pathogenesis and origin of epithelial ovarian cancer (EOC), based on a dualistic model of carcinogenesis that divides EOC into two broad categories, designated Types I and II, characterized by specific mutations, including KRAS, BRAF, ERBB2, CTNNB1, PTEN, PIK3CA, ARID1A, and PPP2R1A, which target specific cell signaling pathways. Type I tumors are relatively genetically stable and typically display a variety of somatic sequence mutations that include KRAS, BRAF, PTEN, PIK3CA, CTNNB1 (the gene encoding beta-catenin), ARID1A and PPP2R1A, but very rarely TP53. The cancer stem cell (CSC) hypothesis postulates that the tumorigenic potential of CSCs is confined to a very small subset of tumor cells, defined by their ability to self-renew and differentiate, leading to the formation of a tumor mass. Among potential protein markers, miRNAs are promising biomarkers, as they are remarkably stable, allowing isolation and analysis from tissues and from blood, where they are found as free circulating nucleic acids and in mononuclear cells. Recently, genomic analyses have identified biomarkers and potential therapeutic targets for ovarian cancer, notably FGF18, which plays an active role in controlling migration, invasion, and tumorigenicity of ovarian cancer cells through NF-κB activation, increasing the production of oncogenic cytokines and chemokines. This review summarizes updated information on epithelial ovarian cancers and points to the most recent ongoing research.

Keywords: epithelial ovarian cancers, somatic sequence mutations, cancer stem cell (CSC), potential protein, biomarker, genomic analysis, FGF18 biomarker

Procedia PDF Downloads 359
406 Quantum Coherence Sets the Quantum Speed Limit for Mixed States

Authors: Debasis Mondal, Chandan Datta, S. K. Sazim

Abstract:

Quantum coherence is a key resource, like entanglement and discord, in quantum information theory. Wigner-Yanase skew information, which was shown to be the quantum part of the uncertainty, has recently been proposed as an observable measure of quantum coherence. On the other hand, the quantum speed limit (QSL) has been established as an important notion for developing ultra-fast quantum computers and communication channels. Here, we show that these two quantities are related, and we thus cast coherence as a resource to control the speed of quantum communication. In this work, we address three basic and fundamental questions. There have been rigorous attempts to achieve tighter evolution time bounds and to generalize them for mixed states. However, we are yet to know: (i) What is the ultimate limit of quantum speed? (ii) Can we measure this speed of quantum evolution in interferometry by measuring a physically realizable quantity? Most of the bounds in the literature are either not measurable in interference experiments or not tight enough. As a result, they cannot be used effectively in experiments on quantum metrology, quantum thermodynamics, and quantum communication, and especially in Unruh effect detection, where a small fluctuation in a parameter needs to be detected. Therefore, a search for the tightest yet experimentally realisable bound is a need of the hour. It would be much more interesting if one could relate various properties of states or operations, such as coherence, asymmetry, dimension, and quantum correlations, to the QSL. Although such understanding may help us to control and manipulate the speed of communication, apart from particular cases such as the Josephson junction and the multipartite scenario, there has been little advancement in this direction. Therefore, we ask a third question: (iii) Can we relate such quantities to the QSL?
In this paper, we address these fundamental questions and show that quantum coherence or asymmetry plays an important role in setting the QSL. An important question in the study of quantum speed limits is how they behave under classical mixing and partial elimination of states, since the answer may help us properly choose a state or evolution operator to control the speed limit. We address this question and show that the product of the evolution time bound and the quantum part of the uncertainty in energy, i.e., the quantum coherence or asymmetry of the state with respect to the evolution operator, decreases under classical mixing and partial elimination of states.
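The central quantity, the Wigner-Yanase skew information I(ρ, H) = −½ Tr([√ρ, H]²), can be computed numerically. The snippet below is a generic sketch on a qubit example, not the authors' derivation; it also checks the convexity under classical mixing that the abstract alludes to.

```python
import numpy as np

def skew_information(rho, H):
    """Wigner-Yanase skew information I(rho, H) = -1/2 Tr([sqrt(rho), H]^2),
    the 'quantum part' of the uncertainty of observable H in state rho."""
    vals, vecs = np.linalg.eigh(rho)
    sqrt_rho = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T
    comm = sqrt_rho @ H - H @ sqrt_rho
    return float(np.real(-0.5 * np.trace(comm @ comm)))

sz = np.diag([1.0, -1.0])     # Pauli-Z as the evolution generator
plus = np.full((2, 2), 0.5)   # |+><+|: pure, maximally coherent in Z basis
mixed = np.diag([0.7, 0.3])   # incoherent (commutes with sz)
```

For a pure state the skew information reduces to the ordinary variance of H, and it vanishes for states that commute with H.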

Keywords: completely positive trace preserving maps, quantum coherence, quantum speed limit, Wigner-Yanase skew information

Procedia PDF Downloads 333
405 Assessment of Influence of Short-Lasting Whole-Body Vibration on Joint Position Sense and Body Balance–A Randomised Masked Study

Authors: Anna Slupik, Anna Mosiolek, Sebastian Wojtowicz, Dariusz Bialoszewski

Abstract:

Introduction: Whole-body vibration (WBV) uses high-frequency mechanical stimuli generated by a vibration plate and transmitted through bone, muscle and connective tissue to the whole body. Research has shown that long-term vibration-plate training improves neuromuscular facilitation, especially in the afferent neural pathways responsible for conducting vibration and proprioceptive stimuli, as well as muscle function, balance and proprioception. Some researchers suggest that the vibration stimulus briefly inhibits the conduction of afferent signals from proprioceptors and can interfere with the maintenance of body balance. The aim of this study was to evaluate the influence of a single set of whole-body-vibration exercises on joint position sense and body balance. Material and methods: The study enrolled 55 people aged 19-24 years, randomly divided into a test group (30 persons) and a control group (25 persons). Both groups performed the same set of exercises on a vibration plate. In the test group, a frequency of 20 Hz and an amplitude of 3 mm were used; the control group performed the exercises on the vibration plate while it was off. All participants performed six dynamic exercises lasting 30 seconds each, with 60 seconds of rest between them. The exercises involved the large muscle groups of the trunk, pelvis and lower limbs. Measurements were carried out before and immediately after exercise. Joint position sense (JPS) was measured in the knee joint for a starting position of 45° in an open kinematic chain, with JPS error measured using a digital inclinometer. Balance was assessed in a standing position with both feet on the ground, with eyes open and closed (each test lasting 30 s), using the Matscan system with FootMat 7.0 SAM software. The surface of the confidence ellipse and the front-back and right-left sway were measured to assess balance.
Statistical analysis was performed using Statistica 10.0 PL software. Results: There were no significant differences between the groups, either before or after the exercise (p > 0.05). JPS did not change significantly in either the test group (10.7° vs. 8.4°) or the control group (9.0° vs. 8.4°). No significant differences were found in any of the balance parameters, with eyes open or closed, in either group (p > 0.05). Conclusions: 1. No deterioration in proprioception or balance was observed immediately after the vibration stimulus, suggesting that vibration-induced blockage of proprioceptive conduction has only a short-lasting effect that persists only while the vibration stimulus is present. 2. Short-term use of vibration in treatment does not impair proprioception and appears to be safe for patients with proprioceptive impairment. 3. These results should be supplemented by an assessment of proprioception during the application of vibration stimuli; additionally, the impact of the vibration parameters used in the exercises should be evaluated.

Keywords: balance, joint position sense, proprioception, whole body vibration

Procedia PDF Downloads 313
404 Levels of Heavy Metals and Arsenic in Sediment and in Clarias gariepinus of Lake Ngami

Authors: Nashaat Mazrui, Oarabile Mogobe, Barbara Ngwenya, Ketlhatlogile Mosepele, Mangaliso Gondwe

Abstract:

Over the last several decades, the world has seen a rapid increase in activities such as deforestation, agriculture, and energy use. Subsequently, trace elements are being deposited into our water bodies, where they can accumulate to toxic levels in aquatic organisms and be transferred to humans through fish consumption. Thus, though fish is a good source of essential minerals and omega-3 fatty acids, it can also be a source of toxic elements. Monitoring trace elements in fish is important for the proper management of aquatic systems and the protection of human health. The aim of this study was to determine concentrations of trace elements in sediment and in muscle tissue of Clarias gariepinus at Lake Ngami, in the Okavango Delta in northern Botswana, during low floods. The fish were bought from local fishermen, and samples of muscle tissue were acid-digested and analyzed for iron, zinc, copper, manganese, molybdenum, nickel, chromium, cadmium, lead, and arsenic using inductively coupled plasma optical emission spectroscopy (ICP-OES). Sediment samples were also collected and analyzed for the same elements and for organic matter content. Results show that in all samples iron was found in the greatest amount, while cadmium was below the detection limit. Generally, the concentrations of elements in sediment were higher than in fish, except for zinc and arsenic: while the concentration of zinc was similar in the two media, arsenic was almost three times higher in fish than in sediment. To evaluate the risk to human health from fish consumption, the target hazard quotient (THQ) and cancer risk for an average adult in Botswana, sub-Saharan Africa, and the riparian communities of the Okavango Delta were calculated for each element. All elements were found to be well below regulatory limits and pose no threat to human health, except arsenic. The results suggest that other benthic-feeding fish species could potentially have high arsenic levels too.
This has serious implications for human health, especially for riparian households, for whom fish is a key component of food and nutrition security.
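The target hazard quotient follows the standard US EPA formulation. The sketch below is illustrative only: the exposure values (concentration, ingestion rate, reference dose, body weight) are hypothetical, not the study's measured data.

```python
def target_hazard_quotient(conc_mg_per_kg, intake_g_per_day, rfd_mg_per_kg_day,
                           body_weight_kg, exposure_freq_days=365,
                           exposure_duration_yr=70):
    """US EPA target hazard quotient for non-carcinogenic risk:
    THQ = (EF * ED * FIR * C) / (RfD * BW * AT). THQ >= 1 flags concern."""
    averaging_time_days = exposure_duration_yr * 365  # non-carcinogenic AT
    exposure = (exposure_freq_days * exposure_duration_yr
                * intake_g_per_day * 1e-3 * conc_mg_per_kg)  # g/day -> kg/day
    return exposure / (rfd_mg_per_kg_day * body_weight_kg * averaging_time_days)

# Hypothetical example: arsenic at 0.5 mg/kg in fish muscle, 50 g/day intake,
# oral RfD 0.0003 mg/kg/day (inorganic arsenic), 60 kg body weight
thq = target_hazard_quotient(0.5, 50.0, 0.0003, 60.0)
```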

Keywords: arsenic, African sharptooth catfish, Okavango Delta, trace elements

Procedia PDF Downloads 174
403 Digital Immunity System for Healthcare Data Security

Authors: Nihar Bheda

Abstract:

Protecting digital assets such as networks, systems, and data from advanced cyber threats is the aim of Digital Immunity Systems (DIS), which are a subset of cybersecurity. With features like continuous monitoring, coordinated reactions, and long-term adaptation, DIS seeks to mimic biological immunity. This minimizes downtime by automatically identifying and eliminating threats. Traditional security measures, such as firewalls and antivirus software, are insufficient for enterprises, such as healthcare providers, given the rapid evolution of cyber threats. The number of medical record breaches that have occurred in recent years is proof that attackers are finding healthcare data to be an increasingly valuable target. However, obstacles to enhancing security include outdated systems, financial limitations, and a lack of knowledge. DIS is an advancement in cyber defenses designed specifically for healthcare settings. Protection akin to an "immune system" is produced by core capabilities such as anomaly detection, access controls, and policy enforcement. Coordination of responses across IT infrastructure to contain attacks is made possible by automation and orchestration. Massive amounts of data are analyzed by AI and machine learning to find new threats. After an incident, self-healing enables services to resume quickly. The implementation of DIS is consistent with the healthcare industry's urgent requirement for resilient data security in light of evolving risks and strict guidelines. With resilient systems, it can help organizations lower business risk, minimize the effects of breaches, and preserve patient care continuity. DIS will be essential for protecting a variety of environments, including cloud computing and the Internet of medical devices, as healthcare providers quickly adopt new technologies. DIS lowers traditional security overhead for IT departments and offers automated protection, even though it requires an initial investment. 
In the near future, DIS may prove to be essential for small clinics, blood banks, imaging centers, large hospitals, and other healthcare organizations. Cyber resilience can become attainable for the whole healthcare ecosystem with customized DIS implementations.

Keywords: digital immunity system, cybersecurity, healthcare data, emerging technology

Procedia PDF Downloads 51
402 Surface Enhanced Infrared Absorption for Detection of Ultra Trace of 3,4- Methylene Dioxy- Methamphetamine (MDMA)

Authors: Sultan Ben Jaber

Abstract:

Optical properties of molecules exhibit dramatic changes when the molecules are adsorbed close to nano-structured metallic surfaces such as gold and silver nanomaterials. This phenomenon has opened a wide range of research aimed at improving the efficiency of conventional spectroscopies. A well-known technique that has received intensive study is surface-enhanced Raman spectroscopy (SERS); since the first observation of the SERS phenomenon, researchers have published a great number of articles on the potential mechanisms behind this effect as well as on materials developed to maximize the enhancement. Infrared and Raman spectroscopy are complementary techniques; thus, surface-enhanced infrared absorption (SEIRA) also shows a noticeable enhancement of molecular signals under mid-IR excitation on nano-metallic substrates. In SEIRA, vibrational modes that give rise to changes in dipole moment perpendicular to the nano-metallic substrate are enhanced up to 200 times relative to the free molecule's modes. SEIRA spectroscopy is promising for the characterization and identification of adsorbed molecules on metallic surfaces, especially at trace levels. IR reflection-absorption spectroscopy (IRAS) is a well-known technique for measuring IR spectra of molecules adsorbed on metallic surfaces; however, SEIRA spectroscopy is up to 50 times more sensitive than IRAS. SEIRA enhancement has been observed for a wide range of molecules adsorbed on metallic substrates such as Au, Ag, Pd, Pt, Al, and Ni, with Au and Ag substrates exhibiting the highest enhancement. In this work, trace levels of 3,4-methylenedioxymethamphetamine (MDMA) were detected using gold nanoparticle (AuNP) substrates with surface-enhanced infrared absorption (SEIRA). AuNPs were first prepared and washed, then mixed with different concentrations of MDMA samples. Substrate fabrication prior to SEIRA measurements consisted of mixing the AuNPs and MDMA samples followed by vigorous stirring.
The stirring step is particularly crucial, as stirring allows molecules to be robustly adsorbed on the AuNPs. Remarkable SEIRA enhancement was thus observed for MDMA samples even at trace levels, demonstrating the robustness of our approach to preparing SEIRA substrates.
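As a hedged illustration of the enhancement-factor idea invoked in this abstract (this is not the authors' code), the figure of merit is the per-molecule signal on the substrate divided by the per-molecule signal of the free reference molecules. The band intensities and molecule counts below are hypothetical values chosen only so that the result matches the ~200× enhancement mentioned in the text:

```python
def enhancement_factor(i_seira, n_seira, i_ref, n_ref):
    """Surface-enhancement factor: per-molecule signal on the substrate
    divided by the per-molecule signal of the free (reference) molecules."""
    return (i_seira / n_seira) / (i_ref / n_ref)

# Hypothetical band intensities (arbitrary units) and probed molecule counts
ef = enhancement_factor(i_seira=4.0, n_seira=1e8, i_ref=1.0, n_ref=5e9)
```

With these illustrative numbers the function returns an enhancement factor of about 200, i.e., each adsorbed molecule contributes roughly 200 times more signal than a free molecule.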

Keywords: surface-enhanced infrared absorption (SEIRA), gold nanoparticles (AuNPs), amphetamines, methylene dioxy- methamphetamine (MDMA), enhancement factor

Procedia PDF Downloads 51
401 Navigating through Organizational Change: TAM-Based Manual for Digital Skills and Safety Transitions

Authors: Margarida Porfírio Tomás, Paula Pereira, José Palma Oliveira

Abstract:

Robotic grasping is advancing rapidly, but transferring techniques from rigid to deformable objects remains a challenge. Deformable and flexible items, such as food containers, demand nuanced handling due to their changing shapes. Bridging this gap is crucial for applications in food processing, surgical robotics, and household assistance. AGILEHAND, a Horizon project, focuses on developing advanced technologies for sorting, handling, and packaging soft and deformable products autonomously. These technologies serve as strategic tools to enhance flexibility, agility, and reconfigurability within the production and logistics systems of European manufacturing companies. Key components include intelligent detection, self-adaptive handling, efficient sorting, and agile, rapid reconfiguration. The overarching goal is to optimize work environments and equipment, ensuring both efficiency and safety. As new technologies emerge in the food industry, there will be implications for the labour force, including safety concerns and the acceptance of the new technologies. To address these implications, AGILEHAND emphasizes the integration of the social sciences and humanities, for example through the application of the Technology Acceptance Model (TAM). The project aims to create a change management manual that will outline strategies for developing digital skills and managing health and safety transitions. It will also provide best practices and models for organizational change. Additionally, AGILEHAND will design effective training programs to enhance employee skills and knowledge. This information will be obtained through a combination of case studies, structured interviews, questionnaires, and a comprehensive literature review. The project will explore how organizations adapt during periods of change and identify factors influencing employee motivation and job satisfaction.
This project received funding from the European Union's Horizon 2020/Horizon Europe research and innovation program under grant agreement No. 101092043 (AGILEHAND).

Keywords: change management, technology acceptance model, organizational change, health and safety

Procedia PDF Downloads 27
400 The Investigation of Endogenous Intoxication and Lipid Peroxidation in Patients with Giardiasis Before and After Treatment

Authors: R. H. Begaydarova, B. Zh. Kultanov, B. T. Esilbaeva, G. E. Nasakaeva, Y. Yukhnevich, G. K. Alshynbekova, A. E. Dyusembaeva

Abstract:

Background: The level of middle molecules of peptides (MMP) allows evaluation of the severity and prognosis of disease and serves as a criterion for the effectiveness of treatment. Detection of products of the lipid peroxidation cascade, such as conjugated dienes and malondialdehyde, in biological material plays an important role in understanding pathogenesis and in the diagnosis and prognosis of different parasitic diseases. The purpose of the study was to evaluate the state of endogenous intoxication and the indicators of lipid peroxidation in patients with giardiasis before and after treatment. Materials and methods: Endogenous intoxication was evaluated in patients with giardiasis from the level of MMP in the blood. The amount of MMP and of lipid peroxidation products was determined in the blood of 198 patients with giardiasis, of whom 129 were women (65%) and 69 were men (35%). For comparison, the MMP level was also measured in the blood of 84 healthy volunteers. Lipid peroxidation products were determined in 40 healthy men and women without giardiasis or a history of chronic disease. Data were processed by conventional methods of variation statistics; the arithmetic mean (M) and standard error (m) were calculated, and the t-test (t) was used to assess differences. Results: The level of MMP in the blood was significantly higher in patients with giardiasis than in the group of healthy men and women. The MMP concentration in the blood of women with giardiasis was 2.5 times greater than that of the women in the comparison group, and the MMP level in men with giardiasis exceeded the comparison value more than 6-fold. A decrease in the intensity of endogenous intoxication was observed two weeks after antigiardial therapy in both men and women. According to the study, a statistically significant increase in all studied parameters of the lipid peroxidation cascade was observed in the blood of men with giardiasis, with the exception of total primary products (NGN).
The treatment of giardiasis helped to stabilize the levels of almost all metabolites of the lipid peroxidation cascade. The exception was the level of malondialdehyde, which remained significantly elevated compared with the control group even after treatment. Conclusion: Thus, the MMP level was significantly higher in the blood of patients with giardiasis than in the comparison group, which is evidence of severe endogenous intoxication caused by Giardia infection. Accumulation of primary and secondary products of lipid peroxidation was observed in the blood of both men and women, and these processes tended to be more active in men than in women. Antigiardial therapy contributed to the normalization of almost all studied indicators of lipid peroxidation in the blood of participants, except the malondialdehyde level in the blood of men.
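As a hedged sketch of the group comparison described here (the abstract reports means M, standard errors m, and a t-test, but not the raw values), Welch's two-sample t statistic can be computed directly from summary statistics. All numbers below are hypothetical, not the study's data:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and approximate degrees of freedom
    (Welch-Satterthwaite) from summary statistics of two groups."""
    v1, v2 = sd1**2 / n1, sd2**2 / n2          # variances of the two means
    t = (mean1 - mean2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Hypothetical MMP levels: patients (n=198) vs. healthy controls (n=84)
t, df = welch_t(mean1=0.62, sd1=0.15, n1=198, mean2=0.25, sd2=0.08, n2=84)
```

With these illustrative inputs the statistic is large (t ≈ 27), consistent with the kind of clear-cut group difference the abstract reports; Welch's variant is used here because it does not assume equal variances.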

Keywords: enzymes of antioxidant protection, giardiasis, blood, treatment

Procedia PDF Downloads 220
399 Prediction of Cardiovascular Markers Associated With Aromatase Inhibitors Side Effects Among Breast Cancer Women in Africa

Authors: Jean Paul M. Milambo

Abstract:

Purpose: Aromatase inhibitors (AIs) are indicated in the treatment of hormone-receptor-positive breast cancer in postmenopausal women in various settings. Studies have shown cardiovascular events in some developed countries. To date, the data are sparse for evidence-based recommendations in African clinical settings due to a lack of cancer registries, capacity building, and surveillance systems. Therefore, this study was conducted to assess the feasibility of HyBeacon® probe genotyping, adjunctive to standard care, for timely prediction and diagnosis of AI-associated adverse events in breast cancer survivors in Africa. Methods: A cross-sectional study was conducted to assess knowledge of point-of-care testing (POCT) across six African countries using an online survey and telephone contact. The incremental cost-effectiveness ratio (ICER) was calculated from a diagnostic accuracy study, based on mathematical modeling. Results: One hundred twenty-six participants were considered for analysis (mean age = 61 years; SD = 7.11 years; 95% CI: 60-62 years). Comparison of HyBeacon® probe genotyping with Sanger sequencing showed a sensitivity of 99% (95% CI: 94.55% to 99.97%), specificity of 89.44% (95% CI: 87.25% to 91.38%), PPV of 51% (95% CI: 43.77% to 58.26%), and NPV of 99.88% (95% CI: 99.31% to 100.00%). Based on the mathematical model, the ICER was R7,044.55. Conclusion: POCT using HyBeacon® probe genotyping for AI-associated adverse events may be cost-effective in many African clinical settings. Integration of preventive measures for early detection and prevention, guided by breast cancer subtype diagnosis with specific clinical, biomedical, and genetic screenings, may improve cancer survivorship.
The feasibility of POCT was demonstrated, but implementation could be improved by integrating POCT within primary health care and referral cancer hospitals, with capacity-building activities at different levels of the health system. This finding is pertinent for a future envisioned implementation and global scale-up of POCT-based initiatives as part of risk communication strategies with clear management pathways.
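As a hedged illustration (not the authors' analysis), the four diagnostic accuracy figures reported in this abstract all derive from a single 2×2 confusion matrix. The counts below are hypothetical, chosen only because they approximately reproduce the reported point estimates:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix
    (tp/fp/fn/tn = true/false positives and negatives vs. the gold standard)."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts vs. Sanger sequencing, picked to mirror the abstract:
# sensitivity ~0.99, specificity ~0.894, PPV ~0.51, NPV ~0.999
m = diagnostic_metrics(tp=99, fp=95, fn=1, tn=805)
```

Note how a high NPV can coexist with a modest PPV when the condition is relatively rare in the tested population, which is why all four figures are worth reporting together.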

Keywords: breast cancer, diagnosis, point of care, South Africa, aromatase inhibitors

Procedia PDF Downloads 60
398 Molecular Diagnosis of a Virus Associated with Red Tip Disease and Its Detection by Non Destructive Sensor in Pineapple (Ananas comosus)

Authors: A. K. Faizah, G. Vadamalai, S. K. Balasundram, W. L. Lim

Abstract:

Pineapple (Ananas comosus) is a common crop in tropical and subtropical areas of the world. Malaysia once ranked among the top three pineapple producers in the world, after Hawaii and Brazil, in the 1960s and early 1970s. Moreover, the government recognizes the pineapple crop in the National Agriculture Policy as one of the priority commodities to be developed for domestic and international markets. However, the pineapple industry in Malaysia still faces numerous challenges, one of which is the management of disease and pests. Red tip disease of pineapple was first recognized about 20 years ago in a commercial pineapple stand located in Simpang Renggam, Johor, Peninsular Malaysia. Since its discovery, the causal agent of this disease has not been confirmed, and its epidemiology is still not fully understood. Nevertheless, the disease symptoms and the spread within the field seem to point toward viral infection. A bioassay test of nucleic acid extracted from red tip-affected pineapple was done on Nicotiana tabacum cv. Coker by rubbing the extracted sap. Localised lesions were observed 3 weeks after inoculation. Negative staining of the freshly inoculated N. tabacum cv. Coker showed the presence of membrane-bound spherical particles with an average diameter of 94.25 nm under the transmission electron microscope. The shape and size of the particles were similar to those of a tospovirus. SDS-PAGE analysis of partially purified virions from inoculated N. tabacum produced a strong and a faint protein band with molecular masses of approximately 29 kDa and 55 kDa. Partially purified virions from symptomatic field pineapple leaves showed bands with molecular masses of approximately 29 kDa, 39 kDa, and 55 kDa. These bands may indicate the nucleocapsid protein identity of a tospovirus.
Furthermore, a handheld sensor, the GreenSeeker, was used to detect red tip symptoms on pineapple non-destructively based on spectral reflectance, measured as the Normalized Difference Vegetation Index (NDVI). Red tip severity was estimated and correlated with NDVI, and linear regression models were calibrated and tested to estimate red tip disease severity from NDVI. Results showed a strong positive relationship between red tip disease severity and NDVI (r = 0.84).
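As a hedged sketch of the two quantities named above (not the authors' pipeline), NDVI is the standard normalized difference of near-infrared and red reflectance, and the reported r is a Pearson correlation between severity scores and NDVI readings. The reflectance values and severity scores below are hypothetical:

```python
import math

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical plot-level data: red tip severity score vs. measured NDVI
severity = [0, 1, 2, 3, 4]
ndvi_readings = [0.20, 0.35, 0.50, 0.55, 0.70]
r = pearson_r(severity, ndvi_readings)
```

A healthy leaf reflects strongly in the NIR band, so `ndvi(0.5, 0.1)` evaluates to about 0.67; the hypothetical data above are monotone, so the correlation comes out strongly positive, as in the abstract's reported trend.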

Keywords: pineapple, diagnosis, virus, NDVI

Procedia PDF Downloads 772
397 Development of Electrochemical Biosensor Based on Dendrimer-Magnetic Nanoparticles for Detection of Alpha-Fetoprotein

Authors: Priyal Chikhaliwala, Sudeshna Chandra

Abstract:

Liver cancer is one of the most common malignant tumors and has a poor prognosis, because it does not exhibit any symptoms in the early stage of the disease. An increased serum level of alpha-fetoprotein (AFP) is clinically considered a diagnostic marker for liver malignancy. Present diagnostic modalities include various types of immunoassays, radiological studies, and biopsy. However, these tests suffer from slow response times, require significant sample volumes, achieve limited sensitivity, and ultimately become expensive and burdensome to patients. Considering all these aspects, an electrochemical biosensor based on dendrimer-magnetic nanoparticles (MNPs) was designed. Dendrimers are novel nano-sized, three-dimensional molecules with monodisperse structures. Poly(amidoamine) (PAMAM) dendrimers with eight –NH₂ groups were synthesized using ethylenediamine as the core molecule via Michael addition reactions. Dendrimers provide the added advantage of not only stabilizing Fe₃O₄ NPs but also being capable of performing multiple electron redox events and binding multiple biological ligands at their dendritic end-surface. Fe₃O₄ NPs, owing to their superparamagnetic behavior, can be exploited for magneto-separation. Fe₃O₄ NPs were stabilized with the PAMAM dendrimer by an in situ co-precipitation method, and the surface coating was examined by FT-IR, XRD, VSM, and TGA analysis. Electrochemical behavior and kinetics were evaluated using CV, which revealed that the dendrimer-Fe₃O₄ NPs can be regarded as electrochemically active materials. The electrochemical immunosensor was constructed by immobilizing anti-AFP onto the dendrimer-MNPs via a glutaraldehyde conjugation reaction. The bioconjugates were then incubated with AFP antigen, and the immunosensor was characterized electrochemically, indicating successful immuno-binding events.
The binding events were further studied using magnetic particle imaging (MPI), a novel imaging modality in which Fe₃O₄ NPs are used as tracer molecules with positive contrast. Multicolor MPI was able to clearly localize the AFP antigen and antibody and their binding. These results demonstrate immense potential for biosensing and for enabling MPI of AFP in clinical diagnosis.

Keywords: alpha-fetoprotein, dendrimers, electrochemical biosensors, magnetic nanoparticles

Procedia PDF Downloads 124
396 Experimental Measurement of Equatorial Ring Current Generated by Magnetoplasma Sail in Three-Dimensional Spatial Coordinate

Authors: Masato Koizumi, Yuya Oshio, Ikkoh Funaki

Abstract:

Magnetoplasma Sail (MPS) is a future spacecraft propulsion concept that generates high levels of thrust by inducing an artificial magnetosphere to capture and deflect solar wind charged particles in order to transfer momentum to the spacecraft. When plasma is injected into the spacecraft's magnetic field region, a ring current drifts azimuthally on the equatorial plane about the dipole magnetic field generated by the current flowing through the solenoid on board the spacecraft. This ring current results in magnetosphere inflation, which improves the thrust performance of the MPS spacecraft. In the present study, the ring current was experimentally measured using three Rogowski current probes positioned in a circular array about a laboratory model of the MPS spacecraft. This investigation aims to determine the detailed structure of the ring current through physical experiments performed under two different magnetic field strengths, generated by applying 300 V and 600 V to the solenoid. The expected outcome was that the three current probes would detect the same current, since all three probes were positioned at an equal radial distance of 63 mm from the center of the solenoid. Although the experimental results were numerically implausible, probably due to procedural error, their trends revealed three aspects of ring current behavior. First, the drift direction of the ring current depended on the strength of the applied magnetic field. Second, the diamagnetic current developed at a radial distance not occupied by the three current probes in the presence of solar wind. Third, the ring current distribution varied along the circumferential path about the spacecraft's magnetic field.
Although this study yielded experimental evidence that differed from the original hypothesis, these three key findings inform two critical MPS design solutions that may improve thrust performance. The first design solution is the positioning of the plasma injection point: based on the first aspect of ring current behavior, the injection point must be located at a distance from, rather than in close proximity to, the MPS solenoid for the ring current to drift in the direction that results in magnetosphere inflation. The second design solution, following from the third aspect of ring current behavior, is a symmetrical configuration of plasma injection points. In this study, an asymmetrical configuration using one plasma source resulted in a non-uniform distribution of ring current along the azimuthal path, which distorts the geometry of the inflated magnetosphere and minimizes the deflection area for the solar wind. Therefore, to realize a ring current that best provides the maximum possible inflated magnetosphere, multiple plasma sources must be spaced evenly along the azimuthal path.

Keywords: Magnetoplasma Sail, magnetosphere inflation, ring current, spacecraft propulsion

Procedia PDF Downloads 298
395 Dynamics Pattern of Land Use and Land Cover Change and Its Driving Factors Based on a Cellular Automata Markov Model: A Case Study at Ibb Governorate, Yemen

Authors: Abdulkarem Qasem Dammag, Basema Qasim Dammag, Jian Dai

Abstract:

Change in land use and land cover (LU/LC) has a profound impact on an area's natural, economic, and ecological development, and the search for drivers of land cover change is one of the fundamental issues of LU/LC research. The study aimed to assess the spatio-temporal dynamics of LU/LC in the past and to predict the future using Landsat images by exploring the characteristics of different LU/LC types. Spatio-temporal patterns of LU/LC change in Ibb Governorate, Yemen, were analyzed based on RS and GIS for 1990, 2005, and 2020. A socioeconomic survey and key informant interviews were used to assess potential drivers of LU/LC change. The results showed that from 1990 to 2020, the total area of vegetated land decreased by 5.3%, while the areas of barren land, grassland, built-up area, and waterbody increased by 2.7%, 1.6%, 1.04%, and 0.06%, respectively. Based on the socio-economic surveys and key informant interviews, natural factors had a significant long-term impact on land change, whereas site construction and socio-economic factors were the main driving forces on short time scales. The analysis results were linked to a CA-Markov land-use simulation and forecasting model for the years 2035 and 2050. The simulation revealed that, over the period 2020 to 2050, the total area of barren land is projected to decrease by 7.0% and grassland by 0.2%, while vegetated land, built-up area, and waterbody increase by 4.6%, 2.6%, and 0.1%, respectively. Overall, these findings document LU/LC's past and future trends and identify drivers, which can play an important role in sustainable land-use planning and management by balancing and coordinating urban growth and land use, and can also serve as a reference at the regional level.
In addition, the results provide scientific guidance to government departments and local decision-makers for future land-use planning through dynamic monitoring of LU/LC change.
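As a hedged sketch of the Markov half of a CA-Markov model (the cellular-automata spatial allocation step is omitted, and these class shares and transition probabilities are hypothetical, not the study's calibrated values), future class areas follow from repeatedly applying a transition matrix to the current class shares:

```python
def project(shares, transition, steps):
    """Project land-cover class shares forward with a Markov transition matrix.

    shares: current fraction of total area per class.
    transition[i][j]: probability that class i converts to class j in one step.
    Each row of `transition` must sum to 1, so total area is conserved.
    """
    k = len(shares)
    for _ in range(steps):
        shares = [sum(shares[i] * transition[i][j] for i in range(k))
                  for j in range(k)]
    return shares

# Hypothetical 3-class example: [vegetation, barren, built-up]
T = [[0.90, 0.05, 0.05],   # vegetation mostly persists
     [0.10, 0.85, 0.05],   # some barren land revegetates
     [0.00, 0.00, 1.00]]   # built-up area is effectively irreversible
future = project([0.5, 0.4, 0.1], T, steps=2)
```

Because each transition row sums to one, the projected shares still sum to one after any number of steps; in this toy matrix the absorbing built-up class grows monotonically, mirroring the urban-expansion trend the abstract describes.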

Keywords: LU/LC change, CA-Markov model, driving forces, change detection, LU/LC change simulation

Procedia PDF Downloads 38
394 Direct Current Electric Field Stimulation against PC12 Cells in 3D Bio-Reactor to Enhance Axonal Extension

Authors: E. Nakamachi, S. Tanaka, K. Yamamoto, Y. Morita

Abstract:

In this study, we developed a three-dimensional (3D) direct current electric field (DCEF) stimulation bio-reactor for axonal outgrowth enhancement, aimed at generating neural networks of the central nervous system (CNS). Using our newly developed 3D DCEF stimulation bio-reactor, we cultured rat pheochromocytoma (PC12) cells and investigated the effects on axonal extension and network generation. Firstly, we designed and fabricated a 3D bio-reactor that can apply DCEF stimulation to PC12 cells embedded in collagen gel as the extracellular environment. The electrolyte and the medium were connected through salt bridges to avoid cell death from metal ion toxicity. The distance between the salt bridges was adopted as the design variable to optimize the structure for uniform DCEF stimulation, using finite element (FE) analysis results. Uniform DCEF strength and electric flux vector direction in the PC12 cells embedded in collagen gel were examined through measurements in the fabricated 3D bio-reactor chamber; the measured DCEF strengths showed good agreement with the FE results. In addition, a perfusion system was attached to maintain the medium at pH 7.2-7.6, because DCEF stimulation loading caused pH changes. Secondly, we seeded PC12 cells in collagen gel and carried out 3D culture. Finally, we measured the morphology of PC12 cell bodies and neurites by multiphoton excitation fluorescence microscopy (MPM) and investigated the effectiveness of DCEF stimulation in enhancing axonal outgrowth and neural network generation. We confirmed both an increase in mean axonal length and an increased axogenesis rate in PC12 cells exposed to 5 mV/mm for 6 hours a day for 4 days in the bioreactor. We draw the following conclusions from our study. 1) The design and fabrication of a DCEF stimulation bio-reactor capable of 3D nerve cell culture were completed.
A uniform electric field with an average strength of 17 mV/mm within a 1.2% error range was confirmed using FE analyses, after the structure was determined through the optimization process. In addition, we attached a perfusion system capable of suppressing the pH change of the culture medium due to DCEF stimulation loading. 2) The effects of DCEF stimulation on PC12 cell activity were evaluated. The 3D culture of PC12 was carried out using the embedding culture method with collagen gel as a scaffold for four days under conditions of 5.0 mV/mm and 10 mV/mm. There was a significant effect on the enhancement of axonal extension, namely an 11.3% increase in average length and an increase in the axogenesis rate. On the other hand, no effect of the DCEF flux direction on axon orientation was observed. Further, network generation was enhanced, with connections formed over longer distances between neighboring target cells under DCEF stimulation.

Keywords: PC12, DCEF stimulation, 3D bio-reactor, axonal extension, neural network generation

Procedia PDF Downloads 173
393 The Impact of Anxiety on the Access to Phonological Representations in Beginning Readers and Writers

Authors: Regis Pochon, Nicolas Stefaniak, Veronique Baltazart, Pamela Gobin

Abstract:

Anxiety is known to have an impact on working memory. In reasoning or memory tasks, individuals with anxiety tend to show longer response times and poorer performance, and there is a memory bias for negative information in anxiety. Given the crucial role of working memory in lexical learning, anxious students may encounter more difficulties in learning to read and spell. Anxiety could even affect an earlier stage of learning, namely the activation of phonological representations, which are decisive for learning to read and write. The aim of this study is to compare access to phonological representations in beginning readers and writers according to their level of anxiety, using an auditory lexical decision task. Eighty students aged 6 to 9 years completed the French version of the Revised Children's Manifest Anxiety Scale and were then divided into four anxiety groups according to their total score (Low, Median-Low, Median-High, and High). Two sets of eighty-one stimuli (words and non-words) were presented auditorily to these students by means of a laptop computer. Stimulus words were selected according to their emotional valence (positive, negative, neutral). Students had to decide as quickly and accurately as possible whether the presented stimulus was a real word or not (lexical decision). Response times and accuracy were recorded automatically on each trial. We anticipated (a) longer response times for the Median-High and High anxiety groups in comparison with the two other groups, (b) faster response times for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups, (c) lower response accuracy for the Median-High and High anxiety groups in comparison with the two other groups, and (d) better response accuracy for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups.
Concerning response times, our results showed no difference between the four groups; furthermore, within each group, the average response times were very similar regardless of emotional valence. Group differences did appear, however, in the error rates: the Median-High and High anxiety groups made significantly more errors in lexical decision than the Median-Low and Low groups. Better response accuracy for negative-valence words relative to positive- and neutral-valence words was not found in the Median-High and High anxiety groups. Thus, these results show lower response accuracy for above-median anxiety groups than for below-median groups, but without specificity for negative-valence words. This study suggests that anxiety can negatively impact lexical processing in young students: although lexical processing speed seems preserved, the accuracy of this processing may be altered in students with a moderate or high level of anxiety. This finding has important implications for the prevention of reading and spelling difficulties. Indeed, if anxiety affects access to phonological representations during these learnings, anxious students could be disturbed when they have to match phonological representations with new orthographic representations, because of less efficient lexical representations. This study should be continued in order to specify the impact of anxiety on basic school learning.
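As a hedged sketch of the two dependent measures in this design (mean response time on correct trials and error rate per anxiety group; the function and trial tuples below are illustrative, not the study's data or software):

```python
from collections import defaultdict

def summarize(trials):
    """Per-group mean response time (correct trials only) and error rate.

    trials: iterable of (group, rt_ms, correct) tuples, one per lexical
    decision trial. Returns {group: (mean_rt_ms, error_rate)}.
    """
    rts = defaultdict(list)     # RTs of correct trials per group
    errors = defaultdict(int)   # error count per group
    counts = defaultdict(int)   # total trials per group
    for group, rt, correct in trials:
        counts[group] += 1
        if correct:
            rts[group].append(rt)
        else:
            errors[group] += 1
    return {g: (sum(rts[g]) / len(rts[g]), errors[g] / counts[g])
            for g in counts}

# Hypothetical trials: (anxiety group, response time in ms, correct?)
data = [("Low", 820, True), ("Low", 790, True),
        ("High", 815, True), ("High", 900, False)]
stats = summarize(data)
```

The toy data mirror the abstract's pattern: the High group's mean RT on correct trials is comparable to the Low group's, but its error rate is higher.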

Keywords: anxiety, emotional valence, childhood, lexical access

Procedia PDF Downloads 276
392 Prevalence of Chlamydia Trachomatis Infection in Multiple Anatomical Sites among Patients at Stis Center, Thailand

Authors: Siwimol Phoomniyom, Pathom Karaipoom, Rossaphorn Kittyaowaman

Abstract:

Background: C. trachomatis is the most common bacterial sexually transmitted infection. Although infection with C. trachomatis can be treated with antibiotics, it is frequently asymptomatic, especially at extragenital sites. Hence, when screening tests are not performed, infections remaining undetected and untreated are a crucial problem, especially in Thailand, where C. trachomatis infection is less well studied. We sought to assess the prevalence of C. trachomatis infection at multiple anatomical sites among patients attending the Bangrak STIs Center. Methods: We examined the laboratory results of all patients at the baseline visit from 3 January 2018 to 27 December 2019. Samples were tested by a validated in-house real-time PCR specific for the cryptic plasmid gene of C. trachomatis. The prevalence of C. trachomatis was analyzed by anatomical site, sex, and age. Urogenital samples were obtained from urethral swabs of men and cervical swabs of women. The median age of the patients was 32 years (range 13-89 years). The chi-square test, performed with IBM SPSS Statistics version 20, was used to assess differences in the distribution of variables between groups. Results: Among 3,789 patients, the prevalence of C. trachomatis infection was highest at the rectal site (16.1%), followed by the urogenital (11.2%) and pharyngeal (3.5%) sites. Rectal and urogenital infection rates were higher in men than in women, with the highest prevalence of 16.6% at the rectal site; both sites showed statistically significant differences between sexes (P<0.001). Meanwhile, the pharyngeal C. trachomatis infection rate was higher in women than in men. Interestingly, the chlamydia prevalence was highest at ages 13-19 years for all three sites (18.5%, urogenital; 17.7%, rectal; 6.5%, pharyngeal), with statistically significant differences between age groups (P<0.001). Of a total of 45 C. trachomatis infections, 20.0%, 51.1%, and 6.7% were isolated from urogenital, rectal, and pharyngeal sites, respectively.
In total, 75.6%, 26.7%, and 80.0% of chlamydia infections would have been missed if only urogenital, rectal, or pharyngeal screening, respectively, had been performed. Conclusions: The highest source of C. trachomatis infection was the rectal site. While the highest prevalence in men was at the rectal site, that in women was at the urogenital site. The highest chlamydia prevalence was found in the adolescent age group, indicating that the pediatric population is a high-risk group. This finding also elucidates that a high proportion of C. trachomatis infections would be missed if only a single anatomical site were screened, especially when extragenital sites are omitted. Hence, extragenital screening is also required for comprehensive C. trachomatis detection.
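As a hedged sketch of the chi-square comparison reported here (the abstract gives p-values but not the underlying counts, so the table below is hypothetical), a Pearson chi-square for a 2×2 table has a closed form, and its one-degree-of-freedom p-value can be obtained from the standard library via the complementary error function:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for the 2x2 table [[a, b], [c, d]] and its
    p-value (1 degree of freedom, no continuity correction)."""
    n = a + b + c + d
    x2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Survival function of a chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(x2 / 2))
    return x2, p

# Hypothetical counts: rectal infection (yes/no) in men vs. women
x2, p = chi_square_2x2(a=400, b=2000, c=20, d=1369)
```

With these illustrative counts the difference between sexes is overwhelming (P far below 0.001), matching the direction of the abstract's finding; for a balanced table such as `chi_square_2x2(50, 50, 50, 50)` the statistic is 0 and the p-value is 1.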

Keywords: chlamydia trachomatis, anatomical sites, sexes, ages

Procedia PDF Downloads 53
391 Screening Tools and Its Accuracy for Common Soccer Injuries: A Systematic Review

Authors: R. Christopher, C. Brandt, N. Damons

Abstract:

Background: The sequence of prevention model states that by constant assessment of injury, injury mechanisms and risk factors are identified, highlighting that collecting and recording of data is a core approach for preventing injuries. Several screening tools are available for use in the clinical setting. These screening techniques only recently received research attention, hence there is a dearth of inconsistent and controversial data regarding their applicability, validity, and reliability. Several systematic reviews related to common soccer injuries have been conducted; however, none of them addressed the screening tools for common soccer injuries. Objectives: The purpose of this study was to conduct a review of screening tools and their accuracy for common injuries in soccer. Methods: A systematic scoping review was performed based on the Joanna Briggs Institute procedure for conducting systematic reviews. Databases such as SPORT Discus, Cinahl, Medline, Science Direct, PubMed, and grey literature were used to access suitable studies. Some of the key search terms included: injury screening, screening, screening tool accuracy, injury prevalence, injury prediction, accuracy, validity, specificity, reliability, sensitivity. All types of English studies dating back to the year 2000 were included. Two blind independent reviewers selected and appraised articles on a 9-point scale for inclusion as well as for the risk of bias with the ACROBAT-NRSI tool. Data were extracted and summarized in tables. Plot data analysis was done, and sensitivity and specificity were analyzed with their respective 95% confidence intervals. I² statistic was used to determine the proportion of variation across studies. Results: The initial search yielded 95 studies, of which 21 were duplicates, and 54 excluded. A total of 10 observational studies were included for the analysis: 3 studies were analysed quantitatively while the remaining 7 were analysed qualitatively. 
Seven studies were graded as low and three as high risk of bias. Only studies of high methodological quality (score ≥ 9) were included in the analysis. The pooled studies investigated tools such as the Functional Movement Screen (FMS™), the Landing Error Scoring System (LESS), the Tuck Jump Assessment, the Soccer Injury Movement Screen (SIMS), and the conventional hamstrings-to-quadriceps ratio. The screening tools showed high reliability, sensitivity, and specificity (ICC 0.68, 95% CI: 0.52-0.84; and 0.64, 95% CI: 0.61-0.66, respectively; I² = 13.2%, P = 0.316). Conclusion: Based on the pooled results of the included studies, the FMS™ has good inter-rater and intra-rater reliability. The FMS™ is a screening tool capable of screening for common soccer injuries, and individual FMS™ scores are a better determinant of performance than the overall FMS™ score. Although a meta-analysis could not be performed for all the included screening tools, qualitative analysis also indicated good sensitivity and specificity of the individual tools. Higher levels of evidence are, however, needed before implementation in evidence-based practice.
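The pooled-estimate and heterogeneity calculations described above (inverse-variance pooling with 95% confidence intervals and the I² statistic) can be sketched in a few lines. The per-study sensitivities and variances below are illustrative placeholders, not the review's data:

```python
import math

def pooled_fixed_effect(estimates, variances):
    """Inverse-variance (fixed-effect) pooled estimate, 95% CI, and I-squared."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    # Cochran's Q and I^2: the share of variation across studies
    # attributable to heterogeneity rather than chance
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, i2

# Illustrative per-study sensitivities and their variances (not the study's data)
sens = [0.62, 0.66, 0.64]
var = [0.0004, 0.0005, 0.0003]
pooled, ci, i2 = pooled_fixed_effect(sens, var)
```

When Q falls below its degrees of freedom, I² is floored at zero, which is read as no detectable heterogeneity across studies.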

Keywords: accuracy, screening tools, sensitivity, soccer injuries, specificity

Procedia PDF Downloads 158
390 Evaluation of Ocular Changes in Hypertensive Disorders of Pregnancy

Authors: Rajender Singh, Nidhi Sharma, Aastha Chauhan, Meenakshi Barsaul, Jyoti Deswal, Chetan Chhikara

Abstract:

Introduction: Pre-eclampsia and eclampsia are hypertensive disorders of pregnancy with multisystem involvement and are common causes of morbidity and mortality in obstetrics. It is believed that changes in the retinal arterioles may indicate similar changes in the placenta. Therefore, this study was undertaken to evaluate the ocular manifestations in cases of pre-eclampsia and eclampsia and to deduce any association between the retinal changes and blood pressure, severity of disease, gravidity, proteinuria, and other laboratory parameters, so that a better approach could be devised to ensure maternal and fetal well-being. Materials and Methods: This was a hospital-based cross-sectional study conducted over a period of one year, from April 2021 to May 2022. 350 admitted patients with diagnosed pre-eclampsia, eclampsia, or pre-eclampsia superimposed on chronic hypertension were included in the study. A pre-structured proforma was used. After taking consent and an ocular history, a bedside examination was performed to record visual acuity, pupillary size, corneal curvature, field of vision, and intraocular pressure. Dilated fundus examination was done with direct and indirect ophthalmoscopes. Age, parity, blood pressure, proteinuria, platelet count, and liver and kidney function tests were recorded. Only the patients with positive findings were followed up 72 hours and 6 weeks after termination of pregnancy. Results: The mean age of patients was 26.18±4.33 years (range 18-39 years). 157 (44.9%) were primigravida, while 193 (55.1%) were multigravida. 53 (15.1%) patients had eclampsia, 128 (36.5%) had mild pre-eclampsia, 128 (36.5%) had severe pre-eclampsia, and 41 (11.7%) had chronic hypertension with superimposed pre-eclampsia. Retinal changes were found in 208 patients (59.42%), and grade I changes were the most common: 82 (23.14%) patients had grade I changes, 75 (21.4%) had grade II changes, 41 (11.71%) had grade III changes, and 11 (3.14%) had serous retinal detachment (grade IV changes).
36 patients had unaided visual acuity <6/9; of these, 17 had refractive error and 19 (5.4%) had varying degrees of retinal changes. 3 (0.85%) of the 350 patients had an abnormal field of vision in both eyes; all 3 had eclampsia and bilateral exudative retinal detachment. At day 4, retinopathy had resolved in 10 patients, and 3 patients had improvement in visual acuity. At 6 weeks, retinopathy had resolved spontaneously in all patients, except for persistent grade II changes in 23 patients with chronic hypertension with superimposed pre-eclampsia, while visual acuity and field of vision returned to normal in all patients. Pupillary size, intraocular pressure, and corneal curvature were within normal limits at all examinations. There was a statistically significant positive association between retinal changes and severity of disease (p < 0.05) and mean arterial pressure (p < 0.005). Primigravida had more retinal changes than multigravida patients. A significant association was also found between fundus changes and thrombocytopenia and deranged liver and kidney function tests (p < 0.005). Conclusion: As the severity of pre-eclampsia and eclampsia increases, the incidence of retinopathy also increases, and it affects the visual acuity and visual fields of the patients. Thus, timely ocular examination should be done in all such cases to prevent complications.
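Associations of this kind (e.g. retinal changes versus gravidity) are typically tested with a Pearson chi-square on a contingency table. The sketch below uses the study's marginal totals (157 primigravida, 193 multigravida, 208 patients with retinal changes) but a hypothetical cell split, so the counts and the resulting statistic are illustrative only:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]], no continuity correction."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical split (rows: primigravida / multigravida,
# columns: retinal changes present / absent); row and column totals
# match the abstract, but the individual cells are assumed.
chi2 = chi_square_2x2(105, 52, 103, 90)
significant = chi2 > 3.841  # 5% critical value for 1 degree of freedom
```

A statistic above the 3.841 critical value corresponds to p < 0.05 for a 2x2 table, matching the kind of threshold reported in the abstract.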

Keywords: eclampsia, hypertensive, ocular, pre-eclampsia

Procedia PDF Downloads 61
389 Improving the Biomechanical Resistance of a Treated Tooth via Composite Restorations Using Optimised Cavity Geometries

Authors: Behzad Babaei, B. Gangadhara Prusty

Abstract:

The objective of this study is to assess the hypothesis that a restored tooth with a class II occlusal-distal (OD) cavity can be strengthened by designing an optimised cavity geometry and by selecting a composite restoration with optimised elastic moduli when there is a sharp de-bonded edge at the interface of the tooth and restoration. Methods: A scanned human maxillary molar tooth was segmented into dentine and enamel parts. The dentine and enamel profiles were extracted and imported into finite element (FE) software. The enamel rod orientations were estimated virtually. Fifteen models of the restored tooth with different occlusal cavity depths (1.5, 2, and 2.5 mm) and internal cavity angles were generated. Using a semi-circular stone part, a 400 N load was applied at two contact points of the restored tooth model. The junctions between the enamel, dentine, and restoration were considered perfectly bonded. All parts in the model were considered homogeneous, isotropic, and elastic. Quadrilateral and triangular elements were employed in the models. A mesh convergence analysis was conducted to verify that the element count did not influence the simulation results; according to a criterion of 5% error in the stress, a total of over 14,000 elements resulted in convergence of the stress. A Python script was employed to automatically assign moduli of 2-22 GPa (in increments of 4 GPa) to the composite restorations, 18.6 GPa to the dentine, and two different elastic moduli to the enamel (72 GPa in the direction of the enamel rods and 63 GPa in the perpendicular direction). Linear, homogeneous, and elastic material models were used for the dentine, enamel, and composite restorations. 108 FEA simulations were conducted in succession. Results: The internal cavity angle (α) significantly altered the peak maximum principal stress at the interface of the enamel and restoration.
The strongest structures against the contact loads were observed in the models with α = 100° and 105°. Interestingly, even when the directional mechanical properties of the enamel rods were disregarded, the models with α = 100° and 105° exhibited the highest resistance to the mechanical loads. Regarding the effect of occlusal cavity depth, the models with 1.5 mm depth showed higher resistance to contact loads than the models with deeper cavities (2.0 and 2.5 mm). Moreover, composite moduli in the range of 10-18 GPa alleviated the stress levels in the enamel. Significance: For the class II OD cavity models in this study, the optimal geometries, composite properties, and occlusal cavity depths were determined. Designing the cavities with α ≥ 100° was significantly effective in minimising peak stress levels. A composite restoration with optimised properties reduced the stress concentrations at critical points of the models. Additionally, when more enamel was preserved, the enamel-restoration interface was sturdier against the mechanical loads.
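The batch set-up described in the methods, a script that sweeps cavity variants and assigns material properties for each FEA run, can be sketched as below. The candidate internal angles are assumptions for illustration, since the abstract does not list them; the moduli match those stated in the text:

```python
from itertools import product

# Sketch of the parameter sweep behind the FEA batch; angle values are assumed.
depths_mm = [1.5, 2.0, 2.5]
angles_deg = [90, 95, 100, 105, 110]      # assumed candidate internal cavity angles
composite_moduli_gpa = range(2, 23, 4)    # 2, 6, 10, 14, 18, 22 GPa, as in the abstract

jobs = [
    {"depth_mm": d, "angle_deg": a, "E_composite_gpa": e,
     "E_dentine_gpa": 18.6,               # dentine modulus from the study
     "E_enamel_gpa": (72.0, 63.0)}        # along / perpendicular to enamel rods
    for d, a, e in product(depths_mm, angles_deg, composite_moduli_gpa)
]
```

Each entry in `jobs` would then be handed to the FE solver as one simulation; with these assumed angles the sweep produces 90 combinations, so the study's 108 runs imply a somewhat different variant set.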

Keywords: dental composite restoration, cavity geometry, finite element approach, maximum principal stress

Procedia PDF Downloads 87
388 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

The refurbishment of public buildings is one of the key factors of the energy efficiency policy of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design and for becoming exemplar cases within the community. In this context, this paper discusses the critical issues of the energy refurbishment of a university building in the heating-dominated climate of southern Italy. In more detail, the importance of using validated models is examined exhaustively through an analysis of the uncertainties due to modelling assumptions, mainly the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today most commercial tools provide designers with a library of predefined schedules with which thermal zones can be described. Very often, users do not take care to differentiate thermal zones or to modify or adapt the predefined profiles, and the design results are affected, positively or negatively, without any warning. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest sources of variability in energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized, conventional schedules, with important consequences for the prediction of energy consumption. The problem is certainly difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the one presented here, where there is no regulation system for the HVAC system and thus the occupants cannot interact with it.
Starting from the adopted schedules, which were created from questionnaire responses and allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: first, the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas request; then the different consumption entries are analyzed and, for the more interesting cases, the calibration indexes are also compared. Moreover, the same simulations are performed for the optimal refurbishment solution, and the variation in the predicted energy savings and global cost reduction is highlighted. This parametric study aims to underline the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes.
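The first comparison, the percentage difference in projected energy need between the calibrated reference and each schedule scenario, reduces to a simple signed-deviation calculation. The scenario names and kWh figures below are illustrative placeholders, not the case-study results:

```python
# Percentage difference of each schedule scenario vs. the calibrated reference.
def pct_diff(scenario, reference):
    """Signed percentage difference of a scenario relative to the reference."""
    return 100.0 * (scenario - reference) / reference

reference_kwh = 182_000.0            # calibrated reference model (illustrative)
scenarios_kwh = {                    # alternative deterministic schedules (illustrative)
    "standard_office_schedule": 205_500.0,
    "halved_occupancy": 164_300.0,
    "continuous_lighting": 214_800.0,
}
deviations = {name: pct_diff(v, reference_kwh) for name, v in scenarios_kwh.items()}
```

Positive deviations flag schedules that would have over-predicted consumption relative to the calibrated model, negative ones under-prediction; this is the quantity tabulated scenario by scenario in the paper.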

Keywords: energy simulation, modelling calibration, occupant behavior, university building

Procedia PDF Downloads 128
387 Abilitest Battery: Presentation of Tests and Psychometric Properties

Authors: Sylwia Sumińska, Łukasz Kapica, Grzegorz Szczepański

Abstract:

Introduction: Cognitive skills are a crucial part of everyday functioning. They include perception, attention, language, memory, executive functions, and higher cognitive skills. As societies age, an increasing percentage of people experience a decline in cognitive skills. Cognitive skills affect work performance, and an appropriate diagnosis of a worker's cognitive skills reduces the risk of errors and accidents at work, which is particularly important for senior workers. The study aimed to prepare new cognitive tests for adults aged 20-60 and to assess their psychometric properties. The project responds to the need for reliable and accurate methods of assessing cognitive performance. Computer tests were developed to assess psychomotor performance, attention, and working memory. Method: Two hundred eighty people aged 20-60 will participate in the study, in 4 age groups. Inclusion criteria were: no subjective cognitive impairment and no history of severe head injury, chronic disease, or psychiatric or neurological disease. The research will be conducted from February to June 2022. Cognitive tests: 1) measurement of psychomotor performance: reaction time, reaction time with a selective attention component; 2) measurement of sustained attention: visual search (dots), visual search (numbers); 3) measurement of working memory: remembering words, remembering letters. To assess validity and reliability, subjects will perform the Vienna Test System tests, i.e., the "Reaction Test" (reaction time), "Signal Detection" (sustained attention), and "Corsi Block-Tapping Test" (working memory), as well as the Perception and Attention Test (TUS), the Colour Trails Test (CTT), and Digit Span, a subtest of the Wechsler Adult Intelligence Scale. Eighty people will be invited to a second session after three months to assess consistency over time. Results: As the research is ongoing, detailed results from the 280 participants will be presented at the conference, separately for each age group.
Results of the correlation analysis with the Vienna Test System will be presented as well.
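The three-month retest session described above is the classic set-up for test-retest reliability, often summarized by the correlation between the two sessions' scores. A minimal sketch, with illustrative reaction-time scores rather than study data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation, e.g. between session-1 and session-2 scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative reaction times (ms) at the first session and three months later
session1 = [412, 388, 455, 430, 401, 470, 395, 440]
session2 = [420, 380, 462, 425, 410, 458, 390, 447]
r = pearson_r(session1, session2)
```

In practice a test-retest study would report an intraclass correlation rather than Pearson's r, since the ICC also penalizes systematic shifts between sessions; the Pearson form is shown here only because it is self-contained.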

Keywords: aging, attention, cognitive skills, cognitive tests, psychomotor performance, working memory

Procedia PDF Downloads 90
386 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley

Abstract:

Structural integrity is a key performance parameter for cladding products, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure customer safety but give little information about the critical behaviours that can help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem: it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using both Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated thermal data, reflecting the fire's behaviour, into the FEA solver over a series of iterations. Our recent work with Tata Steel U.K., using a two-way coupling methodology to determine fire performance, has shown that a program called FDS-2-Abaqus can predict a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of Tata Steel U.K.'s Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous works demonstrated the limitations of the current version of the program, the main one being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which involves a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K. PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin relative to experimental data.
One avenue explored is a multi-scale approach in the form of reduced order modelling (ROM). The approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies are made between the new implementations and the previous study completed with the original FDS-2-Abaqus program. Validation comes from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.
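A common ingredient of such reduced order models is proper orthogonal decomposition (POD), in which snapshots of the full-order field are compressed to a handful of dominant modes. The abstract does not specify the ROM technique used, so the sketch below, on synthetic snapshot data, is one plausible approach rather than the paper's method:

```python
import numpy as np

# Toy POD sketch: compress snapshots of a 1-D "temperature field" to the
# few modes that capture almost all of the energy, then reconstruct.
rng = np.random.default_rng(0)
n_points, n_snapshots = 200, 30
x = np.linspace(0.0, 1.0, n_points)
# Synthetic snapshot matrix: two smooth physical modes plus small noise
snapshots = (np.outer(np.sin(np.pi * x), rng.random(n_snapshots))
             + 0.1 * np.outer(np.sin(3 * np.pi * x), rng.random(n_snapshots))
             + 1e-4 * rng.standard_normal((n_points, n_snapshots)))

u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
rank = int(np.searchsorted(energy, 0.999)) + 1   # modes holding 99.9% of energy
basis = u[:, :rank]                              # reduced basis

# Project one snapshot into the reduced space and reconstruct it
coeffs = basis.T @ snapshots[:, 0]
reconstruction = basis @ coeffs
err = np.linalg.norm(reconstruction - snapshots[:, 0]) / np.linalg.norm(snapshots[:, 0])
```

The payoff for the coupled fire model is that the FEA state can be evolved in the low-dimensional coefficient space instead of on the full costly mesh, while still resolving details folded into the basis.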

Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 60
385 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index remains a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into highly and weakly absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of truncated singular value decomposition as the regularization method. This method was adapted to work for the retrieval of the particle size distribution (PSD) function and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be amplified hugely during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration.
Here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel-processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%; in more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error for non- and weakly absorbing particles with real parts 1.5 and 1.6, the accuracy limit of ±0.03 is achieved in all modes. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
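The core idea of truncated-SVD regularization, discarding the small singular values that amplify measurement noise, can be demonstrated on a generic ill-conditioned linear system. The Hilbert-like kernel, noise level, and truncation index below are illustrative stand-ins for the actual lidar kernel and parameter-choice rules:

```python
import numpy as np

# Minimal TSVD sketch for an ill-posed linear problem A p = b,
# standing in for the lidar kernel equation mapping the PSD to optical data.
rng = np.random.default_rng(1)
n = 20
a = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix
p_true = np.ones(n)                           # "true" size distribution (illustrative)
b_noisy = a @ p_true + 1e-6 * rng.standard_normal(n)  # small measurement error

u, s, vt = np.linalg.svd(a)

def tsvd_solve(k):
    """Keep the k largest singular values; drop the noise-amplifying rest."""
    return vt[:k].T @ ((u[:, :k].T @ b_noisy) / s[:k])

p_naive = np.linalg.solve(a, b_noisy)   # unregularized: error explodes
p_tsvd = tsvd_solve(5)                  # truncation index = regularization parameter

err_naive = np.linalg.norm(p_naive - p_true) / np.linalg.norm(p_true)
err_tsvd = np.linalg.norm(p_tsvd - p_true) / np.linalg.norm(p_true)
```

Choosing the truncation index is exactly the parameter-selection problem the abstract highlights; the hybrid scheme described above manages a triple of such parameters rather than the single index used in this sketch.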

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 331