Search results for: seismic response feature
5675 An Approach to Low Velocity Impact Damage Modelling of Variable Stiffness Curved Composite Plates
Authors: Buddhi Arachchige, Hessam Ghasemnejad
Abstract:
In this study, the post-impact behavior of curved composite plates subjected to low-velocity impact was studied analytically and numerically. Approaches to damage modelling are proposed in which the stiffness of the damaged region is degraded by reducing the thickness of that region. Spring-mass models were used to model the impact response of the plate and impactor. Two damage models were designed in order to determine which best fitted the numerical results. The theoretical force-time responses were compared with numerical results obtained through a detailed study carried out in LS-DYNA. The modified damage model gave a good prediction of the analytical force-time response for different layups and geometries. This study provides a gateway for selecting the most effective layups for variable stiffness curved composite panels able to withstand higher impact damage.
Keywords: analytical modelling, composite damage, impact, variable stiffness
Procedia PDF Downloads 277
5674 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site conditions. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially in subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitudes 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. This database was chosen because of the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. 
The accuracy of the models in predicting intensity measures, the generalization capability of the models for future data, and the usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, and, in particular, Random Forest outperforms the other algorithms. However, the conventional method is the better tool when limited data are available.
Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
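For contrast with the machine-learning models, the conventional approach fixes a functional form in advance. A minimal sketch of such a linear-regression-style ground-motion prediction equation is below; the function name and coefficient values are illustrative placeholders, not values fitted to the Oklahoma/Kansas/Texas database described in the abstract.

```python
import math

def gmpe_ln_pga(magnitude, hypocentral_km, c0=-4.0, c1=1.2, c2=-1.5):
    """Toy ground-motion prediction equation of the classic pre-defined form:
    ln(PGA) = c0 + c1*M + c2*ln(R). Coefficients are illustrative only."""
    return c0 + c1 * magnitude + c2 * math.log(hypocentral_km)

# Physically sound behavior the abstract mentions: magnitude scaling
# (larger M -> stronger shaking) and distance dependency (larger R -> weaker).
near = gmpe_ln_pga(5.0, 10.0)
far = gmpe_ln_pga(5.0, 100.0)
```

A fitted machine-learning model must recover these trends from data rather than from the pre-defined equation.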
Procedia PDF Downloads 122
5673 Prediction Modeling of Compression Properties of a Knitted Sportswear Fabric Using Response Surface Method
Authors: Jawairia Umar, Tanveer Hussain, Zulfiqar Ali, Muhammad Maqsood
Abstract:
Different knitted structures and knitting parameters play a vital role in the stretch and recovery management of compression sportswear, in addition to the materials used to generate this stretch and recovery behavior of the fabric. The present work was planned to predict different performance indicators of a compression sportswear fabric from two ground parameters: the stitch length of the base yarn (polyester as the base yarn and spandex as the plating yarn together form the compression fabric) and the linear density of the spandex, which is a key material of any sportswear fabric. Prediction models were generated by the response surface method for performance indicators such as stretch and recovery percentage, compression generated by the garment on the body, total elongation on application of a high force, and load generated at a given percentage extension of the fabric. Certain physical properties of the fabric were also modeled using these two parameters.
Keywords: compression, sportswear, stretch and recovery, statistical model, kikuhime
Procedia PDF Downloads 379
5672 Chaotic Response of Electrical Insulation System with Gaseous Dielectric under High AC and DC Voltages
Authors: Arijit Basuray
Abstract:
It is well known that if an electrical insulation system is stressed under high voltage, discharge may occur in various forms, and if the system is made of a composite dielectric having interfaces between materials of different dielectric constants, discharge may occur due to a gross mismatch of dielectric constants causing an intense local field at the interfaces. Here, the author has studied, firstly, the behavior of discharges in a gaseous dielectric circuit under AC and DC voltages. A gaseous dielectric circuit is made such that a pair of electrodes of typical geometry causes the discharges to occur under applied AC and DC voltages. Later, a composite insulation system with an air gap is also studied. The discharge response of the dielectric circuit is measured across a typically designed impedance. The time evolution of the discharge characteristics showed some interesting chaotic behavior. The author proposes an analysis of this behavior of the discharge pattern and discusses the possibility of representing such a discharge circuit as a lumped electric circuit.
Keywords: electrical insulation system, EIS, composite dielectric, discharge, chaos
Procedia PDF Downloads 176
5671 Improving Security by Using Secure Servers Communicating via Internet with Standalone Secure Software
Authors: Carlos Gonzalez
Abstract:
This paper describes the use of the Internet to enhance the security of software that is to be distributed or sold to users potentially all over the world. By placing some features of the secure software on a secure server, we increase the security of the software as a whole. Communication between the protected software and the secure server uses a double-lock algorithm. The paper also includes an analysis of intruders and describes possible responses for detecting threats.
Keywords: internet, secure software, threats, cryptography process
Procedia PDF Downloads 333
5670 A Handheld Light Meter Device for Methamphetamine Detection in Oral Fluid
Authors: Anindita Sen
Abstract:
Oral fluid is a promising diagnostic matrix for drugs of abuse compared to urine and serum. Detection of methamphetamine in oral fluid would pave the way for easy evaluation of impairment in drivers during roadside drug testing, as well as ensure safe working environments by facilitating evaluation of impairment in employees at workplaces. A membrane-based, point-of-care (POC) friendly pre-treatment technique has been developed that eliminates interferences caused by salivary proteins, enabling the demonstration of methamphetamine detection in saliva using a gold nanoparticle-based colorimetric aptasensor platform. It was found that the colorimetric response in saliva was always suppressed owing to matrix effects. By navigating these challenging interference issues in saliva, we were able to detect methamphetamine at nanomolar levels, offering immense promise for the translation of these platforms to on-site diagnostic systems. This subsequently motivated the development of a handheld portable light meter device that can reliably transduce the aptasensor's colorimetric response into absorbance, facilitating quantitative detection of analyte concentrations on-site. This is crucial given the prevalent unreliability and sensitivity problems of conventional drug testing kits. The fabricated light meter device's response was validated against a standard UV-Vis spectrometer to confirm reliability. The portable and cost-effective handheld detector device features sensitivity comparable to the well-established UV-Vis benchtop instrument, and the easy-to-use device could potentially serve as a prototype for a commercial device in the future.
Keywords: aptasensors, colorimetric gold nanoparticle assay, point-of-care, oral fluid
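The conversion such a light meter performs, from transmitted light intensity to absorbance, follows the standard Beer-Lambert relation. A minimal sketch (the function name and detector readings are hypothetical; the paper's firmware details are not given in the abstract):

```python
import math

def absorbance(sample_intensity, blank_intensity):
    """Beer-Lambert absorbance from photodetector readings:
    A = -log10(I_sample / I_blank)."""
    return -math.log10(sample_intensity / blank_intensity)

# A tenfold drop in transmitted intensity corresponds to 1 absorbance unit.
a_unit = absorbance(10.0, 100.0)
```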
Procedia PDF Downloads 59
5669 A Pole Radius Varying Notch Filter with Transient Suppression for Electrocardiogram
Authors: Ramesh Rajagopalan, Adam Dahlstrom
Abstract:
Noise removal techniques play a vital role in the performance of electrocardiographic (ECG) signal processing systems. ECG signals can be corrupted by various kinds of noise, such as baseline wander, electromyographic interference, and power-line interference. One of the significant challenges in ECG signal processing is the degradation caused by additive 50 or 60 Hz power-line interference. This work investigates the removal of power-line interference and the suppression of the transient response when filtering noise-corrupted ECG signals. We demonstrate the effectiveness of an Infinite Impulse Response (IIR) notch filter with a time-varying pole radius for improving the transient behavior: temporarily reducing the pole radius of the filter diminishes the transient. Simulation results show that the proposed IIR filter with a time-varying pole radius outperforms traditional IIR notch filters in terms of mean square error and transient suppression.
Keywords: notch filter, ECG, transient, pole radius
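The time-varying pole radius idea can be sketched as a second-order difference equation whose zeros stay fixed on the unit circle at the interference frequency while the pole radius is ramped up over the first samples. This is a sketch of the general technique, assuming a linear ramp; the specific radii, ramp length, and sampling rate below are illustrative, not the paper's values.

```python
import math

def varying_pole_notch(x, fs=360.0, f0=60.0, r_start=0.6, r_final=0.99, ramp=100):
    """Second-order IIR notch at f0 Hz: zeros on the unit circle, pole radius
    ramped from r_start to r_final over `ramp` samples to shorten the transient."""
    w0 = 2 * math.pi * f0 / fs
    b = [1.0, -2.0 * math.cos(w0), 1.0]  # fixed zeros at e^{+-j*w0}
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):
        r = r_final if n >= ramp else r_start + (r_final - r_start) * n / ramp
        a1, a2 = -2.0 * r * math.cos(w0), r * r  # poles track the zeros at radius r
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y

# A pure 60 Hz tone should be strongly attenuated once the transient settles.
fs = 360.0
sig = [math.sin(2 * math.pi * 60.0 * n / fs) for n in range(2000)]
out = varying_pole_notch(sig, fs=fs)
```

The small initial radius gives a short impulse response (fast settling); the final radius close to 1 gives the narrow steady-state notch.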
Procedia PDF Downloads 377
5668 Online Dietary Management System
Authors: Kyle Yatich Terik, Collins Oduor
Abstract:
The current healthcare system has made healthcare more accessible and efficient through information technology, including computer algorithms that generate menus based on a diagnosis. While many such systems have been created over the years, their main objective is to help healthy individuals calculate their calorie intake and assist them by providing food selections based on a pre-specified calorie target. Such applications have proven useful in some ways, but they are not suitable for monitoring, planning, and managing hospital patients, especially those in critical condition whose dietary needs require close attention. The main objective of this work is to design, develop, and implement an efficient, user-friendly, and interactive dietary management system. The specific design and development objectives include: a monitoring feature for users based on graphs; system-generated reports for users, dietitians, and system admins; a feature allowing users to measure their BMI (Body Mass Index); and a food template feature that guides the user toward a balanced diet plan. To inform the design, research was carried out in Nairobi County, Kenya, using online questionnaires as the research design approach. Responses from the 44 respondents highlighted the major challenges of manual dietary systems, including the lack of easily accessible calorie information for food products and the expense of physically visiting a dietitian to create a tailored diet plan. 
In conclusion, the system has the potential to improve people's quality of life by providing a standard for healthy living and giving individuals readily available knowledge through food templates that guide them and allow users to create their own balanced diet plans.
Keywords: DMS, dietitian, patient, administrator
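The BMI feature mentioned above reduces to a one-line computation. A minimal sketch with the standard WHO adult cut-offs (how the described system actually presents categories is an assumption):

```python
def bmi(weight_kg, height_m):
    """Body Mass Index = weight / height^2, in kg/m^2."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    """WHO adult classification of a BMI value."""
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal"
    if value < 30.0:
        return "overweight"
    return "obese"

# Example: 70 kg at 1.75 m.
example = bmi(70, 1.75)
```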
Procedia PDF Downloads 161
5667 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation serves as a motivating starting point. In this work, the authors extend NF neural networks to the case where an external x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variation in y not related, or less related, to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretability of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variation, due to factors such as lighting condition and subject id, from the other random variation. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction
Procedia PDF Downloads 96
5666 GAILoc: Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence
Authors: Getaneh Berie Tarekegn
Abstract:
A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common way of providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). However, due to non-line-of-sight conditions, multipath, and weather, GNSS does not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of the site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, the numerical results proved that, in comparison to traditional methods, the proposed GAILoc method can significantly improve positioning performance and reduce radio map construction costs.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
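The fingerprinting idea underlying such schemes is easy to sketch: an offline radio map stores a signal fingerprint per reference position, and the online phase returns the position whose fingerprint best matches the observation. This is a sketch of plain deterministic nearest-neighbor matching, not the paper's GAN-based method; the access-point count and RSS values are invented for illustration.

```python
import math

# Toy offline radio map: reference position (x, y) in meters -> RSS fingerprint
# over three access points, in dBm. Values are illustrative only.
radio_map = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-55, -52, -75],
    (0.0, 5.0): [-60, -72, -50],
    (5.0, 5.0): [-68, -58, -54],
}

def locate(observed_rss):
    """Return the reference position whose stored fingerprint is nearest
    (Euclidean distance in signal space) to the online observation."""
    return min(radio_map, key=lambda p: math.dist(radio_map[p], observed_rss))
```

The generative-model contribution in the abstract targets the offline phase: synthesizing fingerprints so fewer survey points are needed to populate such a map.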
Procedia PDF Downloads 75
5665 Estimation of the Effect of Initial Damping Model and Hysteretic Model on Dynamic Characteristics of Structure
Authors: Shinji Ukita, Naohiro Nakamura, Yuji Miyazu
Abstract:
In considering the dynamic characteristics of a structure, natural frequency and damping ratio are useful indicators. When performing dynamic design, it is necessary to select an appropriate initial damping model and hysteretic model. In the linear region, the choice of initial damping model influences the response, and in the nonlinear region, the combination of initial damping model and hysteretic model influences the response. However, the dynamic characteristics of structures in the nonlinear region remain unclear. In this paper, we studied the effect of the initial damping model and hysteretic model settings on the dynamic characteristics of a structure. For the initial damping model, initial-stiffness-proportional, tangent-stiffness-proportional, and Rayleigh-type damping were used. For the hysteretic model, the TAKEDA model and the Normal-trilinear model were used. As a study method, dynamic analysis was performed using a base-fixed lumped-mass model. During the analysis, the maximum acceleration of the input earthquake motion was gradually increased from 1 to 600 gal. The dynamic characteristics were calculated using the ARX model, and the 1st and 2nd natural frequencies and the 1st damping ratio were evaluated. The input earthquake motion was a simulated wave published by the Building Center of Japan. For the building model, an RC building with a 30×30 m plan on each floor was assumed. The story height was 3 m and the total height 18 m, with a unit weight of 1.0 t/m² per floor. The building's natural period was set to 0.36 s, and the initial stiffness of each floor was calculated by assuming the 1st mode to be an inverted triangle. First, we investigated how the dynamic characteristics differ with the initial damping model setting. With increasing maximum acceleration of the input earthquake motions, the 1st and 2nd natural frequencies decreased and the 1st damping ratio increased. 
For the natural frequencies, the difference due to the initial damping model setting was small, but in the damping ratio a significant difference was observed (initial stiffness proportional ≒ Rayleigh type > tangent stiffness proportional). The acceleration and displacement of the earthquake response were largest for the tangent-stiffness-proportional model. In the range where the acceleration response increased, the damping ratio was constant; in the range where the acceleration response was constant, the damping ratio increased. Next, we investigated how the dynamic characteristics differ with the hysteretic model setting. With increasing maximum acceleration of the input earthquake motions, the natural frequency decreased in the TAKEDA model, but in the Normal-trilinear model it did not change. The damping ratio in the TAKEDA model was higher than that in the Normal-trilinear model, although the damping ratio increased in both models. In conclusion, among the initial damping model settings, the tangent-stiffness-proportional model was rated most appropriate, and among the hysteretic model settings, the TAKEDA model was more appropriate than the Normal-trilinear model in the nonlinear region. Our results provide a useful indicator for dynamic design.
Keywords: initial damping model, damping ratio, dynamic analysis, hysteretic model, natural frequency
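Of the initial damping settings compared above, the Rayleigh type is constructed from two anchor frequencies so that the modal damping ratio hits target values at both. A minimal sketch of that standard construction follows; the 5% target and the second anchor frequency are assumptions for illustration (only the 0.36 s fundamental period, i.e. about 2.78 Hz, comes from the abstract).

```python
import math

def rayleigh_coefficients(f1_hz, f2_hz, zeta1, zeta2):
    """Return (alpha, beta) for Rayleigh damping C = alpha*M + beta*K, so that
    the modal damping ratio zeta_i = alpha/(2*w_i) + beta*w_i/2 matches the
    targets at the two anchor frequencies (solved by Cramer's rule)."""
    w1, w2 = 2 * math.pi * f1_hz, 2 * math.pi * f2_hz
    det = w2 / (4 * w1) - w1 / (4 * w2)
    alpha = (zeta1 * w2 / 2 - zeta2 * w1 / 2) / det
    beta = (zeta2 / (2 * w1) - zeta1 / (2 * w2)) / det
    return alpha, beta

def modal_damping(alpha, beta, f_hz):
    """Damping ratio produced at an arbitrary frequency by the alpha/beta pair."""
    w = 2 * math.pi * f_hz
    return alpha / (2 * w) + beta * w / 2

# Anchor at the fundamental mode (~1/0.36 s) and an assumed higher mode.
alpha, beta = rayleigh_coefficients(2.78, 8.0, 0.05, 0.05)
```

Between the two anchors the resulting damping ratio dips below the target, and above/below them it grows, which is one reason the choice of damping model matters in the nonlinear range.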
Procedia PDF Downloads 1785664 A Case of Prosthetic Vascular-Graft Infection Due to Mycobacterium fortuitum
Authors: Takaaki Nemoto
Abstract:
Case presentation: A 69-year-old Japanese man presented with a low-grade fever and fatigue that had persisted for one month. The patient had had an aortic dissection of the aortic arch 13 years prior, an abdominal aortic aneurysm seven years prior, and an aortic dissection of the distal aortic arch one year prior, all treated with artificial blood-vessel replacement surgery. Laboratory tests revealed an inflammatory response (CRP 7.61 mg/dL), high serum creatinine (Cr 1.4 mg/dL), and elevated transaminases (AST 47 IU/L, ALT 45 IU/L). The patient was admitted to our hospital on suspicion of prosthetic vascular graft infection. Following further workup of the inflammatory response, enhanced chest computed tomography (CT) and non-enhanced chest diffusion-weighted MRI (DWI) were performed. The patient was diagnosed with a pulmonary fistula and a prosthetic vascular graft infection of the distal aortic arch. After admission, the patient was administered Ceftriaxone and Vancomycin for 10 days, but his fever and inflammatory response did not improve. On day 13 of hospitalization, lung fistula repair surgery and an omental filling operation were performed, and Meropenem and Vancomycin were administered. The fever and inflammatory response continued, and we therefore took repeated blood cultures. M. fortuitum was detected in a blood culture on day 16 of hospitalization. As a result, we changed the treatment regimen to Amikacin (400 mg/day), Meropenem (2 g/day), and Cefmetazole (4 g/day), and the fever and inflammatory response began to decrease gradually. A susceptibility test for Mycobacterium fortuitum showed that the MIC for fluoroquinolone antibacterial agents was low. The clinical course was good, and the patient was discharged after a total of 8 weeks of intravenous drug administration. 
At discharge, we changed the treatment regimen to Levofloxacin (500 mg/day) and Clarithromycin (800 mg/day), prescribing these two drugs as lifelong suppressive therapy. Discussion: There are few cases of prosthetic vascular graft infection caused by mycobacteria, and a standard therapy remains to be established. For prosthetic vascular graft infections, it is ideal to provide surgical and medical treatment in parallel, but in this case surgical treatment was difficult, and conservative treatment was therefore chosen. We attempted to increase the treatment success rate for this refractory disease by conducting a susceptibility test for mycobacteria and treating with different combinations of antimicrobial agents, which was ultimately effective. With our treatment approach, a good clinical course was obtained and continues at present. Conclusion: Although prosthetic vascular graft infection due to mycobacteria is a refractory infectious disease, it may be curable with appropriate antibiotics chosen on the basis of susceptibility testing, in addition to surgical treatment where feasible.
Keywords: prosthetic vascular graft infection, lung fistula, Mycobacterium fortuitum, conservative treatment
Procedia PDF Downloads 156
5663 Characterization of Himalayan Phyllite with Reference to Foliation Planes
Authors: Divyanshoo Singh, Hemant Kumar Singh, Kumar Nilankar
Abstract:
Major engineering constructions and foundations (e.g., dams, tunnels, bridges, underground caverns) in and around the Himalayan region of Uttarakhand are not confined to hard, crystalline rocks but also extend into weak and anisotropic rocks. While constructing within such anisotropic rocks, engineers often encounter geotechnical complications such as structural instability, slope failure, and excessive deformation. These complexities arise mainly from inherent anisotropy, such as layering/foliation, preferred mineral orientation, and geo-mechanical anisotropy, present within the rocks, and they vary when measured in different directions. Of these, major geotechnical complexities arise mainly from unfavorable orientation of weak planes (bedding/foliation): the orientation of such planes strongly affects the fracture patterns, failure mechanism, and strength of the rock. This motivates an improved understanding of the physico-mechanical behavior of anisotropic rocks with different orientations of weak planes. Therefore, in this study, block samples of phyllite belonging to the Chandpur Group of the Lesser Himalaya were collected from the Srinagar area of Uttarakhand, India, to investigate the effect of foliation angle on the physico-mechanical properties of the rock. The collected block samples were core-drilled to a diameter of 50 mm at different foliation angles β (the angle between the foliation plane and the drilling direction), i.e., 0°, 30°, 60°, and 90°. Before testing, the drilled core samples were oven-dried at 110 °C to achieve uniformity. Physical and mechanical properties such as seismic wave velocity, density, uniaxial compressive strength (UCS), point load strength (PLS), and Brazilian tensile strength (BTS) were determined on the prepared core specimens. The results indicate that the seismic wave velocities (P-wave and S-wave) decrease with increasing β angle. 
As the β angle increases, the number of foliation planes that the wave must pass through increases, causing dissipation of wave energy with increasing β. Maximum strength for UCS, PLS, and BTS was found at a β angle of 90°. However, the minimum strength for UCS and BTS was found at a β angle of 30°, which differs from PLS, where the minimum strength was found at a β angle of 0°. Furthermore, the failure modes correspond to the strength of the rock: failure along foliation and non-central failure are characteristic of low strength values, while multiple fractures and central failure are characteristic of high strength values. Thus, this study provides a better understanding of the anisotropic features of phyllite for the purpose of major engineering constructions and foundations within the Himalayan region.
Keywords: anisotropic rocks, foliation angle, physico-mechanical properties, phyllite, Himalayan region
Procedia PDF Downloads 59
5662 Transcriptome Analysis of Dry and Soaked Tomato (Solanum lycopersicum) Seeds in Response to Fast Neutron Irradiation
Authors: Yujie Zhou, Hee-Seong Byun, Sang-In Bak, Eui-Joon Kil, Kyung Joo Min, Vivek Chavan, Won Kyong Cho, Sukchan Lee, Seung-Woo Hong, Tae-Sun Park
Abstract:
Fast neutron irradiation (FNI) can cause mutations in plant genomes, but in most cases the irradiated plants have not shown significant phenotypic changes. In this study, we utilized RNA-Seq to generate a high-resolution transcriptome map of the tomato (Solanum lycopersicum) genome as affected by FNI. To quantify the different transcription levels in tomato irradiated by FNI, tomato seeds were irradiated using the MC-50 cyclotron (KIRAMS, Korea) for 0, 30, and 90 minutes, respectively. To investigate the effects of the pre-soaking condition, the experimental groups were divided into dry and soaked seeds, the latter soaked for 8 hours before irradiation. There was no noticeable difference in the percentage germination (PG) among dry seeds, while irradiated soaked seeds had about 10% lower PG compared to the unirradiated control group. Using whole-transcriptome sequencing on a HiSeq 2000, we analyzed the differential gene expression in response to different durations of FNI in dry and soaked seeds. More than 1.4 million base-pair reads were mapped onto the tomato reference genome, and the expression pattern differences between irradiated and unirradiated seeds were assessed. For 0, 30, and 90 minutes of irradiation, 12,135, 28,495, and 28,675 transcripts were generated, respectively. Gene ontology analysis suggested different enrichment of transcripts involved in the response to different FNI durations. The present study showed that FNI affects plant gene expression, which can provide new parameters for evaluating plant responses to FNI. In addition, the comparative analysis of differentially expressed genes in dry and soaked seeds after FNI offers an opportunity to explore novel candidate genes for FNI response, making this a good model system for understanding the mechanisms behind plant adaptation in space biology research.
Keywords: tomato (Solanum lycopersicum), fast neutron irradiation, RNA-sequence, transcriptome expression
Procedia PDF Downloads 319
5661 Analysis of Surface Hardness, Surface Roughness and near Surface Microstructure of AISI 4140 Steel Worked with Turn-Assisted Deep Cold Rolling Process
Authors: P. R. Prabhu, S. M. Kulkarni, S. S. Sharma, K. Jagannath, Achutha Kini U.
Abstract:
In the present study, response surface methodology has been used to optimize the turn-assisted deep cold rolling process for AISI 4140 steel. A regression model is developed to predict surface hardness and surface roughness using response surface methodology and a central composite design. In the development of the predictive model, deep cold rolling force, ball diameter, initial roughness of the workpiece, and number of tool passes are considered as model variables. The rolling force and ball diameter are the significant factors for surface hardness, while ball diameter and number of tool passes are significant for surface roughness. The predicted surface hardness and surface roughness values, together with verification experiments under the optimal operating conditions, confirmed the validity of the predictive model. The absolute average error between the experimental and predicted values at the optimal combination of parameter settings is 0.16% for surface hardness and 1.58% for surface roughness. Using the optimal processing parameters, the hardness improved from 225 to 306 HV, an increase in the near-surface hardness of about 36%, and the surface roughness improved from 4.84 µm to 0.252 µm, a decrease of about 95%. The depth of compression is found to be more than 300 µm from the microstructure analysis, which correlates with the results obtained from the microhardness measurements. A Taylor Hobson Talysurf tester, a micro-Vickers hardness tester, optical microscopy, and an X-ray diffractometer were used to characterize the modified surface layer.
Keywords: hardness, response surface methodology, microstructure, central composite design, deep cold rolling, surface roughness
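The response-surface fitting step described above amounts to least-squares estimation of a second-order polynomial over a central composite design. A minimal sketch on coded factors follows; the design points are the standard two-factor CCD layout, but the factor names and response values are synthetic placeholders, not the paper's measurements.

```python
import numpy as np

# Coded two-factor central composite design: 4 factorial points,
# 4 axial points (at +-1.41), and a center point.
F = np.array([-1, -1, 1, 1, -1.41, 1.41, 0, 0, 0], dtype=float)  # rolling force (coded)
D = np.array([-1, 1, -1, 1, 0, 0, -1.41, 1.41, 0], dtype=float)  # ball diameter (coded)

# Synthetic hardness-like response generated from a known quadratic surface.
y = 300 + 20 * F + 8 * D + 5 * F * D - 6 * F**2 - 3 * D**2

# Second-order RSM model: y = b0 + b1*F + b2*D + b12*F*D + b11*F^2 + b22*D^2
X = np.column_stack([np.ones_like(F), F, D, F * D, F**2, D**2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With noiseless synthetic data, the least-squares fit recovers the generating coefficients exactly; with real measurements, the same design supports significance tests on each term, which is how factors such as rolling force and ball diameter are judged significant.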
Procedia PDF Downloads 420
5660 A Case Study of Response to Dual Genotype Chronic Hepatitis C/HIV Co-Infection to Fixed Dose Sofosbuvir/Ledipasvir
Authors: Tabassum Yasmin, Hamid Pahlevan
Abstract:
HIV/Hepatitis C co-infection treatments have evolved substantially, and they now achieve sustained virologic response rates similar to those of the Hepatitis C mono-infected population. There are few studies on therapy for patients with dual genotypes, especially in the HIV/Hepatitis C co-infected group; most are case reports of dual-genotype chronic Hepatitis C co-infection treated with Sofosbuvir/Ledipasvir and Ribavirin. A 79-year-old male with a history of HIV on Truvada and Isentress had chronic Hepatitis C with genotypes 1a and 2. The patient has a 40-year history of alcohol intake but recently stopped drinking. He has a history of intravenous drug use in the past and is not currently using any recreational drugs. The patient has a Fibro score of 0.7 with a Metavir score of F2 to F4. AFP is 3.2. The HCV RNA is 493,034 IU/mL. HBV DNA is <1.30 (not detected). The CD4 count is 687 cells/mm³. The FIB-4 is 3.34 with an APRI index of 0.717. The HIV viral load is 101 copies/mL. MRI of the abdomen did not show any liver abnormality. Fixed-dose Sofosbuvir/Ledipasvir was used for therapy without Ribavirin. He tolerated the medication except for some minor gastrointestinal side effects such as abdominal bloating, and demonstrated a 100% adherence rate. The patient completed 12 weeks of therapy. HCV RNA was undetectable at 4 and 12 weeks. He achieved SVR at week 12 and subsequently had undetectable RNA for 2 years. Dual-genotype prevalence in the chronic hepatitis C population is rare, especially in HIV/Hepatitis C co-infection. Our case demonstrates that dual-genotype cases can still be successfully treated with direct-acting antiviral agents. The newer pan-genotypic agents were not available at the time the patient was treated; we demonstrated that this regimen was nevertheless able to achieve and maintain SVR in our patient.
Keywords: HIV/Hepatitis C, SVR (sustained virologic response), DAA (direct-acting antiviral agents), dual genotype
Procedia PDF Downloads 1965659 Induction of Hsp70 and Antioxidant Status in Porcine Granulosa Cells in Response to Deoxynivalenol and Zearalenone Exposure in vitro
Authors: Marcela Capcarova, Adriana Kolesarova, Marina Medvedova, Peter Petruska, Alexander V. Sirotkin
Abstract:
The aim of this study was to determine the activity of superoxide dismutase (SOD), glutathione peroxidase (GPx), total antioxidant status (TAS), and the accumulation of Hsp70 in porcine ovarian granulosa cells after deoxynivalenol (DON) and zearalenone (ZEA) exposure in vitro. Porcine ovarian granulosa cells were incubated with DON/ZEA additions as follows: group A (10/10 ng/mL), group B (100/100 ng/mL), group C (1000/1000 ng/mL), and a control group without any additions, for 24 h. In this study, the mycotoxins induced a stress reaction in porcine ovarian granulosa cells and increased the accumulation of Hsp70, which resulted in increased activities of SOD and GPx in the groups with lower doses of mycotoxins. The high dose of DON and ZEA had the opposite effect on GPx activity compared with the lower doses. A slight increase in the TAS of porcine granulosa cells was observed after mycotoxin exposure. These results contribute towards the understanding of cellular stress and its response.Keywords: deoxynivalenol, zearalenone, antioxidants, Hsp70, granulosa cells
Procedia PDF Downloads 2565658 Evaluating Factors Affecting Audiologists’ Diagnostic Performance in Auditory Brainstem Response Reading: Training and Experience
Authors: M. Zaitoun, S. Cumming, A. Purcell
Abstract:
This study aims to determine whether audiologists' experience characteristics in ABR (Auditory Brainstem Response) reading are associated with their performance in interpreting ABR results. Fifteen ABR traces with varying degrees of hearing loss were each presented twice, for a total of 30. Audiologists were asked to determine the hearing threshold for each of the cases after completing a brief survey regarding their experience and training in ABR administration. Sixty-one audiologists completed all tasks. Correlations between audiologists’ performance measures and experience variables suggested significant associations (p < 0.05) between the training period in ABR testing and audiologists’ performance in terms of both sensitivity and accuracy. In addition, the number of years conducting ABR testing correlated with specificity. No other correlations approached significance. While there are relatively few significant correlations between ABR performance and experience, accuracy in ABR reading is associated with audiologists’ length of experience and period of training. To improve audiologists’ performance in reading ABR results, the importance of training should be emphasized, and standardized levels and durations of audiologist training in ABR testing should be set.Keywords: ABR, audiology, performance, training, experience
Procedia PDF Downloads 1665657 Comparison of FNTD and OSLD Detectors' Responses to Light Ion Beams Using Monte Carlo Simulations and Experimental Data
Authors: M. R. Akbari, H. Yousefnia, A. Ghasemi
Abstract:
Al2O3:C,Mg fluorescent nuclear track detectors (FNTD) and Al2O3:C optically stimulated luminescence detectors (OSLD) are two increasingly applied detectors in ion dosimetry. The response of these detectors to hadron beams is therefore of high interest in radiation therapy (RT) using ion beams. In this study, the responses of these detectors to proton and Helium-4 ion beams were compared using Monte Carlo simulations. The calculated data for proton beams were compared with Markus ionization chamber (IC) measurements (in a water phantom) from the M.D. Anderson proton therapy center. Monte Carlo simulations were performed with the FLUKA code (version 2011.2-17). The detectors were modeled in cylindrical shape at various depths of the water phantom, without shading each other, to obtain the relative depth dose in the phantom. Mono-energetic parallel ion beams of different incident energies (100 MeV/n to 250 MeV/n) impinged perpendicularly on the phantom surface. For proton beams, the results showed that the simulated detectors over-respond relative to the IC measurements in the water phantom. In all cases, there was good agreement between the simulated ion ranges in water and the calculated and experimental results reported in the literature. For protons, the maximum peak-to-entrance dose ratio in the simulated water phantom was 4.3, compared with about 3 obtained from the IC measurements. For He-4 ion beams, the maximum peak-to-entrance ratio calculated by both detectors was less than 3.6 at all energies. Generally, it can be said that FLUKA is a good tool for calculating the responses of Al2O3:C,Mg FNTD and Al2O3:C OSLD detectors to therapeutic proton and He-4 ion beams. It can also calculate proton and He-4 ion ranges with reasonable accuracy.Keywords: comparison, FNTD and OSLD detectors response, light ion beams, Monte Carlo simulations
Procedia PDF Downloads 3435656 Improving the Performances of the nMPRA Architecture by Implementing Specific Functions in Hardware
Authors: Ionel Zagan, Vasile Gheorghita Gaitan
Abstract:
Minimizing the response time to asynchronous events in a real-time system is an important factor in increasing the speed of response and an interesting concept in designing equipment fast enough for the most demanding applications. This article presents results on the validation of the nMPRA (Multi Pipeline Register Architecture) architecture using the FPGA Virtex-7 circuit. The nMPRA concept is a hardware processor with the scheduler implemented at the processor level; this is done without affecting a possible bus communication, as is the case with other CPU solutions. The implementation of static and dynamic scheduling operations in hardware, and the improved handling of interrupts and events by the real-time executive described in this article, represent a key solution for eliminating the overhead of operating system functions. The nMPRA processor is capable of executing preemptive scheduling, using various algorithms, without a software scheduler. We therefore also present various scheduling methods and algorithms used in scheduling real-time tasks.Keywords: nMPRA architecture, pipeline processor, preemptive scheduling, real-time system
Procedia PDF Downloads 3685655 Social Media Marketing Efforts and Hospital Brand Equity: An Empirical Investigation
Authors: Abrar R. Al-Hasan
Abstract:
Despite the widespread use of social media by consumers and marketers, empirical research investigating its economic value in the healthcare industry still lags. This study explores the impact of social media marketing efforts on a hospital's brand equity and, ultimately, consumer response. Using social media data from Twitter and Facebook, along with an online and offline survey methodology, the data are analyzed using logistic regression models. A random sample of 728 Kuwaiti residents is used. The results of this study show that social media marketing efforts (SMME), in terms of use and validation, lead to higher hospital brand equity and, in turn, patient loyalty and patient visits. The study highlights the impact of SMME on hospital brand equity and patient response. Healthcare organizations should guide their marketing efforts to better manage this new way of marketing and communicating with patients, so as to enhance consumer loyalty and financial performance.Keywords: brand equity, healthcare marketing, patient visit, social media, SMME
Procedia PDF Downloads 1735654 The Development, Validation, and Evaluation of the Code Blue Simulation Module in Improving the Code Blue Response Time among Nurses
Authors: Siti Rajaah Binti Sayed Sultan
Abstract:
Managing a code blue event is stressful for nurses, the patient, and the patient's family. A rapid response from the first and second responders in a code blue event improves patient outcomes and prevents the tissue hypoxia that leads to brain injury and other organ failures. Initiating cardiac massage within 1 minute and defibrillation within 2 minutes significantly improves patient outcomes. The American Heart Association has issued guidelines for managing cardiac arrest patients, and hospitals must provide competent staff to manage this situation. This can be achieved when the staff are well equipped with the skill, attitude, and knowledge to manage the event, supported by well-planned strategies: clear guidelines for managing the code blue event, competent staff, and functional equipment. Code blue simulation (CBS) was chosen for the code blue management training program because it can mimic real scenarios. A code blue simulation module allows staff to appreciate what they will face during a code blue event, especially in areas where such events rarely happen. CBS module training helps staff familiarize themselves with the activities that occur during actual events and enables them to operate the equipment accordingly. Staff who are confident and independent in managing a code blue in its early phase give the patient a better outcome. The CBS module also provides the assessor and the hospital management team with the proper tools and guidelines for running code blue drills. Prompt action benefits the patient and their family. It also indirectly increases confidence and job satisfaction among nurses, raises the standard of care, reduces complications and hospital burden, and enhances cost-effective care.Keywords: code blue simulation module, development of code blue simulation module, code blue response time, code blue drill, cardiorespiratory arrest, managing code blue
Procedia PDF Downloads 675653 Radar Track-based Classification of Birds and UAVs
Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo
Abstract:
In recent years, the number of Unmanned Aerial Vehicles (UAVs) has increased significantly. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of their applications, these vehicles pose a serious threat to civil and military installations: detection, classification, and neutralization of such flying objects have become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios characterized by a high number of tracks from flying birds make the drone detection task especially challenging: the operator's PPI is cluttered with a huge number of potential threats, and reaction time can be severely affected. Flying birds and UAVs show similar velocities, radar cross-sections, and, in general, similar characteristics. Since no single feature can distinguish UAVs from birds, this paper uses a multiple-feature approach in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds from UAVs. Radar tracks acquired in the field from different UAVs and birds performing various trajectories were used to extract specifically designed target movement-related features based on velocity, trajectory, and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression…), both in terms of the number of selected features and the misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with a suitable classification accuracy (higher than 95%).Keywords: birds, classification, machine learning, UAVs
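The genetic-algorithm feature selection described above can be sketched in miniature. The following Python snippet is an illustrative toy, not the authors' implementation: the four track features, their numeric ranges, and the nearest-centroid classifier standing in for the paper's classifiers are all assumptions, chosen only to show how a GA evolves binary feature masks scored by classification accuracy with a penalty on subset size.

```python
import random

random.seed(0)

# Toy track features: [mean speed (m/s), RCS proxy, trajectory curvature, signal strength].
# Values are hypothetical; real features would be extracted from radar tracks.
birds = [[12 + random.gauss(0, 2), 0.01, 0.8 + random.gauss(0, 0.1),
          5 + random.gauss(0, 1)] for _ in range(40)]
drones = [[14 + random.gauss(0, 2), 0.02, 0.2 + random.gauss(0, 0.1),
           9 + random.gauss(0, 1)] for _ in range(40)]
X = birds + drones
y = [0] * 40 + [1] * 40          # 0 = bird, 1 = drone

def centroid(cls, feats):
    rows = [x for x, t in zip(X, y) if t == cls]
    return [sum(r[i] for r in rows) / len(rows) for i in feats]

def accuracy(mask):
    """Resubstitution accuracy of a nearest-centroid classifier
    using only the features enabled in the binary mask."""
    feats = [i for i, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    cents = {c: centroid(c, feats) for c in (0, 1)}
    correct = 0
    for x, t in zip(X, y):
        d = {c: sum((x[i] - v) ** 2 for i, v in zip(feats, cents[c]))
             for c in (0, 1)}
        correct += int(min(d, key=d.get) == t)
    return correct / len(X)

def fitness(mask):
    # Trade accuracy against subset size, as in the paper's dual objective.
    return accuracy(mask) - 0.01 * sum(mask)

# Genetic algorithm over 4-bit feature masks: elitist selection,
# one-point crossover, bit-flip mutation.
pop = [[random.randint(0, 1) for _ in range(4)] for _ in range(12)]
for _ in range(25):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:6]
    children = []
    for _ in range(6):
        a, b = random.sample(parents, 2)
        cut = random.randint(1, 3)
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:
            child[random.randrange(4)] ^= 1
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("selected feature mask:", best, "accuracy:", round(accuracy(best), 2))
```

With the synthetic data above, the GA typically keeps the curvature feature, which separates the two classes best, and discards the redundant ones.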
Procedia PDF Downloads 2225652 Covariate-Adjusted Response-Adaptive Designs for Semi-Parametric Survival Responses
Authors: Ayon Mukherjee
Abstract:
Covariate-adjusted response-adaptive (CARA) designs use the available responses to skew the treatment allocation in a clinical trial towards the treatment found at an interim stage to be best for a given patient's covariate profile. Extensive research has been done on various aspects of CARA designs with the patient responses assumed to follow a parametric model. However, the range of application of such designs is limited in real-life clinical trials, where the responses infrequently fit a particular parametric form. On the other hand, robust estimates of the covariate-adjusted treatment effects are obtained under the parametric assumption. To balance these two requirements, designs are developed that are free from distributional assumptions about the survival responses, relying only on the assumption of proportional hazards for the two treatment arms. The proposed designs are developed by deriving two types of optimum allocation designs, and also by using a distribution function to link the past allocation, covariate, and response histories to the present allocation. The optimal designs are based on biased coin procedures, with a bias towards the better treatment arm. These are the doubly-adaptive biased coin design (DBCD) and the efficient randomized adaptive design (ERADE). The treatment allocation proportions for these designs converge to the expected target values, which are functions of the Cox regression coefficients that are estimated sequentially. These expected target values are derived from constrained optimization problems and are updated as information accrues with the sequential arrival of patients. The design based on the link function is derived using the distribution function of a probit model whose parameters are adjusted based on the covariate profile of the incoming patient.
To apply such designs, the treatment allocation probabilities are sequentially modified based on the treatment allocation history, the response history, previous patients’ covariates, and the covariates of the incoming patient. Given this information, an expression is obtained for the conditional probability of allocating a patient to a treatment arm. Based on simulation studies, the ERADE is preferable to the DBCD when the main aim is to minimize the variance of the observed allocation proportion and to maximize the power of the Wald test for a treatment difference. However, the former procedure, being discrete, tends to converge more slowly towards the expected target allocation proportion. The link-function-based design achieves the highest skewness of patient allocation to the best treatment arm and is thus ethically the best design. Other comparative merits of the proposed designs are highlighted, and their preferred areas of application are discussed. It is concluded that the proposed CARA designs can be considered suitable alternatives to traditional balanced randomization designs in survival trials in terms of the power of the Wald test, provided that response data are available during the recruitment phase of the trial to enable adaptations to the designs. Moreover, the proposed designs enable more patients to be treated with the better treatment during the trial, thus making the designs more ethically attractive to patients. An existing clinical trial has been redesigned using these methods.Keywords: censored response, Cox regression, efficiency, ethics, optimal allocation, power, variability
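As an illustration of the link-function idea, the sketch below computes a probit-based allocation probability from a hypothetical estimated log hazard ratio and a covariate-by-treatment interaction. The function names, coefficients, and scale parameter are assumptions for demonstration; the actual design in the study links the full allocation, covariate, and response histories to the allocation probability.

```python
import math

def norm_cdf(z):
    """Standard normal CDF -- the distribution function of the probit model."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def allocation_prob(beta_trt, beta_interact, x, scale=1.0):
    """Probability of allocating the incoming patient to arm A (a sketch).

    beta_trt: estimated log hazard ratio of arm A versus arm B under the Cox
    model (negative values favour arm A); beta_interact: covariate-by-treatment
    interaction; x: the incoming patient's covariate. The probit link skews
    allocation toward the arm with the lower estimated hazard.
    """
    return norm_cdf(-(beta_trt + beta_interact * x) / scale)

# Arm A estimated to roughly halve the hazard (log HR about -0.69) for x = 0:
print(round(allocation_prob(-0.69, 0.0, 0.0), 3))
# A neutral interim estimate gives equal allocation:
print(round(allocation_prob(0.0, 0.0, 0.0), 3))  # 0.5
```

The sequential aspect of the design enters by re-estimating the Cox coefficients after each patient and feeding the updated values back into this probability.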
Procedia PDF Downloads 1655651 How Hormesis Impacts Practice of Ecological Risk Assessment and Food Safety Assessment
Authors: Xiaoxian Zhang
Abstract:
The guidelines for ecological risk assessment (ERA) and food safety assessment (FSA) used nowadays are based on an S-shaped threshold dose-response curve (SDR) and fail to consider hormesis, a reproducible biphasic dose-response model represented as a J-shaped or an inverted U-shaped curve, which occurs in the real-life environment across multitudinous compounds and acts on cells, organisms, populations, and even ecosystems. Specifically, in SDR-based ERA and FSA practice, the predicted no effect concentration (PNEC) is calculated separately for each substance by dividing the no observed effect concentration (NOEC, usually equivalent to the 10% effect concentration (EC10) of a contaminant or food condiment) by an assessment factor greater than 1. Experienced researchers have suspected that hormesis in the real-life environment might lead to a waste of limited human and material resources in ERA and FSA practice, but related data are scarce. In this study, hormetic effects of sulfachloropyridazine (SCP) on the bioluminescence of Aliivibrio fischeri (A. f) were investigated under 40 conditions simulating the real-life scenario, and hormetic effects of brown sugar and muscovado sugar on the growth of human MCF-7 cells were likewise found. Comparison of the related parameters proved, for the first time, that there is a 50% probability for the safe concentration (SC) of contaminants and food condiments to fall within the hormetic stimulatory range (HSR) or to its left, revealing the unreliability of traditional parameters in standardized (eco)toxicological studies and supporting, qualitatively and quantitatively, the over-strictness of ERA and FSA resulting from misuse of the SDR. This study provides a novel perspective for ERA and FSA practitioners: hormesis should dominate, and conditions where the SDR works should only be singled out on a case-specific basis.Keywords: dose-response relationship, food safety, ecological risk assessment, hormesis
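The SDR-based PNEC computation criticized above is simple enough to state in code. This is a sketch of the standard calculation, with hypothetical numbers (an EC10 of 100 µg/L, an assessment factor of 10, and an HSR of 5-50 µg/L); the values are not taken from the study's data.

```python
def pnec(noec, assessment_factor):
    """Predicted no effect concentration: NOEC (or EC10) divided by an
    assessment factor greater than 1, as in SDR-based ERA/FSA practice."""
    if assessment_factor < 1:
        raise ValueError("assessment factor must be at least 1")
    return noec / assessment_factor

def falls_in_hsr(sc, hsr_low, hsr_high):
    """Does the derived safe concentration land inside the hormetic
    stimulatory range? This is the situation the study quantifies."""
    return hsr_low <= sc <= hsr_high

# Hypothetical example: EC10 = 100 ug/L, assessment factor = 10, HSR = 5-50 ug/L.
sc = pnec(100.0, 10.0)
print(sc, falls_in_hsr(sc, 5.0, 50.0))  # 10.0 True -- the SC falls inside the HSR
```

When the SC lands inside (or left of) the HSR, the "safe" level sits in a region of stimulatory response, which is the over-strictness argument made in the abstract.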
Procedia PDF Downloads 1465650 An Estimation of Rice Output Supply Response in Sierra Leone: A Nerlovian Model Approach
Authors: Alhaji M. H. Conteh, Xiangbin Yan, Issa Fofana, Brima Gegbe, Tamba I. Isaac
Abstract:
Rice is Sierra Leone’s staple food, and the nation imports over 120,000 metric tons annually due to a shortfall in cultivation. The insufficient cultivation of the crop in Sierra Leone stems from many problems and has led to an ever-widening gap between the supply of and demand for the crop within the country. Consequently, this has driven the government to spend heavily on importing grain that could otherwise have been cultivated domestically at lower cost. Hence, this research explores the response of rice supply with respect to its demand in Sierra Leone over the period 1980-2010. The Nerlovian adjustment model was applied to the Sierra Leone rice data set for 1980-2010. The estimated trend equations revealed that time had a significant effect on the output, productivity (yield), and area (acreage) of rice within the period 1980-2010, generally at the 1% level of significance. The results showed that almost all of the growth in output was attributable to increases in the area cultivated to the crop. The time trend variable included for government policy intervention showed an insignificant effect on all the variables considered in this research. Both the short-run and long-run price responses were inelastic, since all their values were less than one. Given these findings, immediate actions that will lead to productivity growth in rice cultivation are required. To achieve this, the responsible agencies should provide extension service schemes to farmers and motivate them to adopt modern rice varieties and technology in their rice cultivation ventures.Keywords: Nerlovian adjustment model, price elasticities, Sierra Leone, trend equations
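The Nerlovian partial-adjustment model underlying the abstract reduces, in its simplest form, to the regression A_t = a + b·P_{t-1} + c·A_{t-1} + e_t, where the adjustment speed is (1 - c) and the long-run response is b/(1 - c). The sketch below illustrates that arithmetic with made-up coefficients (a short-run elasticity of 0.2 and a lagged-output coefficient of 0.6), not with the paper's estimates.

```python
def nerlove_long_run(short_run_coef, lagged_dep_coef):
    """Long-run response implied by the Nerlovian partial-adjustment model.

    In A_t = a + b*P_{t-1} + c*A_{t-1} + e_t, the speed of adjustment is
    (1 - c) and the long-run coefficient is b / (1 - c).
    """
    if not 0.0 <= lagged_dep_coef < 1.0:
        raise ValueError("stable adjustment requires 0 <= c < 1")
    return short_run_coef / (1.0 - lagged_dep_coef)

# Hypothetical estimates: short-run elasticity 0.2, lagged-output coefficient 0.6.
long_run = nerlove_long_run(0.2, 0.6)
print(long_run)  # 0.5 -- still inelastic, consistent with the abstract's finding
```

The long-run elasticity always exceeds the short-run one when 0 < c < 1, yet both can remain below unity, which is exactly the inelastic pattern the study reports.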
Procedia PDF Downloads 2335649 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou
Abstract:
Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential, but not sufficient to establish a valid in-flight characterization. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), and radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated from in situ measurements (atmospheric parameters and the spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line of the form L = G × DN + B is fitted by a least-squares regression method, and the fitted coefficients, G and B, are the in-flight calibration coefficients. The high point (LH) and the low point (LL) of the dynamic range are then given by LH = G × DNH + B and LL = B, respectively, where DNH equals 2^n − 1 (n is the quantization bit number of the payload). Meanwhile, the sensor’s response linearity (δ) is described by the correlation coefficient of the regressed line. The results show that the calibration coefficients (G and B) are 0.0083 W·sr−1m−2µm−1 and −3.5 W·sr−1m−2µm−1; the low point of the dynamic range is −3.5 W·sr−1m−2µm−1 and the high point is 30.5 W·sr−1m−2µm−1; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor’s SNR, and the normalized SNR is about 59.6 when the mean radiance is 11.0 W·sr−1m−2µm−1; the radiometric resolution is subsequently calculated to be about 0.1845 W·sr−1m−2µm−1.
Moreover, to validate the results, a comparison of the measured radiance with radiative-transfer-code predictions over four portable artificial targets with reflectances of 20%, 30%, 40%, and 50%, respectively, is performed. The relative error of the calibration is within 6.6%.Keywords: calibration and validation site, SWIR camera, in-flight radiometric calibration, dynamic range, response linearity
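The calibration step described above (fitting L = G × DN + B and reading off the dynamic range and linearity) can be sketched with ordinary least squares. The DN/radiance pairs and the 12-bit quantization below are invented for illustration; they merely produce coefficients of the same order as those reported.

```python
import math

def fit_line(dn, radiance):
    """Least-squares fit of radiance = G*DN + B; returns (G, B, r),
    where r is the correlation coefficient used as the linearity measure."""
    n = len(dn)
    mx, my = sum(dn) / n, sum(radiance) / n
    sxx = sum((x - mx) ** 2 for x in dn)
    sxy = sum((x - mx) * (y - my) for x, y in zip(dn, radiance))
    syy = sum((y - my) ** 2 for y in radiance)
    g = sxy / sxx
    b = my - g * mx
    r = sxy / math.sqrt(sxx * syy)
    return g, b, r

# Hypothetical DN / at-sensor radiance pairs for three gray-scale targets;
# real radiances would come from MODTRAN simulation of in situ measurements.
dn = [500.0, 1400.0, 2600.0]
rad = [0.6, 8.2, 18.0]
G, B, r = fit_line(dn, rad)

n_bits = 12                                   # assumed quantization bit number
L_low, L_high = B, G * (2 ** n_bits - 1) + B  # dynamic range endpoints
print(round(G, 4), round(B, 2), round(r, 4))
```

Here LL is simply the intercept B (the radiance at DN = 0), and LH evaluates the fitted line at the saturation count 2^n − 1, matching the formulas in the abstract.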
Procedia PDF Downloads 2705648 Quadrature Mirror Filter Bank Design Using Population Based Stochastic Optimization
Authors: Ju-Hong Lee, Ding-Chen Chung
Abstract:
This paper deals with the optimal design of two-channel linear-phase (LP) quadrature mirror filter (QMF) banks using a metaheuristic optimization technique. Based on the theory of two-channel QMF banks built from two recursive digital all-pass filters (DAFs), the design problem is formulated so as to yield an objective function that is a weighted sum of the group-delay error of the designed QMF bank and the magnitude-response error of the designed low-pass analysis filter. Through frequency sampling and a weighted least-squares approach, the resulting optimization problem is solved using a particle swarm optimization algorithm. The resulting two-channel QMF banks possess an approximately LP response without magnitude distortion. Simulation results are presented for illustration and comparison.Keywords: quadrature mirror filter bank, digital all-pass filter, weighted least squares algorithm, particle swarm optimization
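Particle swarm optimization of the kind used here can be sketched generically. The snippet below minimizes a stand-in two-parameter objective (a smooth bowl) rather than the actual weighted sum of group-delay and magnitude-response errors, which would require a full DAF parameterization; the swarm size, inertia weight, and acceleration constants are conventional assumed values.

```python
import random

random.seed(0)

def objective(w):
    """Stand-in for the weighted design error of a QMF bank: a smooth bowl
    with its minimum at (1, -2). The real objective would sum weighted
    group-delay and magnitude-response errors over a frequency grid."""
    x, y = w
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

def pso(obj, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO: each particle tracks its personal best and
    is pulled toward both it and the swarm's global best."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [obj(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = obj(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(objective)
print([round(v, 3) for v in best], round(best_val, 6))
```

In the actual design problem, each particle would encode the DAF coefficients, and the objective would be evaluated via frequency sampling with weighted least-squares error terms.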
Procedia PDF Downloads 5215647 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach
Authors: James Ladzekpo
Abstract:
Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We implemented recursive feature elimination with cross-validation, using a support vector machine (SVM), for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and a Multilayer Perceptron Neural Network, and evaluated their performance. Findings: The Gradient Boosting classifier performed best, achieving a median recall of 92.17%, a median area under the receiver operating characteristic curve (AUC) of 68%, and median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting classifier and the Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction.
We recommend that future investigations incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.Keywords: diabetes, machine learning, prediction, biomarkers
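Recursive feature elimination, the selection strategy named in the methodology, can be illustrated in miniature. The toy below ranks features by absolute Pearson correlation with the label instead of SVM weights, and the three-feature "methylation-like" data set is fabricated; it only demonstrates the iterative drop-the-weakest-feature loop, not the study's pipeline.

```python
import math
import random

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient (0.0 for constant inputs)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    if sxx == 0 or syy == 0:
        return 0.0
    return sxy / math.sqrt(sxx * syy)

def rfe(X, y, n_keep):
    """Recursive feature elimination: repeatedly drop the feature with the
    weakest |correlation| with the label until n_keep features remain."""
    feats = list(range(len(X[0])))
    while len(feats) > n_keep:
        weakest = min(feats,
                      key=lambda f: abs(pearson([row[f] for row in X], y)))
        feats.remove(weakest)
    return sorted(feats)

# Fabricated data: feature 0 tracks the binary label, features 1-2 are noise.
y = [i % 2 for i in range(60)]
X = [[t + random.gauss(0, 0.3), random.random(), random.random()] for t in y]
print(rfe(X, y, 1))  # the informative feature survives
```

A production version would use cross-validated model performance (e.g. SVM weights under k-fold CV) as the elimination criterion rather than a univariate correlation.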
Procedia PDF Downloads 555646 Producing Graphical User Interface from Activity Diagrams
Authors: Ebitisam K. Elberkawi, Mohamed M. Elammari
Abstract:
A graphical user interface (GUI) is as essential to a program as any other characteristic or feature, because GUI components provide the fundamental interaction between the user and the program. Thus, we must give more attention to the GUI during the building and development of systems, and greater attention still to the user, who is the cornerstone of interaction with the GUI. This paper introduces an approach for designing GUIs from one of the models of business workflows that describe the workflow behavior of a system: the activity diagram (AD).Keywords: activity diagram, graphical user interface, GUI components, program
Procedia PDF Downloads 464