Search results for: static torque transmission capability
1290 Kinetic Study of C₃N₄/CuWO₄: Photocatalyst towards Solar Light Inactivation of Mixed Populated Bacteria
Authors: Rimzhim Gupta, Bhanupriya Boruah, Jayant M. Modak, Giridhar Madras
Abstract:
Microbial contamination is one of the major concerns in the field of water treatment. Advanced oxidation processes (AOPs) are a well-established method for removing contaminants from water. A Z-scheme composite g-C₃N₄/CuWO₄ was synthesized by the sol-gel method for the photocatalytic inactivation of a mixed population of Gram-positive bacteria (S. aureus) and Gram-negative bacteria (E. coli). Photoinactivation was observed for the different types of bacteria both together in the same medium and individually, in the absence of nutrients. The lattice structures and phase purities were determined by X-ray diffraction. For morphological and topographical features, scanning electron microscopy and transmission electron microscopy analyses were carried out. The band edges of the semiconductor (valence band and conduction band) were determined by ultraviolet photoelectron spectroscopy. The lifetime of the charge carriers and the band gap of the semiconductors were determined by time-resolved fluorescence spectroscopy and diffuse reflectance spectroscopy, respectively. The effect of the weight ratio of C₃N₄ to CuWO₄ was examined by performing photocatalytic experiments. To investigate the exact mechanism and the radicals chiefly responsible for photocatalysis, scavenger studies were performed. The rate constants and orders of the inactivation reactions were obtained from power-law kinetics. For E. coli and S. aureus, the reaction orders are 1.15 and 0.9, and the rate constants are 1.39 ± 0.03 (CFU/mL)⁻⁰.¹⁵ h⁻¹ and 47.95 ± 1.2 (CFU/mL)⁰.¹ h⁻¹, respectively.
Keywords: z-scheme, E. coli, S. aureus, sol-gel
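For context, the power-law kinetics referred to above correspond to dN/dt = -k·Nⁿ. A minimal sketch of how such a model behaves, using the reported orders and rate constants but an assumed initial bacterial load and exposure window:

```python
import numpy as np
from scipy.integrate import odeint

# Power-law inactivation kinetics: dN/dt = -k * N**n
# Orders and rate constants are the values reported in the abstract;
# the initial load N0 and the 3 h window are assumed for illustration.
def power_law(N, t, k, n):
    return -k * np.maximum(N, 0.0) ** n

t = np.linspace(0.0, 3.0, 200)              # hours (assumed)
N0 = 1.0e6                                  # CFU/mL (assumed)
ecoli = odeint(power_law, N0, t, args=(1.39, 1.15))[:, 0]
saureus = odeint(power_law, N0, t, args=(47.95, 0.9))[:, 0]
print(f"E. coli after 3 h:   {ecoli[-1]:.3e} CFU/mL")
print(f"S. aureus after 3 h: {saureus[-1]:.3e} CFU/mL")
```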
Procedia PDF Downloads 148
1289 Service Interactions Coordination Using a Declarative Approach: Focuses on Deontic Rule from Semantics of Business Vocabulary and Rules Models
Authors: Nurulhuda A. Manaf, Nor Najihah Zainal Abidin, Nur Amalina Jamaludin
Abstract:
Coordinating service interactions is a vital part of developing distributed applications, which are built up as networks of autonomous participants (e.g., software components, web services, online resources) and involve collaboration between a diverse set of participant services hosted by different providers. The complexity of coordinating service interactions reflects how important suitable techniques and approaches are for designing and coordinating the interaction between participant services, so that the overall goal of the collaboration is achieved. The objective of this research is to develop the capability of steering a complex service interaction towards a desired outcome. Therefore, an efficient technique for modelling, generating, and verifying the coordination of service interactions is developed. The developed model describes service interactions using the service choreography approach and follows a declarative approach, advocating an Object Management Group (OMG) standard, Semantics of Business Vocabulary and Rules (SBVR). This model, namely the SBVR model for service choreographies, focuses on declarative deontic rules expressing both obligation and prohibition, which are particularly useful for coordinating service interactions. The generated SBVR model is then formulated and transformed into an Alloy model, and the Alloy Analyzer is used to verify it. The transformation of SBVR into Alloy makes it possible to automatically generate the corresponding coordination of service interactions (service choreography), hence producing an immediate instance of execution that satisfies the constraints of the specification, and to verify whether a specific request can be realised in the generated choreography.
Keywords: service choreography, service coordination, behavioural modelling, complex interactions, declarative specification, verification, model transformation, semantics of business vocabulary and rules, SBVR
Procedia PDF Downloads 154
1288 Analysis and Design of Inductive Power Transfer Systems for Automotive Battery Charging Applications
Authors: Wahab Ali Shah, Junjia He
Abstract:
Transferring electrical power without any wiring has been a dream since the late 19th century. Early advances in this area were mainly concerned with microwave systems. However, the subject has recently become very attractive owing to practical systems. There are low-power applications, such as charging the batteries of contactless toothbrushes or implanted devices, and higher-power applications, such as charging the batteries of electric automobiles or buses. In the first group of applications, operating frequencies are in the microwave range, while the frequency is lower in high-power applications. In the latter, the concept is also called inductive power transfer. The aim of the paper is to give an overview of inductive power transfer for electric vehicles, with a special concentration on coil design and power converter simulation for static charging. Coil design is one of the most critical tasks and is essential for efficient and safe power transfer. Power converters are used on both sides of the system. The converter on the primary side is used to generate a high-frequency voltage to excite the primary coil. The purpose of the converter on the secondary side is to rectify the voltage transferred from the primary in order to charge the battery. In this paper, an inductive power transfer system is studied. Inductive power transfer is a promising technology with several possible applications. The operating principles of these systems are explained, and the components of the system are described. Finally, a single-phase 2 kW system was simulated and the results were presented. The work presented in this paper is just an introduction to the concept. A modified compensation network based on the traditional inductor-capacitor-inductor (LCL) topology is proposed to achieve a robust response to the large coupling variations that are common in dynamic wireless charging applications. In the future, this type of compensation should be studied. Also, a comparison of different compensation topologies should be made for the same power level.
Keywords: coil design, contactless charging, electrical automobiles, inductive power transfer, operating frequency
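As a small illustration of the compensation design discussed above, both sides of an inductive power transfer link are typically tuned to a common resonant frequency, f₀ = 1/(2π√(LC)). The coil and capacitor values below are assumed examples, not the parameters of the simulated 2 kW system:

```python
import math

# Series-resonance condition for an inductive power transfer link.
L_primary = 120e-6    # H, assumed primary coil inductance
L_secondary = 120e-6  # H, assumed secondary coil inductance
C_comp = 25e-9        # F, assumed compensation capacitance
k = 0.2               # assumed coupling coefficient between the coils

f0 = 1.0 / (2 * math.pi * math.sqrt(L_primary * C_comp))
M = k * math.sqrt(L_primary * L_secondary)   # mutual inductance
print(f"Resonant frequency: {f0 / 1e3:.1f} kHz")
print(f"Mutual inductance:  {M * 1e6:.1f} uH")
```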
Procedia PDF Downloads 249
1287 Comparison between the Performances of Different Boring Bars in the Internal Turning of Long Overhangs
Authors: Wallyson Thomas, Zsombor Fulop, Attila Szilagyi
Abstract:
Impact dampers are mainly used in the metal-mechanical industry in operations that generate excessive vibration in the machining system. Internal turning processes become unstable during the machining of deep holes, in which the tool holder is used with long overhangs (high length-to-diameter ratios). Devices coupled with active dampers are expensive and require the use of advanced electronics. On the other hand, passive impact dampers (PID – particle impact dampers) are cheaper alternatives that are easier to adapt to the machine's fixation system, since in this case a cavity filled with particles is simply added to the structure of the tool holder. The cavity dimensions and the diameter of the spheres are pre-determined. Thus, when passive dampers are employed during the machining process, the vibration is transferred from the tip of the tool to the structure of the boring bar, where it is absorbed by the fixation system. This work compares the behaviour of a conventional solid boring bar and a boring bar with a passive impact damper in turning, using the highest possible L/D (length-to-diameter) ratio of the tool and an Easy Fix fixation system (also called a split bushing holding system). It is also intended to optimize the impact absorption parameters, such as the filling percentage of the cavity and the diameter of the spheres. The test specimens were made of hardened material and machined on a Computer Numerical Control (CNC) lathe. The laboratory tests showed that when the cavity of the boring bar is totally filled with minimally spaced spheres of the largest diameter, the gain in absorption allowed the same surface roughness to be obtained with an L/D equal to 6 as with the solid boring bar at an L/D equal to 3.4. The use of the passive particle impact damper therefore resulted in increased static stiffness and reduced deflection of the tool.
Keywords: active damper, fixation system, hardened material, passive damper
Procedia PDF Downloads 220
1286 In vitro Comparison Study of Biologically Synthesized Copper-Disulfiram Nanoparticles with Its Free Corresponding Complex as Therapeutic Approach for Breast and Liver Cancer
Authors: Marwa M. Abu-Serie, Marwa M. Eltarahony
Abstract:
The search for reliable, effective, and safe nanoparticles (NPs) as a treatment for cancer is a pressing priority. In this study, Cu-NPs were fabricated by Streptomyces cyaneofuscatus through a simultaneous bioreduction strategy of copper nitrate salt. The as-prepared Cu-NPs were subjected to structural analysis: energy-dispersive X-ray spectroscopy, elemental mapping, X-ray diffraction, transmission electron microscopy, and ζ-potential measurement. These biologically synthesized Cu-NPs were mixed with disulfiram (DS), forming a Cu-DS nanocomplex with a size of ~135 nm. The prepared nanocomplex (nanoCu-DS) exhibited higher anticancer activity than the free DS-Cu complex, Cu-NPs, and DS alone. This was illustrated by the lowest IC50 of nanoCu-DS (< 4 µM) against human breast and liver cancer cell lines compared with DS-Cu, Cu-NPs, and DS (~8, 22.98-33.51, and 11.95-14.86 µM, respectively). Moreover, flow cytometric analysis confirmed a higher apoptosis percentage range in nanoCu-DS-treated MDA-MB 231, MCF-7, Huh-7, and HepG-2 cells (51.24-65.28%) than with the free Cu-DS complex (< 4.5%). Regarding the potency of inhibition of liver and breast cancer cell migration, no significant difference was recorded between the free complex and the nanocomplex. Furthermore, nanoCu-DS suppressed gene expression of β-catenin, Akt, and NF-κB and upregulated p53 expression (> 3, > 15, > 5 and ≥ 3 fold, respectively) more efficiently than the free complex (all ~1 fold) in MDA-MB 231 and Huh-7 cells. Our findings proved that the prepared nanocomplex has powerful anticancer activity relative to the free complex, thereby offering a promising cancer treatment.
Keywords: biologically prepared Cu-NPs, breast cancer cell lines, liver cancer cell lines, nanoCu-disulfiram
Procedia PDF Downloads 189
1285 Denoising Convolutional Neural Network Assisted Electrocardiogram Signal Watermarking for Secure Transmission in E-Healthcare Applications
Authors: Jyoti Rani, Ashima Anand, Shivendra Shivani
Abstract:
In recent years, physiological signals obtained in telemedicine have been stored independently from patient information. In addition, people have increasingly turned to mobile devices for information on health-related topics. Major authentication and security issues may arise from this storage, degrading the reliability of diagnostics. This study introduces a reversible watermarking approach that ensures security by utilizing the electrocardiogram (ECG) signal as a carrier for embedding patient information. In the proposed work, Pan-Tompkins++ is employed to convert the 1D ECG signal into a 2D signal. The frequency subbands of the signal are extracted using the redundant discrete wavelet transform (RDWT), and then one of the subbands is subjected to multiresolution singular value decomposition (MSVD) for masking. Finally, the encrypted watermark is embedded within the signal. The experimental results show that the watermarked signal obtained is indistinguishable from the original signal, ensuring the preservation of all diagnostic information. In addition, a denoising convolutional neural network (DnCNN) is used to denoise the retrieved watermark for improved accuracy. The proposed ECG signal-based watermarking method is supported by experimental results and evaluations of its effectiveness. The results of the robustness tests demonstrate that the watermark is susceptible to the most prevalent watermarking attacks.
Keywords: ECG, VMD, watermarking, Pan-Tompkins++, RDWT, DnCNN, MSVD, chaotic encryption, attacks
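A minimal sketch of the embedding idea outlined above (a redundant wavelet decomposition of the 2D ECG representation followed by singular-value modification of one subband). The wavelet, gain factor, and array sizes are assumptions for illustration, not the authors' settings:

```python
import numpy as np
import pywt

# Placeholder 2D representation of the ECG and a binary watermark (assumed sizes).
ecg_2d = np.random.rand(256, 256)
watermark = np.random.randint(0, 2, (256, 256)).astype(float)

# One-level redundant (stationary) wavelet decomposition with an assumed Haar wavelet.
(cA, (cH, cV, cD)), = pywt.swt2(ecg_2d, 'haar', level=1)

# Embed the watermark in the singular values of the approximation subband.
U, S, Vt = np.linalg.svd(cA, full_matrices=False)
_, Sw, _ = np.linalg.svd(watermark, full_matrices=False)
alpha = 0.01                                  # assumed embedding strength
cA_marked = U @ np.diag(S + alpha * Sw) @ Vt

watermarked = pywt.iswt2([(cA_marked, (cH, cV, cD))], 'haar')
print("Watermarked 2D signal shape:", watermarked.shape)
```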
Procedia PDF Downloads 101
1284 Molecular Epidemiology of Rotavirus in Post-Vaccination Era in Pediatric Patients with Acute Gastroenteritis in Thailand
Authors: Nutthawadee Jampanil, Kattareeya Kumthip, Niwat Maneekarn, Pattara Khamrin
Abstract:
Rotavirus A is one of the leading causes of acute gastroenteritis in children younger than five years of age, especially in low-income countries in Africa and South Asia. Two live-attenuated oral rotavirus vaccines, Rotarix and RotaTeq, have been introduced into routine immunization programs in many countries and have proven highly effective in reducing the burden of rotavirus-associated morbidity and mortality. In Thailand, the Rotarix and RotaTeq vaccines have been included in the national childhood immunization program since 2020. The objectives of this research are to conduct a molecular epidemiological study and to characterize the rotavirus genotypes circulating in pediatric patients with acute diarrhea in Chiang Mai, Thailand, from 2020 to 2022, after the implementation of rotavirus vaccines. Out of 858 stool specimens, 26 (3.0%) were positive for rotavirus A. G3P[8] (23.0%) was detected as the predominant genotype, followed by G1P[8] (19.2%), G8P[8] (19.2%), G9P[8] (15.3%), G2P[4] (7.7%), G1P[6] (3.9%), G9P[4] (3.9%), and G8P[X] (3.9%). In addition, the uncommon rotavirus strain G3P[23] (3.9%) was also detected in this study, and this G3P[23] strain displayed a genetic background similar to that of porcine rotavirus. In conclusion, there was a dramatic change in the prevalence of rotavirus A infection and in the diversity of rotavirus A genotypes in pediatric patients in Chiang Mai, Northern Thailand, in the rotavirus post-vaccination period. The findings obtained from this research contribute to a better understanding of rotavirus epidemiology after rotavirus vaccine introduction. Furthermore, the identification of strains with unusual G and P genotype combinations provides significant evidence for potential interspecies transmission between human and animal rotaviruses.
Keywords: rotavirus, infectious disease, gastroenteritis, Thailand
Procedia PDF Downloads 68
1283 Raman Spectroscopic of Cardioprotective Mechanism During the Metabolic Inhibition of Heart Cells
Authors: A. Almohammedi, A. J. Hudson, N. M. Storey
Abstract:
Following ischaemia/reperfusion injury, as in a myocardial infarction, cardiac myocytes undergo oxidative stress, which leads to several potential outcomes, including necrotic or apoptotic cell death, dysregulated calcium homeostasis, or disruption of the electron transport chain. Several studies have shown that nitric oxide donors protect cardiomyocytes against ischaemia and reperfusion. However, until now, the mechanism of the cardioprotective effect of a nitric oxide donor in isolated ventricular cardiomyocytes has not been fully understood and has not been investigated before using Raman spectroscopy. For these reasons, the aim of this study was to develop a novel technique, pre-resonance Raman spectroscopy, to investigate the mechanism of the cardioprotective effect of a nitric oxide donor in isolated ventricular cardiomyocytes exposed to metabolic inhibition and re-energisation. The results demonstrated for the first time that the Raman microspectroscopy technique has the capability to monitor the metabolic inhibition of cardiomyocytes and the effectiveness of cardioprotection by a nitric oxide donor applied prior to metabolic inhibition. Metabolic inhibition and re-energisation were used in this study to mimic the low and high oxygen levels experienced by cells during ischaemic and reperfusion treatments. The laser wavelength of 488 nm used in this study was found to provide the most sensitive means of observing the cellular mechanisms of myoglobin during nitric oxide donor preconditioning, metabolic inhibition and re-energisation, and did not cause any damage to the cells. The data also highlight the considerable differences between the cellular responses to metabolic inhibition and to ischaemia. Moreover, the data show a relationship between myoglobin release and chemical ischaemia, in that myoglobin was released from a cell only if the cell did not recover contractility.
Keywords: ex vivo biospectroscopy, Raman spectroscopy, biophotonics, cardiomyocytes, ischaemia/reperfusion injury, cardioprotection, nitric oxide donor
Procedia PDF Downloads 352
1282 New Analytical Current-Voltage Model for GaN-based Resonant Tunneling Diodes
Authors: Zhuang Guo
Abstract:
In the field of GaN-based resonant tunneling diode (RTD) simulations, the traditional Tsu-Esaki formalism fails to predict the values of the peak currents and peak voltages in the simulated current-voltage (J-V) characteristics. The main reason is that, due to the strong internal polarization fields, a two-dimensional electron gas (2DEG) accumulates at the emitter, resulting in 2D-2D resonant tunneling currents, which become the dominant part of the total J-V characteristics. Being based on the 3D-2D resonant tunneling mechanism, the traditional Tsu-Esaki formalism cannot predict the J-V characteristics correctly. To overcome this shortcoming, we develop a new analytical model for the 2D-2D resonant tunneling currents generated in GaN-based RTDs. Compared with the Tsu-Esaki formalism, the new model makes the following modifications. Firstly, considering the Heisenberg uncertainty, the new model corrects the expression for the density of states around the 2DEG eigenenergy levels at the emitter, so that it can predict the half width at half maximum (HWHM) of the resonant tunneling currents. Secondly, taking into account the effect of bias on the wave vectors at the collector, the new model modifies the expression for the transmission coefficients, which helps bring the values of the peak currents closer to the experimental data compared with the Tsu-Esaki formalism. The new analytical model successfully predicts the J-V characteristics of GaN-based RTDs, and it also reveals in more detail the resonant tunneling mechanisms at work in GaN-based RTDs, which helps in the design and fabrication of high-performance GaN RTDs.
Keywords: GaN-based resonant tunneling diodes, Tsu-Esaki formalism, 2D-2D resonant tunneling, Heisenberg uncertainty
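For context, the conventional Tsu-Esaki expression that the new model modifies has the standard form (quoted here for reference, not taken from the paper):

```latex
J = \frac{e\,m^{*}k_{B}T}{2\pi^{2}\hbar^{3}}
    \int_{0}^{\infty} T(E_{z})\,
    \ln\!\left[\frac{1+\exp\!\left((E_{F}-E_{z})/k_{B}T\right)}
                    {1+\exp\!\left((E_{F}-E_{z}-eV)/k_{B}T\right)}\right]\mathrm{d}E_{z}
```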
Procedia PDF Downloads 76
1281 A Review on the Use of Herbal Alternatives to Antibiotics in Poultry Diets
Authors: Sasan Chalaki, Seyed Ali Mirgholange, Touba Nadri, Saman Chalaki
Abstract:
In the current world, proper poultry nutrition has garnered special attention as one of the fundamental factors for enhancing their health and performance. Concerns related to the excessive use of antibiotics in the poultry industry and their role in antibiotic resistance have transformed this issue into a global challenge in public health and the environment. On the other hand, poultry farming plays a vital role as a primary source of meat and eggs in human nutrition, and improving their health and performance is crucial. One effective approach to enhance poultry nutrition is the utilization of the antibiotic properties of plant-based ingredients. The use of plant-based alternatives as natural antibiotics in poultry nutrition not only aids in improving poultry health and performance but also plays a significant role in reducing the consumption of synthetic antibiotics and preventing antibiotic resistance-related issues. Plants contain various antibacterial compounds, such as flavonoids, tannins, and essential oils. These compounds are recognized as active agents in combating bacteria. Plant-based antibiotics are compounds extracted from plants with antibacterial properties. They are acknowledged as effective substitutes for chemical antibiotics in poultry diets. The advantages of plant-based antibiotics include reducing the risk of resistance to chemical antibiotics, increasing poultry growth performance, and lowering the risk of disease transmission.
Keywords: poultry, antibiotics, essential oils, plant-based
Procedia PDF Downloads 77
1280 Proactive Change or Adaptive Response: A Study on the Impact of Digital Transformation Strategy Modes on Enterprise Profitability From a Configuration Perspective
Authors: Jing-Ma
Abstract:
Digital transformation (DT) is an important way for manufacturing enterprises to shape new competitive advantages, and how to choose an effective DT strategy is crucial for enterprise growth and sustainable development. Rooted in strategic change theory, this paper incorporates the dimensions of managers' digital cognition, organizational conditions, and the external environment into a single strategic analysis framework and integrates the dynamic QCA method and the PSM method to study the antecedent configurations of the DT strategy modes of manufacturing enterprises and their impact on corporate profitability, based on data from listed manufacturing companies in China from 2015 to 2019. We find that the synergistic linkage of elements from different dimensions can form six equivalent paths to high-level DT, which can be summarized as a resource-capability-dominated proactive change mode and adaptive response modes such as industry-guided resource replenishment. Capacity building under complex environments, market-industry synergy-driven change, forced adaptation under peer pressure, and managers' digital cognition play a non-essential but crucial role in this process. Except for individual differences in the market-industry collaborative driving mode, the other modes are relatively stable across individuals and over time. However, it is worth noting that not all paths that result in high levels of DT contribute to enterprise profitability; only high-level DT that results from matching the optimization of internal conditions with the external environment, such as industry technology and macro policies, has a significant positive impact on corporate profitability.
Keywords: digital transformation, strategy mode, enterprise profitability, dynamic QCA, PSM approach
Procedia PDF Downloads 24
1279 Investigation of the Effect of Impulse Voltage to Flashover by Using Water Jet
Authors: Harun Gülan, Muhsin Tunay Gencoglu, Mehmet Cebeci
Abstract:
The main function of the insulators used in high voltage (HV) transmission lines is to insulate the energized conductor from the pole and hence from the ground. However, when the insulators fail to perform this insulation function due to various effects, failures occur. The deterioration of the insulation results either from breakdown or from surface flashover. Surface flashover is caused by the layer of pollution that forms a conductive path on the surface of the insulator, such as salt, carbonaceous compounds, rain, moisture, fog, dew, industrial pollution and desert dust. Surface flashover is the source of the majority of failures and interruptions in HV lines. This threatens the continuity of supply and causes significant economic losses. Pollution flashover of HV insulators is still a serious problem that has not been fully resolved. In this study, a water jet test system has been established in order to investigate the behavior of insulators under polluted conditions and to determine their flashover performance. The flashover behavior of the insulators is examined by applying impulse voltages in the test system. This study aims to investigate insulator behaviour under high impulse voltages. For this purpose, a water jet test system was installed, and experimental results were obtained on a real system and analyzed. By using the water jet test system instead of an actual insulator, the damage to the insulator that would result from flashover under impulse voltage was prevented. The results from the test system played an important role in determining the insulator behavior and provided predictability.
Keywords: insulator, pollution flashover, high impulse voltage, water jet model
Procedia PDF Downloads 110
1278 Alternative Water Resources and Brominated Byproducts
Authors: Nora Kuiper, Candace Rowell, Hugues Preud'Homme, Basem Shomar
Abstract:
As the global dependence on seawater desalination as a primary drinking water resource increases, a unique class of secondary pollutants is emerging. The presence of bromide salts in seawater may result in increased levels of bromine and brominated byproducts in drinking water. The State of Qatar offers a unique setting in which to study these pollutants and their impacts on consumers, as the country is 100% dependent on seawater desalination to supply municipal tap water and locally produced bottled water. Tap water (n=115) and bottled water (n=62) samples were collected throughout the State of Qatar and analyzed for a suite of inorganic and organic compounds, including 54 volatile organic compounds (VOCs), with an emphasis on brominated byproducts. All VOC identification and quantification was completed using a Bruker Scion GC-MS/MS with static headspace technologies. A risk survey tool was used to collect information regarding local consumption habits, health outcomes and perception of water sources for adults and children. This study is the first of its kind in the country. Dibromomethane, bromoform, and bromobenzene were detected in 61%, 88% and 2% of the drinking water samples analyzed, respectively. The levels of dibromomethane ranged from approximately 100-500 ng/L and the concentrations of bromoform ranged from approximately 5-50 µg/L. Additionally, bromobenzene concentrations were 60 ng/L. The presence of brominated compounds in drinking water is a public health concern specific to populations using seawater as a feed water source and may pose unique risks that have not been previously studied. Risk assessments are ongoing to quantify the risks associated with prolonged consumption of disinfection byproducts, specifically the risks of brominated trihalomethanes, as the levels of bromoform found in Qatar's drinking water reach more than 60% of the US EPA's Maximum Contaminant Level for total THMs.
Keywords: brominated byproducts, desalination, trihalomethanes, risk assessment
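The comparison with the regulatory limit can be made explicit. A small worked check using the upper end of the reported bromoform range and the US EPA maximum contaminant level of 80 µg/L for total trihalomethanes:

```python
# Reported upper bromoform concentration vs. the US EPA total-THM MCL (80 ug/L).
bromoform_max_ugL = 50.0   # upper end of the range reported above, ug/L
tthm_mcl_ugL = 80.0        # US EPA maximum contaminant level for total THMs, ug/L

fraction_of_mcl = bromoform_max_ugL / tthm_mcl_ugL
print(f"Bromoform alone reaches {fraction_of_mcl:.0%} of the total-THM MCL")   # ~62%
```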
Procedia PDF Downloads 428
1277 Awareness regarding Radiation Protection among the Technicians Practicing in Bharatpur, Chitwan, Nepal
Authors: Jayanti Gyawali, Deepak Adhikari, Mukesh Mallik, Sanjay Sah
Abstract:
Radiation is defined as the emission or transmission of energy in the form of waves or particles through space or a material medium. The major imaging tools used in diagnostic radiology are based on the use of ionizing radiation. A cross-sectional study was carried out during July-August 2015 among technicians in 15 different hospitals of Bharatpur, Chitwan, Nepal to assess awareness regarding radiation protection and their current practice. The researcher was directly engaged in data collection using a self-administered, pre-tested, semi-structured questionnaire. The findings of the study are presented as socio-demographic characteristics of respondents, current practice of respondents, and knowledge regarding radiation protection. The results of this study demonstrated that, despite the importance of radiation and its consequent hazards, the level of knowledge among technicians is only 60.23% and their current practice is 76.84%. The difference in the mean scores for knowledge and practice might have resulted from the technicians' regular work and lack of updates. The study also revealed that there is no significant (p>0.05) difference in the knowledge level of technicians practicing in different hospitals. However, the mean difference in practice scores between hospitals is significant (p<0.05); i.e., the cancer hospital, with large volumes of regular radiological cases and radiation therapies for cancer treatment, has better practice in comparison to other hospitals. The deficiency in technicians' knowledge might alter the expected benefits, compared to the risks involved, and can cause erroneous medical diagnosis and radiation hazards. Therefore, this study emphasizes the need for all technicians to update themselves with appropriate knowledge and current practice regarding ionizing and non-ionizing radiation.
Keywords: technicians, knowledge, Nepal, radiation
Procedia PDF Downloads 330
1276 Pneumoperitoneum Creation Assisted with Optical Coherence Tomography and Automatic Identification
Authors: Eric Yi-Hsiu Huang, Meng-Chun Kao, Wen-Chuan Kuo
Abstract:
For every laparoscopic surgery, safe pneumoperitoneum creation (gaining access to the peritoneal cavity) is the first and essential step. However, closed pneumoperitoneum is usually obtained by blind insertion of a Veress needle into the peritoneal cavity, which may carry potential risks such as bowel and vascular injury. Until now, there has been no definite measure to visually confirm the position of the needle tip inside the peritoneal cavity. Therefore, this study established an image-guided Veress needle method by combining a fiber probe with optical coherence tomography (OCT). An algorithm was also proposed for determining the exact location of the needle tip through the acquisition of OCT images. Our method not only generates a series of "live" two-dimensional (2D) images during the needle puncture toward the peritoneal cavity but can also eliminate operator variation in image judgment, thus improving peritoneal access safety. This study was approved by the Ethics Committee of Taipei Veterans General Hospital (Taipei VGH IACUC 2020-144). A total of 2400 in vivo OCT images, independent of each other, were acquired from experiments of forty peritoneal punctures on two piglets. Characteristic OCT image patterns could be observed during the puncturing process. The ROC curve demonstrates the discrimination capability of these quantitative image features, showing that the accuracy of the classifier for determining inside vs. outside of the peritoneum was 98% (AUC=0.98). In summary, the present study demonstrates the ability of the combination of our proposed automatic identification method and OCT imaging to automatically and objectively identify the location of the needle tip. OCT images translate the blind closed technique of peritoneal access into a visualized procedure, thus improving peritoneal access safety.
Keywords: pneumoperitoneum, optical coherence tomography, automatic identification, Veress needle
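A minimal sketch of how the reported discrimination performance (accuracy 98%, AUC = 0.98) can be computed from labelled image features; the labels and scores below are random placeholders, not the study data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Placeholder ground truth (1 = needle tip inside the peritoneal cavity) and classifier scores.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)
y_score = np.clip(0.8 * y_true + rng.normal(0.1, 0.2, 500), 0.0, 1.0)

auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"AUC = {auc:.2f}  ({len(thresholds)} operating points on the ROC curve)")
```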
Procedia PDF Downloads 134
1275 On the Effectiveness of Educational Technology on the Promotion of Exceptional Children or Children with Special Needs
Authors: Nasrin Badrkhani
Abstract:
The increasing use of educational technologies has created a tremendous transformation in all fields and, most importantly, in the field of education and learning. In recent decades, traditional learning approaches have undergone fundamental changes with the emergence of new learning technologies. Research shows that suitable educational tools play an effective role in the transmission, comprehension, and impact of educational concepts. These tools provide a tangible basis for thinking and constructing concepts, resulting in an increased interest in learning. They provide real and true experiences to students and convey educational meanings and concepts more quickly and clearly. It can be said that educational technology, as an active and modern teaching method, with capabilities such as engaging multiple senses in the educational process and involving the learner, makes the learning environment more flexible. It effectively impacts the skills of children with special needs by addressing their specific needs. Teachers are no longer the sole source of information, and students are not mere recipients of information; they are considered the main actors in the field of education and learning. Since education is one of the basic rights of every human being and children with special needs face unique challenges and obstacles in education, these challenges can negatively affect their abilities and learning. One way to combat these challenges is to use educational technologies for more diverse, effective learning. Also, the use of educational technology for students with special needs has increasingly proven effective in boosting their self-confidence and helping them overcome learning challenges, enhancing their learning outcomes.
Keywords: communication technology, students with special needs, self-confidence, raising the expectations and progress
Procedia PDF Downloads 13
1274 Integration of Thermal Energy Storage and Electric Heating with Combined Heat and Power Plants
Authors: Erich Ryan, Benjamin McDaniel, Dragoljub Kosanovic
Abstract:
Combined heat and power (CHP) plants are an efficient technology for meeting the heating and electric needs of large campus energy systems, but have come under greater scrutiny as the world pushes for emissions reductions and lower consumption of fossil fuels. The electrification of heating and cooling systems offers a great deal of potential for carbon savings, but these systems can be costly endeavors due to increased electric consumption and peak demand. Thermal energy storage (TES) has been shown to be an effective means of improving the viability of electrified systems by shifting heating and cooling load to off-peak hours and reducing peak demand charges. In this study, we analyze the integration of an electrified heating and cooling system with thermal energy storage into a campus CHP plant, to investigate the potential of leveraging existing infrastructure and technologies with the climate goals of the 21st century. A TRNSYS model was built to simulate a ground source heat pump (GSHP) system with TES using measured campus heating and cooling loads. The GSHP with TES system is modeled to follow the parameters of industry standards and sized to provide an optimal balance of capital and operating costs. Using known CHP production information, costs and emissions were investigated for a unique large energy user rate structure that operates a CHP plant. The results highlight the cost and emissions benefits of a targeted integration of heat pump technology within the framework of existing CHP systems, along with the performance impacts and value of TES capability within the combined system.
Keywords: thermal energy storage, combined heat and power, heat pumps, electrification
Procedia PDF Downloads 89
1273 Analyzing the Impact of Migration on HIV and AIDS Incidence Cases in Malaysia
Authors: Ofosuhene O. Apenteng, Noor Azina Ismail
Abstract:
The human immunodeficiency virus (HIV) that causes acquired immune deficiency syndrome (AIDS) remains a global cause of morbidity and mortality. It has caused panic since its emergence. The relationship between migration and HIV/AIDS has become complex. In the absence of prospectively designed studies, dynamic mathematical models that take migration movements into account can provide very useful information. We have explored the utility of mathematical models in understanding the transmission dynamics of HIV and AIDS and in assessing the magnitude of the impact that migration has on the disease. The model was calibrated to HIV and AIDS incidence data from the Malaysian Ministry of Health for the period 1986 to 2011, using Bayesian analysis combined with a Markov chain Monte Carlo (MCMC) approach to estimate the model parameters. From the estimated parameters, the estimated basic reproduction number was 22.5812. The rate at which susceptible individuals move to the HIV compartment had the highest sensitivity value, more significant than the remaining parameters; thus, the disease becomes unstable. This is a big concern and not a good indicator from the public health point of view, since the aim is to stabilize the epidemic at the disease-free equilibrium. These results suggest that the government, as a policy maker, should make further efforts to curb illegal activities performed by migrants. It is shown that our models reflect considerably well the dynamic behavior of the HIV/AIDS epidemic in Malaysia and could eventually be used strategically for other countries.
Keywords: epidemic model, reproduction number, HIV, MCMC, parameter estimation
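The kind of compartmental model referred to above, extended with a migration inflow into the susceptible class, can be sketched as follows. The compartments, parameter values, and migration term are illustrative assumptions, not the calibrated Malaysian model:

```python
import numpy as np
from scipy.integrate import odeint

# Simple susceptible-HIV-AIDS model with a constant migration inflow into the susceptibles.
def shiv_model(y, t, birth, migration, beta, progression, mu, death_aids):
    S, H, A = y
    N = S + H + A
    dS = birth + migration - beta * S * H / N - mu * S
    dH = beta * S * H / N - (progression + mu) * H
    dA = progression * H - (mu + death_aids) * A
    return [dS, dH, dA]

params = (1000.0, 200.0, 0.4, 0.1, 0.014, 0.3)   # assumed rates per year
t = np.linspace(0.0, 25.0, 300)
sol = odeint(shiv_model, [1.0e5, 100.0, 0.0], t, args=params)
print("HIV cases after 25 years:", round(sol[-1, 1]))
```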
Procedia PDF Downloads 366
1272 Crystallization in the TeO2 - Ta2O5 - Bi2O3 System: From Glass to Anti-Glass to Transparent Ceramic
Authors: Hasnaa Benchorfi
Abstract:
Tellurite glasses exhibit interesting properties, notably a low melting point (700-900°C), a high refractive index (≈2), high transparency in the infrared region (up to 5-6 μm), interesting linear and non-linear optical properties, and high rare-earth ion solubility. These properties make tellurite glasses of great interest for various optical applications. Transparent ceramics present advantages compared to glasses, such as improved mechanical, thermal and optical properties, but the elaboration process of these ceramics requires complex sintering conditions. The full crystallization of glass into a transparent ceramic is an alternative that circumvents the technical challenges related to ceramics obtained by conventional processing. In this work, a crystallization study of a specific glass composition in the TeO2-Ta2O5-Bi2O3 system shows structural transitions upon heating from the glass, to the stabilization of an unreported anti-glass phase, to a transparent ceramic. An anti-glass is a material with cationic long-range order and a disordered anion sublattice. Thus, the X-ray diffraction patterns show sharp peaks, while the Raman bands are broad and similar to those of the parent glass. The structure and microstructure of the anti-glass and the corresponding ceramic were characterized by powder X-ray diffraction, electron backscatter diffraction, transmission electron microscopy and Raman spectroscopy. The optical properties of the Er3+-doped samples are also discussed.
Keywords: glass, congruent crystallization, anti-glass, glass-ceramic, optics
Procedia PDF Downloads 79
1271 Investigation of Cost Effective Double Layered Slab for γ-Ray Shielding
Authors: Kulwinder Singh Mann, Manmohan Singh Heer, Asha Rani
Abstract:
The safe storage of radioactive materials has become an important issue. Nuclear engineering necessitates the safe handling of radioactive materials emitting high-energy gamma-rays. The hazards involved in handling radioactive materials call for suitably shielded enclosures. With the growing use of nuclear energy to meet the increasing demand for power, there is a need to investigate the shielding behavior of cost-effective shielded enclosures (CESE) made from clay bricks (CB) and fire bricks (FB). In comparison to lead bricks (the conventional shielding), CESE are the preferred choice in nuclear waste management. The objective of the present investigation is to evaluate the double-layered transmission exposure buildup factors (DLEBF) of gamma-rays for CESE in the energy range 0.5-3 MeV. For the necessary computations of the shielding parameters, two computer programs (GRIC-toolkit and BUF-toolkit) have been designed, using the existing extensive data on gamma-ray interaction parameters for all elements of the periodic table. It has been found that two-layered slabs provide more effective shielding for gamma-rays in the orientation CB followed by FB than in the reverse order. It has been concluded that the arrangement FB followed by CB reduces the leakage of scattered gamma-rays from the radioactive source.
Keywords: buildup factor, clay bricks, fire bricks, nuclear waste management, radiation protective double layered slabs
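The quantity being evaluated relates to the familiar attenuation relation I = B·I₀·exp(-(μ₁x₁ + μ₂x₂)) for a double layer. A minimal sketch with assumed attenuation coefficients and buildup factor (not the values computed by the toolkits described):

```python
import math

# Transmitted exposure through a double-layered slab at a single photon energy.
I0 = 1.0                        # incident exposure (relative units)
mu_clay, x_clay = 0.15, 10.0    # 1/cm and cm, assumed for the clay-brick layer
mu_fire, x_fire = 0.20, 10.0    # 1/cm and cm, assumed for the fire-brick layer
B_double = 3.5                  # assumed double-layered exposure buildup factor

I = B_double * I0 * math.exp(-(mu_clay * x_clay + mu_fire * x_fire))
print(f"Relative transmitted exposure: {I:.3e}")
```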
Procedia PDF Downloads 406
1270 Design and Fabrication of Stiffness Reduced Metallic Locking Compression Plates through Topology Optimization and Additive Manufacturing
Authors: Abdulsalam A. Al-Tamimi, Chris Peach, Paulo Rui Fernandes, Paulo J. Bartolo
Abstract:
Bone fixation implants currently used to treat traumatic bone fractures and to promote fracture healing are built with biocompatible metallic materials such as stainless steel, cobalt-chromium and titanium and its alloys (e.g., CoCrMo and Ti6Al4V). The noticeable stiffness mismatch between current metallic implants and the host bone is associated with negative outcomes such as stress shielding, which causes bone loss and implant loosening, leading to deficient fracture treatment. This paper, part of a major research program to design the next generation of bone fixation implants, describes the combined use of three-dimensional (3D) topology optimization (TO) and additive manufacturing powder bed technology (electron beam melting) to redesign and fabricate plates based on the current standard one (i.e., the locking compression plate). Topology optimization is applied with an objective function that maximizes stiffness, constrained by volume reductions (i.e., 25-75%), in order to obtain optimized implant designs with a reduced stress shielding phenomenon under different boundary conditions (i.e., tension, bending, torsion and combined loads). The stiffness of the original and optimised plates is assessed through a finite-element study. The TO results showed an actual reduction in stiffness for most of the plates due to the critical values of volume reduction. Additionally, the optimized plates fabricated using powder bed techniques proved that the integration of TO and additive manufacturing offers the capability of producing stiffness-reduced plates with acceptable tolerances.
Keywords: additive manufacturing, locking compression plate, finite element, topology optimization
Procedia PDF Downloads 197
1269 A Study in the Formation of a Term: Sahaba
Authors: Abdul Rahman Chamseddine
Abstract:
The Companions of the Prophet Muhammad, the Sahaba, are regarded as the first link between him and later believers who did not know him or learn from him directly. This makes the Sahaba a link in the chain between God and the ummah (community). Apart from their role in spreading the Prophet’s teachings, they came to be regarded as role models, representing the Islamic ideal of life as prescribed by the Prophet himself. According to Hadith, the Prophet had promised some Sahaba unqualified admission to paradise. It is commonly agreed that the Sahaba have the following attributes in common: God is well pleased with them; they will surely go to paradise; they are perfectly trustworthy; and they are the authorities from whom Muslims can learn all matters related to their religion. No other generation of Muslims has received the attention received by the Companions of the Prophet. In spite of the importance of the Sahaba in Islam, we still know comparatively little about them. There are at least two reasons for this. First, there is the overall scarcity of information surviving from the early period. At the death of the Prophet, it is said, there were more than 100,000 Companions. As we shall see, this is a complex issue, involving the definition of the term Sahaba. However, only a few Companions of the Prophet are known to us. Ibn Hajar al-‘Asqalani, who wrote in the fifteenth century A.D., was only able to collect facts about 11,000 of them (including those whose status as Sahaba was disputed). Ibn Sa‘d, Ibn ‘Abd al-Barr and Ibn al-Athir, all of whom lived earlier than Ibn Hajar, included in their respective works fewer lives of Sahaba than he did. If we consider Ibn Hajar’s Isaba as the most complete biographical account of the Sahaba that remains available, we have information, presumably, on approximately one tenth of them. The remaining nine tenths are apparently lost from the historical record. Second, discussion of the Sahaba tends to focus on those considered the most important among them, such as ‘Uthman, ‘Ali and Mu‘awiya, while others, who together number in the thousands, are less well-known. This paper will try to study the origins of the term Sahaba, which became exclusive to the Companions of the Prophet rather than a synonym for the word companions in general.
Keywords: companions, Hadith, Islamic history, Muhammad, Sahaba, transmission
Procedia PDF Downloads 416
1268 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling
Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed
Abstract:
The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm on its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) a single-hidden-layer and (2) a double-hidden-layer feedforward back-propagation network. The results revealed that, in general, the gradient descent with momentum (GDM) optimisation algorithm, with its adaptive learning capability, used a relatively shorter time in both the training and validation phases than the Levenberg-Marquardt (LM) and Bayesian regularisation (Br) algorithms, although learning may not be consummated; this held in all the instances considered, including the prediction of extreme flow conditions 1 day and 5 days ahead. In specific statistical terms, on average, the model performance efficiencies using the coefficient of efficiency (CE) statistic were Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96%, respectively, for the training and validation phases. However, on the basis of relative-error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, it is imperative to state that the adoption of ANNs for real-time forecasting should employ training algorithms that do not have the computational overhead of LM, which requires computation of the Hessian matrix, protracted time, and sensitivity to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure and quality of the forecast as well as mitigation of network overfitting. On the whole, it is recommended that evaluation should consider the implications of (i) data quality and quantity and (ii) transfer functions on the overall network forecast performance.
Keywords: streamflow, neural network, optimisation, algorithm
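The coefficient of efficiency (CE) quoted above is the Nash-Sutcliffe statistic, CE = 1 - Σ(obs - sim)²/Σ(obs - mean(obs))². A short sketch of how it and a relative-error measure can be computed for a validation series (the arrays are placeholders):

```python
import numpy as np

def coefficient_of_efficiency(obs, sim):
    """Nash-Sutcliffe CE: 1 minus the ratio of residual to total variance of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mean_absolute_error(obs, sim):
    return float(np.mean(np.abs(np.asarray(obs, float) - np.asarray(sim, float))))

observed = np.array([12.1, 15.3, 9.8, 22.4, 18.0, 11.5])    # placeholder streamflow series
simulated = np.array([11.7, 14.9, 10.4, 21.1, 18.6, 12.0])  # placeholder ANN output
print(f"CE  = {coefficient_of_efficiency(observed, simulated):.3f}")
print(f"MAE = {mean_absolute_error(observed, simulated):.3f}")
```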
Procedia PDF Downloads 152
1267 Size Effects on Structural Performance of Concrete Gravity Dams
Authors: Mehmet Akköse
Abstract:
Concern about the seismic safety of concrete dams has been growing around the world, partly because the population at risk in locations downstream of major dams continues to expand and also because it is increasingly evident that the seismic design concepts in use at the time most existing dams were built were inadequate. Most of the investigations in the past have been conducted on large dams, typically above 100 m high. A large number of concrete dams in our country and in other parts of the world are less than 50 m high. Most of these dams were designed using pseudo-static methods, ignoring the dynamic characteristics of the structure as well as the characteristics of the ground motion. Therefore, it is important to carry out investigations on the seismic behavior of this category of dam in order to assess and evaluate the safety of existing dams and to improve the knowledge for the different high dams to be constructed in the future. In this study, size effects on the structural performance of concrete gravity dams subjected to near- and far-fault ground motions are investigated, including dam-water-foundation interaction. For this purpose, a benchmark problem proposed by ICOLD (International Commission on Large Dams) is chosen as a numerical application. The structural performance of the dam with five different heights is evaluated according to the damage criteria of USACE (U.S. Army Corps of Engineers), and whether non-linear analysis of the dams is required or not is decided according to this structural performance. The linear elastic dynamic analyses of the dams for near- and far-fault ground motions are performed using the step-by-step integration technique. The integration time step is 0.0025 sec. The Rayleigh damping constants are calculated assuming a 5% damping ratio. The program NONSAP, modified for fluid-structure systems with the Lagrangian fluid finite element, is employed in the response calculations.
Keywords: concrete gravity dams, Lagrangian approach, near and far-fault ground motion, USACE damage criteria
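The Rayleigh damping constants mentioned above follow from the chosen damping ratio and two control frequencies through α = 2ζω₁ω₂/(ω₁+ω₂) and β = 2ζ/(ω₁+ω₂). A small sketch using the study's 5% damping ratio but assumed control frequencies (not those of the benchmark dam):

```python
import math

# Rayleigh damping [C] = alpha*[M] + beta*[K], fitted to a damping ratio zeta
# at two control circular frequencies w1 and w2.
zeta = 0.05              # 5% damping ratio, as stated in the abstract
f1, f2 = 3.0, 12.0       # Hz, assumed control frequencies of the dam-reservoir model
w1, w2 = 2 * math.pi * f1, 2 * math.pi * f2

alpha = 2 * zeta * w1 * w2 / (w1 + w2)   # mass-proportional constant, 1/s
beta = 2 * zeta / (w1 + w2)              # stiffness-proportional constant, s
print(f"alpha = {alpha:.3f} 1/s, beta = {beta:.5f} s")
```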
Procedia PDF Downloads 267
1266 Voltage Stability Margin-Based Approach for Placement of Distributed Generators in Power Systems
Authors: Oludamilare Bode Adewuyi, Yanxia Sun, Isaiah Gbadegesin Adebayo
Abstract:
Voltage stability analysis is crucial to the reliable and economic operation of power systems. The power systems of developing nations are more susceptible to failures due to continuously increasing load demand, which is not matched by increased generation and efficient transmission infrastructure. Thus, most power systems are heavily stressed, and the planning of extra generation from distributed generation sources needs to be done efficiently so as to ensure the security of the power system. Some voltage stability index-based approaches for DG siting have been reported in the literature. However, most of the existing voltage stability indices, though sufficient, are found to be inaccurate, especially for overloaded power systems. In this paper, the performance of a relatively different approach using a line voltage stability margin indicator, which has proven to have better accuracy, is presented and compared with a conventional line voltage stability index for DG siting using the Nigerian 28-bus system. The critical boundary index (CBI) for voltage stability margin estimation was deployed to identify suitable locations for DG placement, and its performance was compared with DG placement using the novel line stability index (NLSI) approach. From the simulation results, both CBI and NLSI agreed largely on suitable locations for DG on the test system; while CBI identified bus 18 as the most suitable at system overload, NLSI identified bus 8 as the most suitable. Considering the effect of the DG placement at the selected buses on the voltage magnitude profile, the results show that the DG placed on bus 18, identified by CBI, improved the performance of the power system better.
Keywords: voltage stability analysis, voltage collapse, voltage stability index, distributed generation
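As an illustration of how a line-based stability indicator of this kind is evaluated, the sketch below uses the widely cited fast voltage stability index (FVSI), not the CBI or NLSI formulations of the paper; the line data are assumed per-unit values, not the Nigerian 28-bus parameters:

```python
import math

# Fast voltage stability index for a line i-j: FVSI = 4 * Z**2 * Qj / (Vi**2 * X).
R, X = 0.02, 0.08      # p.u. line resistance and reactance (assumed)
Z = math.hypot(R, X)   # line impedance magnitude
Vi = 1.0               # p.u. sending-end voltage (assumed)
Qj = 0.45              # p.u. reactive power at the receiving end (assumed)

fvsi = 4.0 * Z**2 * Qj / (Vi**2 * X)
print(f"FVSI = {fvsi:.3f}  (values approaching 1 indicate proximity to voltage collapse)")
```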
Procedia PDF Downloads 93
1265 Action Potential of Lateral Geniculate Neurons at Low Threshold Currents: Simulation Study
Authors: Faris Tarlochan, Siva Mahesh Tangutooru
Abstract:
The lateral geniculate nucleus (LGN) is the relay center in the visual pathway, as it receives most of its input information from retinal ganglion cells (RGCs) and sends it to the visual cortex. Low-threshold calcium currents (IT) at the membrane are the unique indicator used to characterize the firing functionality that LGN neurons gain from RGC input. The morphologies of the LGN neurons were developed according to LGN functional requirements such as the functional mapping of RGCs to the LGN. During neurological disorders like glaucoma, the mapping between RGCs and the LGN is disconnected, and hence stimulating the LGN electrically using deep brain electrodes can restore its functionality. A computational model was developed to simulate LGN neurons with three predominant morphologies, each representing a different functional mapping of RGCs to the LGN. The firing of action potentials in an LGN neuron due to IT was characterized by varying the stimulation parameters, morphological parameters and orientation. A wide range of stimulation parameters (stimulus amplitude, duration and frequency) represents the various strengths of the electrical stimulation, with different morphological parameters (soma size, dendrite size and structure). The orientation (0-180°) of the LGN neuron with respect to the stimulating electrode represents the angle at which extracellular deep brain stimulation towards the LGN neuron is performed. A reduced dendrite structure was used in the model, obtained with the Bush–Sejnowski algorithm to decrease the computational time while conserving the input resistance and total surface area. The major finding is that an input potential of 0.4 V is required to produce an action potential in an LGN neuron placed at a distance of 100 µm from the electrode. From this study, it can be concluded that the neuroprostheses under design would need to be capable of inducing at least 0.4 V to produce action potentials in the LGN.
Keywords: lateral geniculate nucleus, visual cortex, finite element, glaucoma, neuroprostheses
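The low-threshold (T-type) calcium current referred to above is commonly written as I_T = g_T·m²·h·(V - E_Ca). A minimal sketch with illustrative steady-state activation and inactivation curves; the conductance, reversal potential, and gating parameters are assumed values, not those of the LGN model described:

```python
import numpy as np

# Low-threshold (T-type) calcium current: I_T = g_T * m**2 * h * (V - E_Ca).
g_T = 0.002      # S/cm^2, assumed maximal conductance
E_Ca = 120.0     # mV, assumed calcium reversal potential

def m_inf(V):    # steady-state activation (assumed half-activation and slope)
    return 1.0 / (1.0 + np.exp(-(V + 57.0) / 6.2))

def h_inf(V):    # steady-state inactivation (assumed half-inactivation and slope)
    return 1.0 / (1.0 + np.exp((V + 81.0) / 4.0))

V = np.linspace(-90.0, 0.0, 10)                       # mV
I_T = g_T * m_inf(V) ** 2 * h_inf(V) * (V - E_Ca)     # mA/cm^2, negative = inward
for v, i in zip(V, I_T):
    print(f"V = {v:6.1f} mV   I_T = {i: .5f} mA/cm^2")
```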
Procedia PDF Downloads 277
1264 Injection Practices among Private Medical Practitioners of Karachi Pakistan
Authors: Mohammad Tahir Yousafzai, Nighat Nisar, Rehana Khalil
Abstract:
The aim of this study is to assess the practices leading to sharp injuries and the associated factors among medical practitioners in slum areas of Karachi, Pakistan. A cross-sectional study was conducted in slum areas of Landhi Town, Karachi. All medical practitioners (317) running private clinics in these areas were asked to participate in the study. Data were collected using a self-administered, pre-tested, structured questionnaire. The frequency, with percentage and 95% confidence interval, of at least one sharp injury (SI) in the last year was calculated. The factors leading to sharp injuries were assessed using multiple logistic regression. About 80% of the private medical practitioners consented to participate. Among these, 87% were males and 13% were females. The mean age was 38±11 years and the mean work experience was 12±9 years. The frequency of at least one sharp injury in the last year was 27% (95% CI: 22.2-32). Almost 47% of sharp injuries were caused by needle recapping. Less work experience, less than 14 years of schooling, seeing more than 20 patients per day, administering more than 30 injections per day, reuse of syringes, and needle recapping after use were significantly associated with sharp injuries. Injection practices were found to be inadequate among private medical practitioners in the slum areas of Karachi, and the frequency of sharp injuries was found to be high in these areas. There is a risk of occupational transmission of blood-borne infections among medical practitioners, warranting an urgent need to launch awareness and training on standard precautions for private medical practitioners in the slum areas of Karachi.
Keywords: injection practices, private practitioners, sharp injuries, blood borne infections
Procedia PDF Downloads 421
1263 Tritium Activities in Romania, Potential Support for Development of ITER Project
Authors: Gheorghe Ionita, Sebastian Brad, Ioan Stefanescu
Abstract:
In any fusion device, tritium plays a key role as a fuel component and, due to its radioactivity, is easily incorporated as tritiated water (HTO). In the ITER project, to reduce the constant potential for tritium emission, a Water Detritiation System (WDS) and an Isotopic Separation System (ISS) will be implemented. At the same time, during the operation of fission CANDU reactors, the tritium content increases in the heavy water used as moderator and cooling agent (due to neutron activation), and it has to be reduced, too. In Romania, at the National Institute for Cryogenics and Isotopic Technologies (ICIT Rm-Valcea), there is an Experimental Pilot Plant for Tritium Removal (Exp. TRF), with the aim of providing technical data on the design and operation of an industrial plant for the detritiation of heavy water from the CANDU reactors of the Cernavoda NPP. The selected technology is based on the catalyzed isotopic exchange process between deuterium and liquid water (LPCE) combined with the cryogenic distillation (CD) process. This paper presents an updated review of activities in the field carried out in Romania after the year 2000, and in particular those related to the development and operation of the Tritium Removal Experimental Pilot Plant. A comparison between the experimental pilot plant and the industrial plant to be implemented at the Cernavoda NPP is also presented. The similarities between the experimental pilot plant at ICIT Rm-Valcea and the water detritiation and isotopic separation systems of ITER are also presented and discussed. Many aspects or 'open issues' relating to the WDS and ISS could be checked and clarified by a special research program developed within ExpTRF. Through these achievements and results, ICIT Rm-Valcea has proved its expertise and capability concerning tritium management; therefore, its competence may be used within the ITER project.
Keywords: ITER project, heavy water detritiation, tritium removal, isotopic exchange
Procedia PDF Downloads 413
1262 Sphingosomes: Potential Anti-Cancer Vectors for the Delivery of Doxorubicin
Authors: Brajesh Tiwari, Yuvraj Dangi, Abhishek Jain, Ashok Jain
Abstract:
The purpose of the investigation was to evaluate the potential of sphingosomes as nanoscale drug delivery units for site-specific delivery of anti-cancer agents. Doxorubicin hydrochloride (DOX) was selected as a model anti-cancer agent. Sphingosomes were prepared, loaded with DOX, and optimized for size and drug loading. The formulations were characterized by Malvern zeta-sizer and transmission electron microscopy (TEM) studies. The sphingosomal formulations were further evaluated in an in-vitro drug release study under various pH profiles. The in-vitro drug release study showed an initial rapid release of the drug followed by a slow, controlled release. In vivo studies of the optimized formulations and the free drug were performed on albino rats to compare drug plasma concentrations. The in vivo study revealed that the prepared system gave DOX an enhanced circulation time, a longer half-life and lower elimination rate kinetics compared to the free drug. Further, it can be inferred that the formulation would selectively enter the highly porous mass of tumor cells and at the same time spare normal tissues. To summarize, the use of sphingosomes as carriers of anti-cancer drugs may prove to be a fascinating approach that selectively localizes in the tumor mass, increasing the therapeutic margin of safety while reducing the side effects associated with anti-cancer agents.
Keywords: sphingosomes, anti-cancer, doxorubicin, formulation
Procedia PDF Downloads 303
1261 Ensuring Quality in DevOps Culture
Authors: Sagar Jitendra Mahendrakar
Abstract:
Integrating quality assurance (QA) practices into DevOps culture has become increasingly important in modern software development environments. The seamless integration of development and operations teams in DevOps, characterized by collaboration, automation and continuous feedback, enables rapid and reliable software delivery. In this context, quality assurance plays a key role in ensuring that software products meet the highest standards of quality, performance and reliability throughout the development life cycle. This brief explores key principles, challenges, and best practices related to quality assurance in a DevOps culture. It emphasizes the importance of embedding quality throughout the development process, with quality control integrated into every step of the DevOps pipeline. Automation is the cornerstone of DevOps quality assurance, enabling continuous testing, integration and deployment and providing rapid feedback for early problem identification and resolution. In addition, the summary addresses the cultural and organizational challenges of implementing quality assurance in DevOps, emphasizing the need to foster collaboration, break down silos, and promote a culture of continuous improvement, along with the importance of toolchain integration and skills development to support effective QA practices in DevOps environments. Overall, this work sits at the intersection of QA and DevOps culture, providing insights into how organizations can use DevOps principles to improve software quality, accelerate delivery, and meet the changing demands of today's dynamic software landscape.
Keywords: quality engineer, devops, automation, tool
Procedia PDF Downloads 58