Search results for: comprehensive meta-analysis
1143 Corporate Governance and Corporate Social Responsibility: Research on the Interconnection of Both Concepts and Its Impact on Non-Profit Organizations
Authors: Helene Eller
Abstract:
The aim of non-profit organizations (NPOs) is to provide services and goods for their clientele, with profit being a minor objective. With this definition as the basic purpose of doing business, it is obvious that the goal of an organisation is to serve several bottom lines, not only the financial one. This approach is underpinned by the non-distribution constraint, which means that NPOs are allowed to make profits to a certain extent but not to distribute them. The advantage is that there are no individual shareholders with a stake in the prosperity of the organisation: there is no pie to divide. The profits gained remain within the organisation and are reinvested in purposeful projects. Good governance is mandatory to support the aims of NPOs. When looking for a measure of good governance, the principles of corporate governance (CG) come to mind. The purpose of CG is direction and control, and in the field of NPOs, CG is broadened to consider the relationship to all important stakeholders who have an impact on the organisation. The recognition of parties beyond the shareholder is the link to corporate social responsibility (CSR). It supports a broader view of the bottom line: it is no longer enough to know how profits are used, but rather how they are made. Moreover, CSR addresses the responsibility of organisations for their impact on society. When transferring the concept of CSR to the non-profit area, it becomes obvious that CSR, with its distinctive features, matches the aims of NPOs. As a consequence, NPOs that apply CG also apply CSR to a certain extent. The research is designed as a comprehensive theoretical and empirical analysis. First, the investigation focuses on the theoretical basis of both concepts. Second, the similarities and differences are outlined, from which the interconnection of the two concepts emerges. The contribution of this research is manifold: the interconnection of both concepts when applied to NPOs has not yet received attention in the literature. CSR and governance as an integrated concept offer many advantages to NPOs compared with for-profit organisations, which are under constant pressure to justify the impact they have on society; NPOs, by contrast, integrate economic and social aspects as their starting point. For NPOs, CG is not a mere compliance concept but rather an enhanced concept integrating many aspects of CSR. There is no "either-or" between the concepts for NPOs.
Keywords: business ethics, corporate governance, corporate social responsibility, non-profit organisations
1142 A Theoretical Study on Pain Assessment through Human Facial Expression
Authors: Mrinal Kanti Bhowmik, Debanjana Debnath Jr., Debotosh Bhattacharjee
Abstract:
Facial expressions are an undeniable part of human behaviour. They are a significant channel of human communication and can be used to extract emotional features accurately. People in pain often show variations in facial expression that are readily observable to others, and a core set of facial actions is likely to occur or increase in intensity when people are in pain. To describe these changes in facial appearance, the Facial Action Coding System (FACS) was pioneered by Ekman and Friesen for human observers. According to Prkachin and Solomon, a subset of such actions carries the bulk of the information about pain; on this basis, the Prkachin and Solomon Pain Intensity (PSPI) metric was defined. It is therefore important to note that facial expressions, as a behavioral channel of communication, provide an important opening into the issues of the non-verbal communication of pain. People express their pain in many ways, and this pain behavior is the basis on which most inferences about pain are drawn in clinical and research settings. Hence, to understand the roles of different pain behaviors, it is essential to study their properties. For the past several years, studies have concentrated on the properties of one specific form of pain behavior, i.e., facial expression. This paper presents a comprehensive study of pain assessment approaches that model and estimate the intensity of pain a patient is suffering. It also reviews the historical background of different pain assessment techniques in the context of painful expressions. The approaches reviewed incorporate FACS from a psychological perspective together with a pain intensity score based on the PSPI metric. The paper offers an in-depth analysis of the different approaches used in pain estimation, presents the observations drawn from each technique, and includes a brief study of the features distinguishing real from fake pain. The necessity of this study lies in the emerging field of facial pain assessment in clinical settings.
Keywords: facial action coding system (FACS), pain, pain behavior, Prkachin and Solomon pain intensity (PSPI)
1141 Online Versus Face-to-Face: How Do Video Consultations Change the Doctor-Patient Interaction?
Authors: Markus Feufel, Friederike Kendel, Caren Hilger, Selamawit Woldai
Abstract:
Since the coronavirus pandemic, the use of video consultation has increased remarkably. For vulnerable groups such as oncological patients, the advantages seem obvious. But how does video consultation change the doctor-patient relationship compared to face-to-face consultation? Which barriers may hinder the effective use of this consultation format in practice? We present first results from a mixed-methods field study, funded by the Federal Ministry of Health, which will provide the basis for a hands-on guide for both physicians and patients on how to improve the quality of video consultations. We use a quasi-experimental design to analyze qualitative and quantitative differences between face-to-face and video consultations based on video recordings of N = 64 actual counseling sessions (n = 32 for each consultation format). Data will be recorded from n = 32 gynecological and n = 32 urological cancer patients at two clinics. After the consultation, all patients will be asked to fill out a questionnaire about their consultation experience. For the quantitative analyses, the counseling sessions will be systematically compared in terms of verbal and nonverbal communication patterns; the relative frequencies of eye contact and of the information exchanged will be compared using χ²-tests. The validated questionnaire MAPPIN'Obsdyad will be used to assess the expression of shared decision-making parameters. In addition, semi-structured interviews will be conducted with n = 10 physicians and n = 10 patients experienced with video consultation, to be analyzed by qualitative content analysis. We will describe the methodological approach used to compare video and face-to-face consultations and present first evidence on how video consultations change the doctor-patient interaction. We will also outline possible barriers to video consultation and best practices for overcoming them. Based on the results, we will present and discuss recommendations for preparing and conducting high-quality video consultations from the perspective of both physicians and patients.
Keywords: video consultation, patient-doctor-relationship, digital applications, technical barriers
1140 Bioinformatics Approach to Identify Physicochemical and Structural Properties Associated with Successful Cell-free Protein Synthesis
Authors: Alexander A. Tokmakov
Abstract:
Cell-free protein synthesis is widely used to synthesize recombinant proteins. It allows genome-scale expression of various polypeptides under strictly controlled, uniform conditions. However, only a minor fraction of all proteins can be successfully expressed in the protein synthesis systems currently in use, and the factors determining expression success are poorly understood. A vast volume of data has now accumulated in cell-free expression databases, making possible comprehensive bioinformatics analysis and the identification of multiple features associated with successful cell-free expression. Here, we describe an approach aimed at identifying the multiple physicochemical and structural properties of amino acid sequences associated with protein solubility and aggregation, and we highlight the major correlations obtained using this approach. The developed method includes: categorical assessment of the protein expression data, calculation and prediction of multiple properties of the expressed amino acid sequences, correlation of the individual properties with the expression scores, and evaluation of the statistical significance of the observed correlations. Using this approach, we revealed a number of statistically significant correlations between the calculated and predicted features of protein sequences and their amenability to cell-free expression. Some of the features, such as protein pI, hydrophobicity, and the presence of signal sequences, relate mostly to protein solubility, whereas others, such as protein length, number of disulfide bonds, and content of secondary structure, affect mainly the expression propensity. We also demonstrate that the amenability of polypeptide sequences to cell-free expression correlates with the presence of multiple sites of post-translational modification. The correlations revealed in this study provide a plethora of important insights into protein folding and the rationalization of protein production. The developed bioinformatics approach can be of practical use for predicting expression success and optimizing cell-free protein synthesis.
Keywords: bioinformatics analysis, cell-free protein synthesis, expression success, optimization, recombinant proteins
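As an illustration of the correlation step, the following minimal Python sketch computes two of the named sequence properties (pI and GRAVY hydrophobicity) with Biopython and correlates them with a binary expression outcome. The sequences and success labels are invented stand-ins for cell-free expression database records, not data from the study.

```python
# A minimal sketch of the property-correlation step, assuming Biopython and
# SciPy are available. All records below are invented for illustration.
from Bio.SeqUtils.ProtParam import ProteinAnalysis
from scipy.stats import pointbiserialr

records = [  # (amino acid sequence, expression success: 1 = soluble, 0 = failed)
    ("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", 1),
    ("MKKLLPTAAAGLLLLAAQPAMAMKKLLPTAAAGLLLLAAQPAMA", 0),
    ("MSDNGPQNQRNAPRITFGGPSDSTGSNQNGERSGAR", 1),
    ("MFVFLVLLPLVSSQCVNLTTRTQLPPAYTNSFTRGVY", 0),
]

pi_vals, gravy_vals, labels = [], [], []
for seq, success in records:
    pa = ProteinAnalysis(seq)
    pi_vals.append(pa.isoelectric_point())  # protein pI
    gravy_vals.append(pa.gravy())           # mean hydrophobicity (GRAVY)
    labels.append(success)

# Correlate each calculated property with the binary expression score
for name, values in (("pI", pi_vals), ("GRAVY", gravy_vals)):
    r, p = pointbiserialr(labels, values)
    print(f"{name}: point-biserial r = {r:+.2f}, p = {p:.3f}")
```

In the full analysis, the same loop would run over many more calculated and predicted features, with the significance of each correlation evaluated across thousands of database records.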
1139 The Impact of Cryptocurrency Classification on Money Laundering: Analyzing the Preferences of Criminals for Stable Coins, Utility Coins, and Privacy Tokens
Authors: Mohamed Saad, Huda Ismail
Abstract:
The purpose of this research is to examine the impact of cryptocurrency classification on money laundering crimes and to analyze how the preferences of criminals differ according to the type of digital currency used. Specifically, we aim to explore the roles of stablecoins, utility coins, and privacy tokens in facilitating or hindering money laundering activities and to identify the key factors that influence criminals' choices among these cryptocurrencies. To achieve our research objectives, we used a dataset of the most highly traded cryptocurrencies (32 currencies) published on CoinMarketCap for 2022, conducted a comprehensive review of the existing literature on cryptocurrency and money laundering with a focus on stablecoins, utility coins, and privacy tokens, and performed several multivariate analyses. Our study reveals that the classification of a cryptocurrency plays a significant role in money laundering activities, as criminals tend to prefer certain types of digital currencies over others depending on their specific needs and goals. Specifically, we found that stablecoins are more commonly used in money laundering due to their relatively stable value and low volatility, which makes them less risky to hold and transfer. Utility coins, on the other hand, are less frequently used in money laundering due to their lack of anonymity and limited liquidity. Finally, privacy tokens, such as Monero and Zcash, are increasingly becoming a preferred choice among criminals due to their high degree of privacy and untraceability. In summary, our study highlights the importance of understanding the nuances of cryptocurrency classification in the context of money laundering and provides insights into the preferences of criminals in using digital currencies for illegal activities. Based on our findings, we recommend that policymakers address the potential misuse of cryptocurrencies for money laundering: by regulating stablecoins, strengthening cross-border cooperation, and fostering public-private partnerships, policymakers can help prevent and detect money laundering activities involving digital currencies.
Keywords: crime, cryptocurrency, money laundering, tokens
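The shape of such a multivariate comparison can be sketched as follows: group the coins by classification and test whether a laundering-relevant feature differs across groups. The class labels mirror the abstract, but every number below is a fabricated placeholder, not observed market data, and the analysis is reduced to a single one-way test purely for illustration.

```python
# A minimal sketch of a class-wise multivariate comparison, assuming pandas
# and SciPy. All feature values are fabricated placeholders.
import pandas as pd
from scipy.stats import f_oneway

coins = pd.DataFrame({
    "cls": ["stablecoin"] * 4 + ["utility"] * 4 + ["privacy"] * 4,
    "volatility":   [0.01, 0.02, 0.01, 0.02, 0.30, 0.25, 0.40, 0.35, 0.28, 0.33, 0.31, 0.27],
    "traceability": [0.90, 0.85, 0.90, 0.88, 0.80, 0.75, 0.85, 0.82, 0.10, 0.15, 0.05, 0.12],
})

# Feature profile per classification (stablecoins: low volatility; privacy tokens: low traceability)
print(coins.groupby("cls").mean(numeric_only=True))

# One-way test of whether a laundering-relevant feature differs across classes
groups = [g["volatility"].to_numpy() for _, g in coins.groupby("cls")]
stat, p = f_oneway(*groups)
print(f"volatility across classes: F = {stat:.1f}, p = {p:.4f}")
```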
1138 Hydrodynamics of Undulating Ribbon-fin and Its Application in Bionic Underwater Robot
Authors: Zhang Jun, Zhai Shucheng, Bai Yaqiang, Zhang Guoping
Abstract:
The Gymnarchus niloticus fish (GNF) generally cruises with high efficiency using undulating ribbon-fin propulsion while keeping its body straight. The swing amplitude of the GNF fin rays usually lies between 60° and 90°; in the normal state the amplitude is close to 90°, and only when hovering or swimming at very low speed is the amplitude smaller (about 60°). This provides inspiration for underwater robot design. In this paper, the unsteady flow of undulating ribbon-fin propulsion is numerically simulated using a dynamic grid technique, including a spring-based smoothing model and local grid remeshing, to accommodate the significant deformation of the fin surface; the swing amplitude of the fin rays reaches 85°. The numerical simulation method is validated by thrust experiments. The spatial vortex structure and its evolution with phase angle are analyzed. The propulsion mechanism is investigated through a comprehensive analysis of the hydrodynamics, the vortex structure, and the pressure distribution on the fin surface. The numerical results indicate that there are mainly three kinds of vortices: streamwise vortices, crescent vortices, and toroidal vortices. The streamwise vortices are the strongest of the three. Streamwise and crescent vortices alternate on the two sides of the mid-sagittal plane. Inside the crescent vortices the flow is fast, while outside it is slow. The crescent vortices mainly induce a high-speed axial jet, which produces the primary thrust; this is the hydrodynamic mechanism of undulating ribbon-fin propulsion. The streamwise vortices mainly induce a vertical jet, which generates the primary heave force. The effect on the hydrodynamics of the main geometric and kinematic parameters, including wavelength, amplitude, and advance coefficient, is investigated. A bionic underwater robot with bilateral undulating ribbon-fins was designed, and its navigation performance and maneuverability were measured.
Keywords: bionic propulsion, mobile robot, underwater robot, undulating ribbon-fins
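The ray kinematics described above amount to a travelling wave of swing angle along the fin, which the short sketch below makes concrete. Only the 85° amplitude comes from the abstract; the wavelength, frequency, and fin length are assumed illustrative values.

```python
# A minimal sketch of undulating ribbon-fin kinematics: each fin ray swings
# sinusoidally with a phase lag along the fin, forming a travelling wave.
# theta_max comes from the abstract; the other parameters are assumptions.
import numpy as np

theta_max = np.deg2rad(85.0)   # maximum ray swing amplitude (85 deg, from the paper)
wavelength = 0.2               # wave length along the fin (m), assumed
frequency = 2.0                # undulation frequency (Hz), assumed
fin_length = 0.4               # fin length (m), assumed

def ray_angle(x, t):
    """Swing angle of the fin ray at position x (m) along the fin at time t (s)."""
    return theta_max * np.sin(2 * np.pi * (frequency * t - x / wavelength))

x = np.linspace(0.0, fin_length, 9)      # nine rays along the fin
for t in (0.0, 0.125, 0.25):
    angles = np.rad2deg(ray_angle(x, t))
    print(f"t = {t:5.3f} s:", np.round(angles, 1))
```

In a simulation setup of this kind, these prescribed ray angles drive the fin-surface deformation at each time step, and the dynamic grid (spring smoothing plus local remeshing) absorbs the resulting mesh distortion.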
1137 The Importance and Feasibility of Hospital Interventions for Patient Aggression and Violence Against Physicians in China: A Delphi Study
Authors: Yuhan Wu, CTB (Kees) Ahaus, Martina Buljac-Samardzic
Abstract:
Patient aggression and violence are a complex occupational hazard for physicians working in hospitals and can have multiple severe negative effects on physicians and hospitals. Although a range of interventions is applied in the healthcare sector in various countries, China lacks a comprehensive set of interventions at the hospital level in this area. Because of cultural differences, this study therefore investigates, by means of a Delphi study, whether international interventions are important and feasible in the Chinese cultural context. Based on a literature search, a list of 47 hospital interventions to prevent and manage patient aggression and violence was constructed, covering eight categories: hospital environment design, access and entrance, staffing and work practice, training and education, leadership and culture, support, during/after-the-event actions, and hospital policy. The list of interventions will be refined, extended, and brought back to the experts during a three-round Delphi study. The panel consists of 17 Chinese experts, including physicians who have experienced patient aggression and violence, hospital management team members, scientists working in this research area, and policymakers in the healthcare sector. In each round, the experts receive the candidate interventions with the instruction to rate, at the same time, the importance and the feasibility of each intervention for preventing and managing patient violence and aggression in Chinese hospitals. Interventions are included or excluded based on the importance score: an intervention is included after a round if >80% of the experts judge it important or very important, and excluded if >50% judge it not or only moderately important. The three-round Delphi study will yield a list of included interventions and assess which of the eight categories of interventions are considered important. It is expected that this study will bring new ideas and inspiration to Chinese hospitals in the prevention and management of patient aggression and violence.
Keywords: patient aggression and violence, hospital interventions, feasibility, importance
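The stated inclusion rule can be expressed compactly. In the sketch below, the 5-point rating scale and the sample panel ratings are assumptions for illustration; only the >80% / >50% thresholds and the 17-expert panel size come from the abstract.

```python
# A minimal sketch of the Delphi decision rule, assuming a 5-point importance
# scale (1 = not important ... 5 = very important). Ratings are invented.
def classify_intervention(ratings):
    n = len(ratings)
    share_important = sum(r >= 4 for r in ratings) / n    # important or very important
    share_unimportant = sum(r <= 3 for r in ratings) / n  # not or moderately important
    if share_important > 0.80:
        return "include"
    if share_unimportant > 0.50:
        return "exclude"
    return "carry to next round"

panel = [5, 5, 4, 4, 5, 4, 4, 5, 4, 5, 4, 4, 5, 4, 3, 4, 5]  # 17 experts
print(classify_intervention(panel))  # -> include (16/17 = 94% rate it >= 4)
```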
1136 American Sign Language Recognition System
Authors: Rishabh Nagpal, Riya Uchagaonkar, Venkata Naga Narasimha Ashish Mernedi, Ahmed Hambaba
Abstract:
The rapid evolution of technology in the communication sector continually seeks to bridge the gap between different communities, notably between the deaf community and the hearing world. This project develops a comprehensive American Sign Language (ASL) recognition system, leveraging the advanced capabilities of convolutional neural networks (CNNs) and vision transformers (ViTs) to interpret and translate ASL in real time. The primary objective of this system is to provide an effective communication tool that enables seamless interaction through accurate sign language interpretation. The architecture of the proposed system integrates dual networks: VGG16 for precise spatial feature extraction and a vision transformer for contextual understanding of the sign language gestures. The system processes live input, extracting critical features through these neural network models and combining them to enhance gesture recognition accuracy. This integration facilitates a robust understanding of ASL by capturing both detailed nuances and broader gesture dynamics. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing diverse ASL signs, substantiating the potential of this technology in practical applications. Challenges such as enhancing the system's ability to operate in varied environmental conditions and further expanding the training dataset were identified and discussed. Future work will refine the model's adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced ASL recognition system and lays the groundwork for future innovations in assistive communication technologies.
Keywords: sign language, computer vision, vision transformer, VGG16, CNN
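A minimal PyTorch sketch of the dual-network idea is shown below: VGG16 convolutional features for spatial detail and a ViT embedding for context, concatenated before a classifier head. The fusion by concatenation, the layer sizes, and the 26-class output (one per letter) are illustrative assumptions; the abstract does not specify the exact architecture.

```python
# A minimal sketch of a dual VGG16 + ViT recognizer, assuming torchvision.
# Fusion strategy and head size are assumptions, not the paper's design.
import torch
import torch.nn as nn
from torchvision import models

class DualASLNet(nn.Module):
    def __init__(self, num_classes=26):
        super().__init__()
        vgg = models.vgg16(weights=None)
        self.spatial = vgg.features                      # conv feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)              # -> (B, 512, 1, 1)
        vit = models.vit_b_16(weights=None)
        vit.heads = nn.Identity()                        # keep the 768-d CLS embedding
        self.context = vit
        self.classifier = nn.Linear(512 + 768, num_classes)

    def forward(self, x):                                # x: (B, 3, 224, 224)
        spatial = self.pool(self.spatial(x)).flatten(1)  # (B, 512) spatial features
        context = self.context(x)                        # (B, 768) contextual features
        return self.classifier(torch.cat([spatial, context], dim=1))

model = DualASLNet()
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 26])
```

Concatenation is the simplest fusion choice; attention-based or gated fusion would be natural alternatives where one branch should dominate for certain gestures.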
1135 Establishment of a Decision Support Center for Managing Natural Hazard Consequences in Kuwait
Authors: Abdullah Alenezi, Mane Alsudrawi, Rafat Misak
Abstract:
Kuwait is faced with a potentially wide and harmful range of both natural and anthropogenic hazardous events, such as dust storms, floods, fires, nuclear accidents, earthquakes, oil spills, tsunamis, and other disasters. Because Kuwait can be highly vulnerable to these complex environmental risks, an up-to-date and in-depth understanding of their typology, genesis, and impact on Kuwaiti society is needed. Adequate anticipation and management of environmental crises further require a comprehensive decision support system that helps decision makers bridge the gap between (technical) risk understanding and public action. For that purpose, the Kuwait Institute for Scientific Research (KISR) intends to establish a decision support center for the management of environmental crises in Kuwait. The center will support policymakers, stakeholders, and national committees with technical information that helps them efficiently and effectively assess, monitor, and manage environmental disasters using decision support tools. These tools will build on state-of-the-art quantification and visualization techniques, such as remote sensing information, geographic information systems (GIS), simulation and prediction models, and early warning systems. The center is conceived as a central facility that will be designed, operated, and managed by KISR in coordination with the national authorities and decision makers of the country. Our vision is that by 2035 the center will be recognized as a leading national source of scientific advice on risk management in Kuwait and will build unity of effort among Kuwaiti institutions, government agencies, and public and private organizations through the provision and sharing of information. The project team now focuses on capacity building by upgrading KISR facilities, developing manpower, and building strong collaborations with international alliances.
Keywords: decision support, environment, hazard, Kuwait
1134 Consumer's Behavioral Responses to Corporate Social Responsibility Marketing: Mediating Impact of Customer Trust, Emotions, Brand Image, and Brand Attitude
Authors: Yasir Ali Soomro
Abstract:
Companies that demonstrate corporate social responsibility (CSR) are more likely to withstand downturns or crises because of the trust built with stakeholders. Many firms use CSR marketing to improve interactions with their various stakeholders, mainly consumers. Most previous research on CSR has focused on the impact of CSR on customer responses and behaviors toward a company. As online food ordering and grocery shopping have become commonplace, this study investigates the structural relationships among consumer positive emotions (CPE) and negative emotions (CNE), corporate reputation (CR), customer trust (CT), brand image (BI), and brand attitude (BA) and behavioral outcomes such as online purchase intention (OPI) and word of mouth (WOM) in retail grocery and food restaurant settings. The hierarchy of effects model is used as the theoretical and conceptual framework; it describes three stages of consumer behavior: (i) cognitive, (ii) affective, and (iii) conative. The study applies a quantitative method to test the hypotheses: a self-developed questionnaire with non-probability sampling is used to collect data from 500 consumers belonging to generations X, Y, and Z residing in KSA. The study contributes by providing empirical evidence to support the link between CSR and customer affective and conative experiences in Saudi Arabia. Its theoretical contribution is an empirically tested comprehensive model in which CPE, CNE, CR, CT, BI, and BA act as mediating variables between perceived CSR and online purchase intention (OPI) and word of mouth (WOM). Further, the study adds to the CSR literature on how emotional and psychological processes mediate these effects, especially in the Middle Eastern context. The proposed study will also explain the direct and indirect effects of perceived CSR marketing initiatives on customer behavioral responses.
Keywords: corporate social responsibility, corporate reputation, consumer emotions, loyalty, online purchase intention, word-of-mouth, structural equation modeling
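The mediation logic at the core of the model can be sketched with a single mediator. The full study specifies a structural equation model with six mediators and survey data; the simulated data, effect sizes, and OLS-based indirect effect below are therefore only an illustration of the idea, not the study's estimation.

```python
# A minimal sketch of mediation: perceived CSR -> customer trust -> purchase
# intention, estimated with two OLS regressions. All data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
csr = rng.normal(size=n)                                       # perceived CSR
trust = 0.6 * csr + rng.normal(scale=0.8, size=n)              # mediator: customer trust
opi = 0.5 * trust + 0.1 * csr + rng.normal(scale=0.8, size=n)  # online purchase intention

# Path a: CSR -> trust
a = sm.OLS(trust, sm.add_constant(csr)).fit().params[1]
# Path b: trust -> intention, controlling for CSR
X = sm.add_constant(np.column_stack([csr, trust]))
b = sm.OLS(opi, X).fit().params[2]

print(f"indirect (mediated) effect of CSR on purchase intention: {a * b:.3f}")
```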
1133 Development of an Atmospheric Radioxenon Detection System for Nuclear Explosion Monitoring
Authors: V. Thomas, O. Delaune, W. Hennig, S. Hoover
Abstract:
Measurement of the radioactive isotopes of atmospheric xenon is used to detect, locate, and identify confined nuclear tests under the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In this context, the French Alternative Energies and Atomic Energy Commission (CEA) has developed a fixed system, the SPALAX process, to continuously measure the concentration of these fission products. During atmospheric transport, the radioactive xenon undergoes significant dilution between the source point and the measurement station; given the distances between the fixed stations distributed over the globe, the typical activity concentrations measured are near 1 mBq m⁻³. To avoid the constraints induced by atmospheric dilution, the development of a mobile detection system is in progress; this system will allow on-site measurements in order to confirm or refute a suspicious measurement detected by a fixed station. Furthermore, this system will use the beta/gamma coincidence measurement technique to drastically reduce the environmental background that masks such low activities. The detector prototype consists of a gas cell surrounded by two large silicon wafers, coupled with two square NaI(Tl) detectors. The gas cell has a sample volume of 30 cm³, and the silicon wafers are 500 µm thick with an active surface area of 3600 mm². To minimize leakage current, each wafer has been segmented into four independent silicon pixels. The cell is sandwiched between two low-background NaI(Tl) detectors (70×70×40 mm³ crystals). The expected minimum detectable concentration (MDC) for each radioxenon isotope is on the order of 1-10 mBq m⁻³. Three 4-channel digital acquisition modules (Pixie-NET) are used to process all the signals, with time synchronization ensured by a dedicated PTP network using the IEEE 1588 Precision Time Protocol. We present this system from its simulation through to laboratory tests.
Keywords: beta/gamma coincidence technique, low level measurement, radioxenon, silicon pixels
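The beta/gamma coincidence screening reduces, at its core, to timestamp matching between the silicon and NaI(Tl) channels. The sketch below shows that step; the hit times and the 1 µs window are illustrative assumptions, not parameters of the instrument.

```python
# A minimal sketch of beta/gamma coincidence screening: keep only events whose
# beta (silicon) and gamma (NaI) timestamps fall within a short window.
# Timestamps and the window width are invented for illustration.
import numpy as np

beta_t = np.array([1.002, 3.517, 7.940, 12.303])          # silicon hit times (s)
gamma_t = np.array([1.0020004, 5.221, 7.9400009, 12.9])   # NaI hit times (s), sorted
window = 1e-6                                             # coincidence window (s), assumed

# For each beta hit, test the nearest gamma hits against the window
idx = np.searchsorted(gamma_t, beta_t)
pairs = []
for i, tb in zip(idx, beta_t):
    for j in (i - 1, i):
        if 0 <= j < len(gamma_t) and abs(gamma_t[j] - tb) <= window:
            pairs.append((tb, gamma_t[j]))

print(f"{len(pairs)} coincident beta/gamma pairs:", pairs)
```

In the actual system, the IEEE 1588 synchronization of the Pixie-NET modules is what makes timestamps from separate acquisition channels directly comparable in this way.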
1132 Investigation of Online Child Sexual Abuse: An Account of Covert Police Operations Across the Globe
Authors: Shivalaxmi Arumugham
Abstract:
Child sexual abuse (CSA) has taken several forms, particularly with the advent of internet technologies that give pedophiles anonymous and affordable access to their targets. To combat CSA, which has far-reaching consequences for the physical and psychological health of victims, a special act, the Protection of Children from Sexual Offences (POCSO) Act, was formulated alongside the existing laws. With its latest amendment, in 2019, criminalizing various online activities involving child pornography, also known as child sexual abuse material, tremendous pressure now rests on law enforcement to identify offenders online. Effective investigation of CSA cases helps not only to detect perpetrators but also to prevent the re-victimization of children. Given the vulnerability of the child population and the ever-stealthier strategies offenders develop, it is high time that traditional investigation, which focuses on apprehending and prosecuting the offender after the fact, makes a paradigm shift toward proactive investigation that prevents victimization in the first place. One proactive policing technique involves understanding the psychology of offenders and children and operating undercover to catch criminals before a real child is victimized. Taking a fundamentally descriptive approach, the article first identifies the multitude of issues associated with the investigation of child sexual abuse cases as currently practised in India. It then contextualizes the various covert operations carried out by law enforcement agencies across the globe. To provide this comprehensive overview, the paper examines various reports, websites, guidelines, protocols, judicial pronouncements, and research articles. Finally, the paper presents the challenges and ethical issues to be considered before undertaking undercover operations, whether in the guise of a pedophile or of a child. The research hopes to contribute to the making of standard operating protocols for investigating officers and other relevant policymakers in this regard.
Keywords: child sexual abuse, cybercrime against children, covert police operations, investigation of CSA
1131 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models
Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti
Abstract:
In the realm of cricket, particularly within the context of the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study of IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-Nearest Neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experimentation involved the analysis of historical IPL match data encompassing diverse match and player statistics. Leveraging this data, we employed state-of-the-art machine learning techniques to train and evaluate the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling techniques to provide valuable insights for cricket enthusiasts, team management, and betting agencies. Additionally, this study serves as a benchmark for future research aimed at enhancing the accuracy and interpretability of IPL score prediction models.
Keywords: Indian Premier League (IPL), cricket, score prediction, machine learning, support vector machines (SVM), XGBoost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics
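The comparison setup can be sketched with scikit-learn on a subset of the named models (XGBoost is omitted to keep the sketch dependency-free). The "+/- 10 runs" accuracy criterion is taken from the abstract, while the synthetic features and data are invented stand-ins for the historical match statistics.

```python
# A minimal sketch of the score-prediction model comparison, assuming
# scikit-learn. Features and targets are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))   # e.g. run rate, wickets in hand, overs, venue effects...
y = 160 + 25 * X[:, 0] - 10 * X[:, 1] + rng.normal(scale=8, size=500)  # team score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Linear Regression": LinearRegression(),
    "KNN": KNeighborsRegressor(),
    "Random Forest": RandomForestRegressor(random_state=0),
    "SVM": SVR(),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    within_10 = np.mean(np.abs(pred - y_te) <= 10)   # the +/- 10 run criterion
    print(f"{name}: {within_10:.1%} of predictions within 10 runs")
```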
1130 Optimizing Sustainable Graphene Production: Extraction of Graphite from Spent Primary and Secondary Batteries for Advanced Material Synthesis
Authors: Pratima Kumari, Sukha Ranjan Samadder
Abstract:
This research aims to contribute to the sustainable production of graphene by exploring the extraction of graphite from spent primary and secondary batteries. The increasing demand for graphene, a versatile and high-performance material, necessitates environmentally friendly methods for its synthesis. The process follows a well-planned methodology, beginning with the gathering and categorization of batteries, followed by disassembly and careful removal of graphite from the anode structures. The use of environmentally friendly solvents and mechanical techniques ensures an efficient and eco-friendly extraction of the graphite. Advanced approaches such as the modified Hummers' method and a chemical reduction process are utilized for the synthesis of graphene materials, with a focus on optimizing the process parameters. Various analytical techniques, including Fourier-transform infrared spectroscopy, X-ray diffraction, scanning electron microscopy, thermogravimetric analysis, and Raman spectroscopy, were employed to validate the quality and structure of the produced graphene. The major findings of this study reveal the successful implementation of the methodology, leading to high-quality graphene suitable for advanced material applications; thorough characterization using these techniques validates the structural integrity and purity of the graphene. The economic viability of the process is demonstrated through a comprehensive economic analysis, highlighting the potential for large-scale production. This research contributes to the field of sustainable graphene production by offering a systematic methodology that efficiently transforms spent batteries into valuable graphene resources. The findings not only showcase the potential for upcycling electronic waste but also address the pressing need for environmentally conscious processes in advanced material synthesis.
Keywords: spent primary batteries, spent secondary batteries, graphite extraction, advanced material synthesis, circular economy approach
1129 The Lived Experiences of South African Female Offenders and the Possible Links to Recidivism Due to Their Exclusion from Educational Rehabilitation Programmes
Authors: Jessica Leigh Thornton
Abstract:
The South African Constitution outlines provisions for every detainee and sentenced prisoner in relation to the human rights recognized in the country since 1994, but prisons across the country have yet to meet many of these criteria. Consequently, prisoners' day-to-day lives are marked by an extreme lack of privacy, high rates of infection, poor nutrition, and deleterious living conditions, which steadily erode their mental and physical capacities rather than rehabilitating them so that they can effectively reintegrate into society. Moreover, policy reform, advocacy, security, and rehabilitation programs continue to be based on research and theories developed to explain the experiences of men, while female offenders are treated as a "special category" of inmates. Yet the experiences of women and their pathways to incarceration are remarkably different from those of male offenders. Consequently, little is known about the profile, nature, contributing factors, and experiences of female offenders, which has impeded a comprehensive and integrated understanding of female criminality. The number of women in correctional centers globally has more than doubled over the past fifteen years, although these increases vary from prison to prison and country to country. Female offenders have nevertheless largely been ignored in research, even though their minority status is not peculiar to South Africa, where the number of incarcerated women has increased by 68% within the decade. Within South Africa, minimal studies have been conducted on the gendered experience of offenders. While some studies have explored the pathways to female offending, gender-sensitive correctional programming that responds to women's needs has been overlooked. This often leads to the neglect of female offenders' needs, not only in terms of program and service delivery to this minority group but also from a research perspective. In response, the aim of the proposed research is twofold: first, to explore the lived experiences and views of rehabilitation and reintegration of female offenders; second, to investigate the various pathways into and out of recidivism amongst female offenders with regard to their inclusion in educational rehabilitation.
Keywords: female incarceration, educational rehabilitation, exclusion, experiences of female offenders
1128 Single and Combined Effects of Diclofenac and Ibuprofen on Daphnia magna and Some Phytoplankton Species
Authors: Ramatu I. Sha’aba, Mathias A. Chia, Abdullahi B. Alhassan, Yisa A. Gana, Ibrahim M. Gadzama
Abstract:
Globally, diclofenac (DLC) and ibuprofen (IBU) are among the most prescribed drugs due to their antipyretic and analgesic properties. They are, however, highly toxic at elevated doses, acting through an already described oxidative stress pathway. As a result, there is rising concern about the ecological effects of analgesics on non-target organisms such as Daphnia magna and phytoplankton species. Phytoplankton is a crucial component of the aquatic ecosystem, serving as the primary producer at the base of the food chain; the increasing presence and levels of micropollutants such as these analgesics can disrupt its community structure, dynamics, and ecosystem functions. This study presents a comprehensive laboratory investigation of the physiology, antioxidant response, immobilization, and risk assessment associated with the effects of diclofenac and ibuprofen on D. magna and the phytoplankton community. Single exposures were tested at 27.16 µg/L (DLC) and 20.89 µg/L (IBU), and the combined exposure at 22.39 µg/L of DLC and IBU. The antioxidant response increased with increasing levels of stress; the strongest stressors for the organism were 1,000 µg/L of DLC and 10,000 µg/L of IBU. Peroxidase and glutathione-S-transferase activities were highest under the diclofenac + ibuprofen combination. The study showed 60% and 70% immobilization of the organism at 1,000 µg/L of DLC and IBU, respectively. The two drugs and their combination adversely impacted phytoplankton biomass with increasing exposure time; however, the combination produced greater adverse effects on physiological parameters and pigment content. The risk assessment yielded a risk quotient (RQ) of 8.41 and a toxic unit (TU) of 3.68 for diclofenac, and an RQ of 718.05 and a TU of 487.70 for ibuprofen. These findings demonstrate that current exposure concentrations of diclofenac and ibuprofen can immobilize D. magna, and they highlight the dangers of multiple drugs in the aquatic environment: in combination, they can have additive effects on the structure and functions of phytoplankton and are capable of immobilizing D. magna.
Keywords: algae, analgesic drug, daphnia magna, toxicity
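The reported RQ and TU figures follow from the standard definitions RQ = MEC / PNEC and TU = MEC / EC50. In the sketch below, the PNEC and EC50 are back-calculated illustrative values chosen so that the diclofenac numbers reproduce the reported RQ = 8.41 and TU = 3.68; they are not the study's measured inputs.

```python
# A minimal sketch of the risk-assessment arithmetic. PNEC and EC50 below are
# assumed (back-calculated) values, not measured data from the study.
def risk_quotient(mec, pnec):
    """Measured environmental concentration over predicted no-effect concentration."""
    return mec / pnec

def toxic_unit(mec, ec50):
    """Measured environmental concentration over median effective concentration."""
    return mec / ec50

mec_dlc = 27.16    # diclofenac exposure concentration used in the study (ug/L)
pnec_dlc = 3.23    # assumed PNEC (ug/L)
ec50_dlc = 7.38    # assumed EC50 for D. magna (ug/L)

print(f"diclofenac RQ = {risk_quotient(mec_dlc, pnec_dlc):.2f}")  # RQ > 1: high risk
print(f"diclofenac TU = {toxic_unit(mec_dlc, ec50_dlc):.2f}")
```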
1127 Corn Flakes Produced from Different Cultivars of Zea mays as a Functional Product
Authors: Milenko Košutić, Jelena Filipović, Zvonko Nježić
Abstract:
Extrusion is a thermal processing technology applied to improve the nutritional, hygienic, and physicochemical characteristics of raw materials. Overall, the extrusion process is an efficient method for producing a wide range of food products: it combines heat, pressure, and shear to transform raw materials into finished goods with the desired textures, shapes, and nutritional profiles. The quality of extruded products depends markedly on feed material composition, barrel temperature profile, feed moisture content, screw speed, and other extrusion system parameters. Given consumer expectations for snack foods, a high expansion index and low bulk density, in addition to a crunchy texture and uniform microstructure, are desired. This paper investigates the simultaneous effects of corn type (white, yellow, red, and black corn) and screw speed (350, 500, and 650 rpm) on the physical, technological, and functional properties of flake products. Black corn flour extruded at a screw speed of 350 rpm positively influenced the physical and technological characteristics, mineral composition, and antioxidant properties of the flake products, with the best total score of 0.59. Overall, the combination of Tukey's HSD test and principal component analysis (PCA) enables a comprehensive analysis of the observed corn products, allowing them to be characterized and distinguished. This research analyzes the influence of the different types of corn flour on the nutritive and sensory properties of the product (quality, texture, and color), as well as the acceptance of the new product by consumers in the territory of Novi Sad. The presented data indicate that corn flakes made from black corn flour at 350 rpm constitute a product with good physical-technological and functional properties, owing to a higher level of antioxidant activity.
Keywords: corn types, flakes product, nutritive quality, acceptability
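The named statistical pipeline, Tukey's HSD across cultivars followed by PCA over the product properties, can be sketched as below. The measurements are randomly generated stand-ins; only the four corn types and the general shape of the analysis come from the abstract.

```python
# A minimal sketch of Tukey's HSD plus PCA, assuming statsmodels, scikit-learn
# and pandas. All measurement values are synthetic stand-ins.
import numpy as np
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
corn = np.repeat(["white", "yellow", "red", "black"], 6)
antioxidant = np.concatenate([
    rng.normal(loc, 0.3, 6) for loc in (2.0, 2.4, 3.1, 4.2)  # black corn highest
])

# Pairwise cultivar comparisons on one response variable
print(pairwise_tukeyhsd(antioxidant, corn))

# PCA over several (here: three synthetic) measured product properties
props = pd.DataFrame({
    "antioxidant": antioxidant,
    "expansion": rng.normal(3.0, 0.2, 24),
    "hardness": rng.normal(45.0, 5.0, 24),
})
scores = PCA(n_components=2).fit_transform((props - props.mean()) / props.std())
print("first two PC scores of the first samples:\n", np.round(scores[:4], 2))
```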
1126 A Comparative Analysis of Conventional and Organic Dairy Supply Chains: Assessing Transport Costs and External Effects in Southern Sweden
Authors: Vivianne Aggestam
Abstract:
Purpose: Organic dairy products have steadily grown in consumer popularity in Sweden in recent years, generating more transport activity. The main aim of this study was to compare the transport costs and environmental emissions of organic and conventional dairy production in Sweden. The objective was to evaluate the differences in, and environmental impacts of, transport between the two production systems, allowing a more transparent understanding of the real impact of transport within the supply chain. Methods: A partial attributional life cycle assessment was conducted based on a comprehensive survey of Swedish farmers, dairies, and consumers regarding their transport needs and costs. Interviews addressed the farmers and dairies; consumers were targeted through an online survey. Results: The higher transport inputs of conventional dairy production arise mainly from feed and soil management at farm level. The regional organic milk brand showed a lower initial transport burden at farm level; after leaving the farm, however, it had equal or higher transport requirements. This was mainly due to the location of the dairy farm and shorter product expiry dates, which require more frequent retail deliveries. Organic consumers tend to use public transport more than private vehicles, while consumers using private vehicles for shopping trips primarily bought conventional products, for which price was the main deciding factor. Conclusions: Organic dairy products that emphasise their regional attributes do not ensure less transportation and may therefore not be a more "climate smart" option for the consumer. This suggests that the idea of localism needs to be analysed from a more systemic perspective. Fuel and regional feed efficiency can be improved further, mainly via the fuel type and the types of vehicles used for transport.
Keywords: supply chains, distribution, transportation, organic food production, conventional food production, agricultural fossil fuel use
1125 Operating Characteristics of Point-of-Care Ultrasound in Identifying Skin and Soft Tissue Abscesses in the Emergency Department
Authors: Sathyaseelan Subramaniam, Jacqueline Bober, Jennifer Chao, Shahriar Zehtabchi
Abstract:
Background: Emergency physicians frequently evaluate skin and soft tissue infections in order to differentiate abscess from cellulitis, which helps determine which patients will benefit from incision and drainage. Our objective was to determine the operating characteristics of point-of-care ultrasound (POCUS) compared to clinical examination in identifying abscesses in emergency department (ED) patients with features of skin and soft tissue infection. Methods: We performed a comprehensive search in the following databases: Medline, Web of Science, EMBASE, CINAHL, and the Cochrane Library. Trials were included if they compared the operating characteristics of POCUS with clinical examination in identifying skin and soft tissue abscesses. Trials that included patients with oropharyngeal abscesses or with abscesses requiring drainage in the operating room were excluded. The presence of an abscess was determined by pus drainage; no pus on incision, or resolution of symptoms without pus drainage at follow-up, determined the absence of an abscess. The quality of the included trials was assessed using GRADE criteria. The operating characteristics of POCUS are reported as sensitivity, specificity, positive likelihood ratio (LR+), and negative likelihood ratio (LR-) with the respective 95% confidence intervals (CI). Summary measures were calculated by generating a hierarchical summary receiver operating characteristic (HSROC) model. Results: Out of 3203 references identified, 5 observational studies with 615 patients in aggregate were included (2 adult and 3 pediatric). We rated the quality of 3 trials as low and 2 as very low. The operating characteristics of POCUS and clinical examination in identifying soft tissue abscesses are presented in the table. The HSROC model for POCUS revealed a sensitivity of 96% (95% CI = 89-98%), specificity of 79% (95% CI = 71-86%), LR+ of 4.6 (95% CI = 3.2-6.8), and LR- of 0.06 (95% CI = 0.02-0.2). Conclusion: Existing evidence indicates that POCUS is useful in identifying abscesses in ED patients with skin or soft tissue infections.
Keywords: abscess, point-of-care ultrasound, POCUS, skin and soft tissue infection
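The pooled likelihood ratios follow directly from the sensitivity and specificity via LR+ = sens / (1 - spec) and LR- = (1 - sens) / spec, as the short check below shows; the small discrepancy on LR- reflects rounding of the pooled inputs reported in the abstract.

```python
# A minimal check of the pooled operating characteristics reported above.
sens, spec = 0.96, 0.79        # pooled HSROC estimates for POCUS

lr_pos = sens / (1 - spec)     # LR+ = sens / (1 - spec)
lr_neg = (1 - sens) / spec     # LR- = (1 - sens) / spec
print(f"LR+ = {lr_pos:.1f}")   # 4.6, matching the reported value
print(f"LR- = {lr_neg:.2f}")   # 0.05, vs the reported 0.06 (rounding of inputs)
```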
1124 Uncertainty and Multifunctionality as Bridging Concepts from Socio-Ecological Resilience to Infrastructure Finance in Water Resource Decision Making
Authors: Anita Lazurko, Laszlo Pinter, Jeremy Richardson
Abstract:
Uncertain climate projections, multiple possible development futures, and a financing gap create challenges for water infrastructure decision making. In contrast to conventional predict-plan-act methods, an emerging decision paradigm that enables socio-ecological resilience supports decisions that are appropriate for uncertainty and that leverage social, ecological, and economic multifunctionality. Concurrently, water infrastructure project finance plays a powerful role in sustainable infrastructure development but remains disconnected from the discourse on socio-ecological resilience. At the time of the research, a project to transfer water from Lesotho to Botswana through South Africa in the Orange-Senqu River Basin was at the pre-feasibility stage. This case was analysed through documents and interviews to investigate how uncertainty and multifunctionality are conceptualised and considered in decisions for the resilience of water infrastructure, and to explore bridging concepts that might allow project finance to better enable socio-ecological resilience. Interviewees conceptualised uncertainty as risk, ambiguity, and ignorance, and multifunctionality as politically motivated shared benefits. Numerous efforts to adopt emerging decision methods that consider these concepts were in use but required compromises to accommodate the persistent conventional decision paradigm, though a range of future opportunities was identified. Bridging these findings to finance revealed opportunities to consider a more comprehensive scope of risk, to leverage risk mitigation measures, to diffuse risks and benefits over space, time, and diverse actor groups, and to clarify roles in achieving multiple objectives for resilience. In addition to insights into how multiple decision paradigms interact in real-world decision contexts, the research highlights untapped potential at the juncture between socio-ecological resilience and project finance.
Keywords: socio-ecological resilience, finance, multifunctionality, uncertainty
1123 A Comparative Study of the Proposed Models for the Components of the National Health Information System
Authors: M. Ahmadi, Sh. Damanabi, F. Sadoughi
Abstract:
A national health information system plays an important role in ensuring timely and reliable access to health information, which is essential for the strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, a national health information system improves the quality of the health data, information, and knowledge used to support decision making at all levels and in all areas of the health sector. Since fully identifying the components of this system seems necessary for better planning and for managing the factors that influence its performance, this study comparatively explores different perspectives on these components. Methods: This is a descriptive, comparative study. The study material comprises printed and electronic documents describing the components of a national health information system in three parts: input, process, and output. Information was gathered using library resources and internet searches, and the analysis is presented through comparative tables and qualitative description. Results: The findings showed three different perspectives on the components of a national health information system: the Lippeveld, Sauerborn, and Bodart model (2000), the Health Metrics Network (HMN) model of the World Health Organization (2008), and Gattini's model (2009). In the input section (resources and structure), all three models require components for management and leadership, planning and program design, supply of staff, software and hardware facilities, and equipment. In the process section, the three models emphasize the actions that ensure the quality of the health information system; in the output section, all but the Lippeveld model consider information products and the use and distribution of information as components of the national health information system. Conclusion: The results showed that all three models discuss the components of health information in the input section only briefly, and the Lippeveld model overlooks the components of a national health information system in the process and output sections. It therefore seems that the Health Metrics Network model presents the components of the health system most comprehensively across all three sections: input, process, and output.
Keywords: National Health Information System, components of the NHIS, Lippeveld Model
1122 Designing the Management Plan for Health Care (Medical) Wastes in the Cities of Semnan, Mahdishahr and Shahmirzad
Authors: Rasouli Divkalaee Zeinab, Kalteh Safa, Roudbari Aliakbar
Abstract:
Introduction: Medical waste can lead to the generation and transmission of many infectious and contagious diseases due to the presence of pathogenic agents, necessitating special management for the collection, decontamination, and final disposal of such products. This study aimed to design a centralized health care (medical) waste management program for the cities of Semnan, Mahdishahr, and Shahmirzad. Methods: This descriptive-analytical study was conducted over six months in the cities of Semnan, Mahdishahr, and Shahmirzad. The quantitative and qualitative characteristics of the generated waste were determined by taking samples from all medical waste production centers. The equipment, devices, and vehicles required for the separate collection of the waste from the production centers and for its subsequent decontamination were then estimated. Next, the investment costs, running costs, and working capital required for the collection, decontamination, and final disposal of the waste were determined. Finally, the fee payable for proper waste management by each category of medical waste-producing center was determined. Results: 1021 kilograms of medical waste are produced daily in the cities of Semnan, Mahdishahr, and Shahmirzad. It was estimated that a 1000-liter autoclave, a medical waste collection vehicle, four 60-liter bins, four 120-liter bins, and four 1200-liter bins would be required to implement the plan. The total annual medical waste management cost for Semnan city was estimated at 23,283,903,720 Iranian rials. Conclusion: The results showed that establishing a proper management system for the medical waste generated in the three studied cities will cost the medical centers between 334,280 and 1,253,715 Iranian rials in fees. The findings provide comprehensive data on medical waste from the point of generation to the landfill site, which is vital for both the government and the private sector.
Keywords: clinics, decontamination, management, medical waste
1121 Reduplication in Urdu-Hindi Nonsensical Words: An OT Analysis
Authors: Riaz Ahmed Mangrio
Abstract:
Reduplication in Urdu-Hindi affects all major word categories, particles, and even nonsensical words. It conveys a variety of meanings, including distribution, emphasis, iteration, and adjectival and adverbial senses. This study primarily discusses the reduplicative structures of nonsensical words in Urdu-Hindi and then briefly considers examples from other Indo-Aryan languages to introduce the debate regarding the same structures there. The goal of this study is to present counter-evidence against Keane (2005: 241), who claims that "the base in the cases of lexical and phrasal echo reduplication is always independently meaningful". Urdu-Hindi reduplication, however, derives meaningful compounds from nonsensical words, e.g. gũ mgũ (A) 'silent and confused' and d̪əb d̪əb-a (N) 'one's fear over others'. This calls for a comprehensive examination of whether and how the various structures form patterns of a base-reduplicant relationship, or whether they are merely sublexical items joining together to form a word of some grammatical category among content words. An interesting theoretical question arises within the Optimality Theory framework: in an OT analysis, is it necessary to identify one of the two constituents as the base and the other as the reduplicant? Or is it better to treat the whole as a pattern, and if so, how does this fit into an OT analysis? This may be an even more interesting theoretical question, and pursuing such questions can make an important contribution. In the case at hand, each of the two constituents is an independent nonsensical word, but their echo reduplication is nonetheless meaningful. This casts significant doubt upon Keane's (2005: 241) observation, drawn from examples of Hindi and Tamil reduplication, that the base in cases of lexical and phrasal echo reduplication is always independently meaningful. The debate becomes still more interesting when the triplication of nonsensical words in Urdu-Hindi, e.g. aẽ baẽ ʃaẽ (N) 'useless talk', is considered, which is equally important to discuss. This example challenges Harrison's (1973) claim that only monosyllabic verbs in their progressive forms reduplicate twice to yield triplication, which is not the case here. The study consists of a thorough descriptive analysis of the data for the purpose of documentation, followed by an OT analysis.
Keywords: reduplication, urdu-hindi, nonsensical, optimality theory
1120 Reinforcement Learning for Agile CNC Manufacturing: Optimizing Configurations and Sequencing
Authors: Huan Ting Liao
Abstract:
In a typical manufacturing environment, computer numerical control (CNC) machining is essential for automating production through precise, computer-controlled tool operations, significantly enhancing efficiency and ensuring consistent product quality. However, traditional CNC production lines often rely on manual loading and unloading, limiting operational efficiency and scalability. Although automated loading systems have been developed, they frequently lack sufficient intelligence and configuration efficiency, requiring extensive setup adjustments for different products and impacting overall productivity. This research addresses the job shop scheduling problem (JSSP) in CNC machining environments, aiming to minimize total completion time (makespan) and maximize CNC machine utilization. We propose a novel approach using reinforcement learning (RL), specifically the Q-learning algorithm, to optimize scheduling decisions. The study simulates the JSSP, incorporating robotic arm operations, machine processing times, and work-order demand allocation to determine optimal processing sequences. The Q-learning algorithm enhances machine utilization by dynamically balancing workloads across CNC machines, adapting to varying job demands and machine states. This approach offers robust solutions for complex manufacturing environments by automating the decision-making process for job assignment. Additionally, we evaluate various layout configurations to identify the most efficient setup. By integrating RL-based scheduling optimization with layout analysis, this research aims to provide a comprehensive solution for improving manufacturing efficiency and productivity in CNC-based job shops. The proposed method's adaptability and automation potential promise significant advances in tackling dynamic manufacturing challenges.
Keywords: job shop scheduling problem, reinforcement learning, operations sequence, layout optimization, q-learning
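At the heart of the approach is the tabular Q-learning update, Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]. The sketch below applies it to a toy dispatching environment; the two-machine setting, the state encoding as sorted machine loads, and the makespan-based reward are illustrative assumptions, not the paper's simulator.

```python
# A minimal sketch of Q-learning for job dispatch: the state is a coarse
# encoding of machine loads, an action assigns the next job to a machine,
# and the reward penalises growth of the makespan. Toy environment only.
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.9, 0.2
n_machines, jobs = 2, [3, 5, 2, 4, 6]          # processing times of queued jobs
Q = defaultdict(float)

for episode in range(2000):
    loads = [0] * n_machines
    for job in jobs:
        state = tuple(sorted(loads))
        if random.random() < epsilon:          # explore
            action = random.randrange(n_machines)
        else:                                  # exploit
            action = max(range(n_machines), key=lambda a: Q[(state, a)])
        before = max(loads)
        loads[action] += job
        reward = before - max(loads)           # <= 0; penalises makespan growth
        next_state = tuple(sorted(loads))
        best_next = max(Q[(next_state, a)] for a in range(n_machines))
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Greedy rollout with the learned values
loads = [0] * n_machines
for job in jobs:
    state = tuple(sorted(loads))
    action = max(range(n_machines), key=lambda a: Q[(state, a)])
    loads[action] += job
print("greedy makespan after training:", max(loads))  # best achievable here is 10
```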
1119 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.Air pollution is a serious danger to international wellbeing and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. 
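As a rough illustration of the averaging idea in this abstract, the following sketch trains the three named model families on synthetic data, averages their predicted probabilities, and estimates a mean-decrease-in-accuracy style importance via permutation. The data and feature set are stand-ins, not the Los Angeles datasets used in the study.
```python
# Minimal ensemble-averaging sketch with scikit-learn on synthetic data;
# the binary good/bad air-quality target is a hypothetical stand-in.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(n_estimators=200, random_state=0),
          MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                        random_state=0)]
for m in models:
    m.fit(X_tr, y_tr)

# Average the three models' predicted probabilities, then threshold at 0.5
proba = np.mean([m.predict_proba(X_te)[:, 1] for m in models], axis=0)
print("ensemble accuracy:", accuracy_score(y_te, proba > 0.5))

# Mean-decrease-in-accuracy style importance via permutation (on the forest)
imp = permutation_importance(models[1], X_te, y_te, n_repeats=10,
                             random_state=0)
print("top predictor index:", imp.importances_mean.argmax())
```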
Procedia PDF Downloads 1301118 Exploring the Design of Prospective Human Immunodeficiency Virus Type 1 Reverse Transcriptase Inhibitors through a Comprehensive Approach of Quantitative Structure Activity Relationship Study, Molecular Docking, and Molecular Dynamics Simulations
Authors: Mouna Baassi, Mohamed Moussaoui, Sanchaita Rajkhowa, Hatim Soufi, Said Belaaouad
Abstract:
The objective of this paper is to address the challenging task of targeting Human Immunodeficiency Virus type 1 Reverse Transcriptase (HIV-1 RT) in the treatment of AIDS. Reverse Transcriptase inhibitors (RTIs) have limitations due to the development of Reverse Transcriptase mutations that lead to treatment resistance. In this study, a combination of statistical analysis and bioinformatics tools was adopted to develop a mathematical model that relates the structure of compounds to their inhibitory activities against HIV-1 Reverse Transcriptase. Our approach was based on a series of compounds recognized for their HIV-1 RT enzymatic inhibitory activities. These compounds were designed in silico, with their descriptors computed using multiple tools. The most statistically promising model was chosen, and its applicability domain was ascertained. Furthermore, compounds exhibiting biological activity comparable to existing drugs were identified as potential inhibitors of HIV-1 RT. The compounds were then evaluated on their absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties and their adherence to Lipinski's rule of five. Molecular docking techniques were employed to examine the interaction between Reverse Transcriptase (wild type and mutant) and the ligands, including a known drug available on the market. Molecular dynamics simulations were also conducted to assess the stability of the RT-ligand complexes. Our results reveal some of the new compounds to be promising candidates for effectively inhibiting HIV-1 Reverse Transcriptase, matching the potency of the established drug; this warrants further experimental validation. Beyond its immediate results, this study provides a methodological foundation for future efforts to discover and design new inhibitors targeting HIV-1 Reverse Transcriptase.Keywords: QSAR, ADMET properties, molecular docking, molecular dynamics simulation, reverse transcriptase inhibitors, HIV type 1
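The rule-of-five screen mentioned in the abstract is straightforward to reproduce. Below is a minimal sketch using RDKit; the example SMILES string (paracetamol) is arbitrary and not one of the paper's candidate compounds.
```python
# Minimal Lipinski rule-of-five screen with RDKit; the test molecule is an
# arbitrary example, not one of the study's designed inhibitors.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_lipinski(smiles: str) -> bool:
    """Return True if the molecule violates at most one of Lipinski's rules."""
    mol = Chem.MolFromSmiles(smiles)
    violations = sum([
        Descriptors.MolWt(mol) > 500,        # molecular weight <= 500 Da
        Descriptors.MolLogP(mol) > 5,        # octanol-water logP <= 5
        Lipinski.NumHDonors(mol) > 5,        # H-bond donors <= 5
        Lipinski.NumHAcceptors(mol) > 10,    # H-bond acceptors <= 10
    ])
    return violations <= 1

print(passes_lipinski("CC(=O)Nc1ccc(O)cc1"))   # paracetamol -> True
```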
Procedia PDF Downloads 931117 Effects of Self-Management Programs on Blood Pressure Control, Self-Efficacy, Medication Adherence, and Body Mass Index among Older Adult Patients with Hypertension: Meta-Analysis of Randomized Controlled Trials
Authors: Van Truong Pham
Abstract:
Background: Self-management has been described as a potential strategy for blood pressure control in patients with hypertension. However, the effects of self-management interventions on blood pressure, self-efficacy, medication adherence, and body mass index (BMI) in older adults with hypertension have not been systematically evaluated. We evaluated the effects of self-management interventions on systolic blood pressure (SBP), diastolic blood pressure (DBP), self-efficacy, medication adherence, and BMI in hypertensive older adults. Methods: We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Searches in electronic databases, including CINAHL, Cochrane Library, Embase, Ovid-Medline, PubMed, Scopus, Web of Science, and other sources, were performed to include all relevant studies up to April 2019. Study selection, data extraction, and quality assessment were performed independently by two reviewers. We summarized intervention effects as Hedges' g values and 95% confidence intervals (CI) using a random-effects model. Data were analyzed using Comprehensive Meta-Analysis software 2.0. Results: Twelve randomized controlled trials met our inclusion criteria. The results revealed that self-management interventions significantly improved blood pressure control, self-efficacy, and medication adherence, whereas the effect of self-management on BMI was not significant in older adult patients with hypertension. The following Hedges' g (effect size) values were obtained: SBP, -0.34 (95% CI, -0.51 to -0.17, p < 0.001); DBP, -0.18 (95% CI, -0.30 to -0.05, p < 0.001); self-efficacy, 0.93 (95% CI, 0.50 to 1.36, p < 0.001); medication adherence, 1.72 (95% CI, 0.44 to 3.00, p = 0.008); and BMI, -0.57 (95% CI, -1.62 to 0.48, p = 0.286). Conclusions: Self-management interventions significantly improved blood pressure control, self-efficacy, and medication adherence. However, the effects of self-management on obesity control were not supported by the evidence. Healthcare providers should implement self-management interventions to strengthen patients' role in managing their own care.Keywords: self-management, meta-analysis, blood pressure control, self-efficacy, medication adherence, body mass index
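The reported Hedges' g values and random-effects pooling follow standard formulas, sketched below with fabricated trial summaries; the means, standard deviations, and sample sizes are placeholders, not data from the twelve included RCTs.
```python
# Hedges' g per trial plus a DerSimonian-Laird random-effects summary;
# all trial data below are invented for illustration.
import numpy as np

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Bias-corrected standardized mean difference and its variance."""
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)       # small-sample correction factor
    g = j * d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

# (mean, sd, n) of SBP change, intervention vs control, in three mock trials
trials = [((-12, 10, 40), (-8, 11, 42)),
          ((-15, 9, 55), (-9, 10, 53)),
          ((-10, 12, 35), (-7, 12, 36))]
g, v = zip(*(hedges_g(*a, *b) for a, b in trials))
g, v = np.array(g), np.array(v)

w = 1 / v                                  # fixed-effect weights
q = np.sum(w * (g - np.sum(w * g) / np.sum(w))**2)   # Cochran's Q
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(g) - 1)) / c)    # between-study variance
w_re = 1 / (v + tau2)                      # random-effects weights
pooled = np.sum(w_re * g) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled g = {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")
```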
Procedia PDF Downloads 1281116 Verification and Validation of Simulated Process Models of KALBR-SIM Training Simulator
Authors: T. Jayanthi, K. Velusamy, H. Seetha, S. A. V. Satya Murty
Abstract:
Verification and validation of simulated process models is the most important phase of the simulator life cycle. Evaluation of simulated process models based on verification and validation techniques checks the closeness of each component model (in a simulated network) to the real system/process with respect to dynamic behaviour under steady-state and transient conditions. The verification and validation process helps qualify the process simulator for its intended purpose, whether that is comprehensive training or design verification. In general, model verification is carried out by comparing simulated component characteristics with the original requirements to ensure that each step in the model development process completely incorporates all design requirements. Validation testing is performed by comparing the simulated process parameters to the actual plant process parameters, either in standalone mode or in integrated mode. A full-scope replica operator training simulator for the Prototype Fast Breeder Reactor (PFBR), named KALBR-SIM (Kalpakkam Breeder Reactor Simulator), has been developed at IGCAR, Kalpakkam, India; the main participants are engineers and experts from the modeling team and the process design and instrumentation and control design teams. This paper discusses the verification and validation process in general, the evaluation procedure adopted for the PFBR operator training simulator, the methodology followed for verifying the models, and the reference documents and standards used. It details the importance of internal validation by design experts, subsequent validation by an external agency of experts from various fields, model improvement by tuning based on experts' comments, final qualification of the simulator for its intended purpose, and the difficulties faced while coordinating the various activities.Keywords: Verification and Validation (V&V), Prototype Fast Breeder Reactor (PFBR), Kalpakkam Breeder Reactor Simulator (KALBR-SIM), steady state, transient state
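One elementary form of the validation testing described here, comparing a simulated transient against reference plant data within an acceptance band, can be sketched as follows. The signal shape, tolerance, and parameter name are illustrative assumptions, not figures from the KALBR-SIM qualification documents.
```python
# Sketch of a validation check: flag time steps where a simulated parameter
# leaves a relative-error band around reference plant data. All values and
# the parameter itself are hypothetical.
import numpy as np

def validate_transient(simulated, reference, rel_tol=0.02):
    """Return the worst relative error and indices outside the band."""
    simulated, reference = np.asarray(simulated), np.asarray(reference)
    rel_err = np.abs(simulated - reference) / np.abs(reference)
    return rel_err.max(), np.flatnonzero(rel_err > rel_tol)

t = np.linspace(0, 60, 601)                    # 60 s transient, 0.1 s step
reference = 820 - 40 * (1 - np.exp(-t / 15))   # mock outlet temperature (K)
simulated = reference + np.random.normal(0, 2, t.size)

worst, failing = validate_transient(simulated, reference)
print(f"max relative error {worst:.3%}, {failing.size} steps outside band")
```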
Procedia PDF Downloads 2661115 Planning the Participation of Units Bound to Demand Response Programs with Regard to Ancillary Services in the PQ Power Market
Authors: Farnoosh Davarian
Abstract:
The present research focuses on organizing the cooperation of units constrained by demand response (DR) programs, considering ancillary services in the P-Q power market. It also provides a comprehensive exploration of the effects of demand reduction and redistribution on system voltage and losses in a smart distribution system across several predefined scenarios: three pre-designed demand response programs with reductions ranging, for example, from 5% to 20%. In the studied network, distributed energy resources (DERs) such as synchronous distributed generators and wind turbines offer their active and reactive power to the proposed market. GAMS, a high-level system for mathematical modeling, is used for optimizing linear, nonlinear, and integer programming problems. A notable feature of GAMS is that the model formulation is separate from the solution method; thus, by changing the solver, the same model can be solved by various methods (linear, nonlinear, integer, etc.). Finally, the combined active and reactive market problem in smart distribution systems, considering renewable distributed sources and demand response programs, is evaluated in GAMS. The distribution company trades active and reactive power in the wholesale market, where the demand is for active power. Through the buy-back/payment program, responsive loads or aggregators can participate in the market. The objective function of the proposed market is to minimize the cost of active and reactive power from DERs and distribution companies, the penalty cost for CO2 emissions, and the cost of the buy-back/payment program. The effectiveness of the proposed method is evaluated in a case study.Keywords: consumer behavior, demand response, pollution cost, combined active and reactive market
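The market objective described above can be prototyped outside GAMS as well. The sketch below casts a toy version as a linear program in Python with PuLP; all cost coefficients, emission rates, capacities, and the demand figure are invented for illustration.
```python
# Toy P-Q market dispatch: minimize active + reactive power cost, a CO2
# penalty, and buy-back payments. All numbers are hypothetical placeholders.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

units = ["DER1", "DER2", "grid"]
c_p = {"DER1": 30.0, "DER2": 28.0, "grid": 45.0}   # $/MWh active power
c_q = {"DER1": 5.0, "DER2": 4.5, "grid": 8.0}      # $/MVArh reactive power
e_co2 = {"DER1": 0.1, "DER2": 0.0, "grid": 0.7}    # tCO2/MWh emission rates
co2_price = 20.0                                    # $/tCO2 penalty
dr_price, demand = 60.0, 10.0                       # $/MWh buy-back, MW load

p = {u: LpVariable(f"p_{u}", 0, 8) for u in units}  # active dispatch (MW)
q = {u: LpVariable(f"q_{u}", 0, 4) for u in units}  # reactive dispatch (MVAr)
dr = LpVariable("dr", 0, 2)                         # demand bought back (MW)

model = LpProblem("pq_market", LpMinimize)
# Objective: energy cost + reactive cost + CO2 penalty + buy-back payments
model += lpSum(c_p[u] * p[u] + c_q[u] * q[u] + co2_price * e_co2[u] * p[u]
               for u in units) + dr_price * dr
model += lpSum(p[u] for u in units) + dr == demand  # active power balance
model += lpSum(q[u] for u in units) == 4.0          # reactive requirement
model.solve()
print({u: p[u].value() for u in units}, "buy-back:", dr.value())
```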
Procedia PDF Downloads 91114 Prioritizing Roads Safety Based on the Quasi-Induced Exposure Method and Utilization of the Analytical Hierarchy Process
Authors: Hamed Nafar, Sajad Rezaei, Hamid Behbahani
Abstract:
Safety analysis of roads through accident rates, one of the most widely used tools, stems from the direct exposure method, which is based on vehicle-kilometers traveled and vehicle travel time. However, this method has fundamental theoretical flaws, the data it requires (such as traffic volume and the distance and duration of trips) are difficult to obtain, and determining exposure for specific time, place, and driver categories is problematic. An algorithm for prioritizing road safety is therefore needed, built on a new exposure method that resolves the problems of the previous approaches; an efficient application may yield more realistic comparisons, and the new method would be applicable to a wider range of time, place, and driver categories. Accordingly, an algorithm was introduced to prioritize the safety of roads using the quasi-induced exposure method and the analytical hierarchy process (AHP). For this research, 11 provinces of Iran were chosen as case study locations. A rural accidents database was created for these provinces, the validity of the quasi-induced exposure method for Iran's accident database was explored, and the involvement ratio for different driver and vehicle characteristics was measured. Results showed that the quasi-induced exposure method was valid in determining the real exposure in the provinces under study. Results also showed a significant difference between the prioritizations based on the new and traditional approaches. This difference stems mostly from how the quasi-induced exposure method determines exposure, from expert opinion, and from the quantity of accident data. Overall, the results show that prioritization based on the new approach is more comprehensive and reliable than prioritization based on the traditional approach, which depends on various parameters including driver-vehicle characteristics.Keywords: road safety, prioritizing, quasi-induced exposure, analytical hierarchy process
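The AHP step in this algorithm reduces to extracting a priority vector from a pairwise comparison matrix and checking its consistency, as in the sketch below; the 3x3 matrix and criterion labels are hypothetical, not the paper's expert judgments.
```python
# AHP priority weights from a pairwise comparison matrix (Saaty 1-9 scale)
# via the principal eigenvector, plus a consistency-ratio check. The matrix
# and the three criteria are illustrative assumptions.
import numpy as np

# Hypothetical criteria: involvement ratio, accident severity, traffic volume
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = eigvals.real.argmax()                     # index of principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # normalized priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index
print("weights:", weights.round(3), "CR =", round(ci / ri, 3))  # CR < 0.1 ok
```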
Procedia PDF Downloads 340