Search results for: significant
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16092

3852 The Curse of Oil: Unpacking the Challenges to Food Security in Nigeria's Niger Delta

Authors: Abosede Omowumi Babatunde

Abstract:

While the Niger Delta region satisfies the global thirst for oil, its inhabitants have not been adequately compensated for the use of their ancestral land. Moreover, the ruthless exploitation and destruction of the natural environment on which the inhabitants of the Niger Delta depend for their livelihood and sustenance by the activities of oil multinationals pose major threats to food security in the region and, by implication, in Nigeria in general, Africa, and the world, given the present global emphasis on food security. This paper examines the effect of oil exploitation on household food security, identifies key gaps in the measures put in place to address the changes to livelihoods and food security, and explores what should be done to improve local people's access to sufficient, safe, and culturally acceptable food in the Niger Delta. Data are derived from interviews with key informants and Focus Group Discussions (FGDs) conducted with respondents in local communities in the Niger Delta states of Delta, Bayelsa, and Rivers, as well as from relevant extant studies. The threat to food security is one important aspect of the human security challenges in the Niger Delta that has received limited scholarly attention. In addition, successive Nigerian governments have not meaningfully addressed the negative impacts of oil-induced environmental degradation on traditional livelihoods, despite the significant linkages between environmental sustainability, livelihood security, and food security. The destructive impact of oil pollution on farmlands, crops, economic trees, creeks, lakes, and fishing equipment is so devastating that the people can no longer engage in productive farming and fishing. Also important is the limited access to modern agricultural methods, as fishing and subsistence farming are done using mostly crude implements and traditional methods. It is imperative and urgent to take stock of the negative implications of the activities of oil multinationals for environmental and livelihood sustainability, and for household food security, in the Niger Delta.

Keywords: challenges, food security, Nigeria's Niger delta, oil

Procedia PDF Downloads 230
3851 Electroencephalogram during Natural Reading: Theta and Alpha Rhythms as Analytical Tools for Assessing a Reader’s Cognitive State

Authors: D. Zhigulskaya, V. Anisimov, A. Pikunov, K. Babanova, S. Zuev, A. Latyshkova, K. Chernozatonskiy, A. Revazov

Abstract:

Electrophysiology of information processing in reading is certainly a popular research topic. Natural reading, however, has been relatively poorly studied, despite having broad potential applications for learning and education. In the current study, we explore the relationship between text categories and the spontaneous electroencephalogram (EEG) while reading. Thirty healthy volunteers (mean age 26.68 ± 1.84) participated in this study. 15 Russian-language texts were used as stimuli. The first text was used for practice and was excluded from the final analysis. The remaining 14 were opposite pairs of texts in one of 7 categories, the most important of which were: interesting/boring, fiction/non-fiction, free reading/reading with an instruction, reading a text/reading a pseudo text (consisting of strings of letters that formed meaningless words). Participants had to read the texts sequentially on an Apple iPad Pro. EEG was recorded from 12 electrodes simultaneously with eye movement data via ARKit technology by Apple. EEG spectral amplitude was analyzed in Fz for the theta band (4-8 Hz) and in C3, C4, P3, and P4 for the alpha band (8-14 Hz) using the Friedman test. We found that reading an interesting text was accompanied by an increase in theta spectral amplitude in Fz compared to reading a boring text (3.87 µV ± 0.12 and 3.67 µV ± 0.11, respectively). When instructions are given for reading, we see less alpha activity than during free reading of the same text (3.34 µV ± 0.20 and 3.73 µV ± 0.28, respectively, for C4 as the most representative channel). The non-fiction text elicited less activity in the alpha band (C4: 3.60 µV ± 0.25) than the fiction text (C4: 3.66 µV ± 0.26). A significant difference in alpha spectral amplitude was also observed between the regular text (C4: 3.64 µV ± 0.29) and the pseudo text (C4: 3.38 µV ± 0.22). These results suggest that some brain activity we see on EEG is sensitive to particular features of the text. We propose that changes in the theta and alpha bands during reading may serve as electrophysiological tools for assessing the reader's cognitive state as well as his or her attitude to the text and the perceived information. These physiological markers have prospective practical value for developing technological solutions and biofeedback systems for reading in particular and for education in general.
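A minimal sketch (not the authors' pipeline) of how band-limited spectral amplitude and a repeated-measures Friedman test across reading conditions might be computed; the sampling rate, condition labels, and synthetic data are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import friedmanchisquare

fs = 250  # assumed sampling rate, Hz

def band_amplitude(eeg, fs, lo, hi):
    """Mean spectral amplitude of one channel within [lo, hi] Hz."""
    f, pxx = welch(eeg, fs=fs, nperseg=2 * fs)   # power spectral density
    band = (f >= lo) & (f <= hi)
    return np.sqrt(pxx[band]).mean()             # amplitude ~ sqrt(power)

# toy data: 30 subjects x 4 reading conditions, 60 s of Fz signal each
rng = np.random.default_rng(0)
n_subj, conditions = 30, ["interesting", "boring", "free", "instructed"]
theta_amp = {c: [band_amplitude(rng.standard_normal(60 * fs), fs, 4, 8)
                 for _ in range(n_subj)] for c in conditions}

# Friedman test: repeated-measures comparison of theta amplitude across conditions
stat, p = friedmanchisquare(*[theta_amp[c] for c in conditions])
print(f"Friedman chi2 = {stat:.2f}, p = {p:.3f}")
```

In practice, the per-subject amplitudes would come from the recorded Fz and C3/C4/P3/P4 channels rather than from synthetic noise.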

Keywords: EEG, natural reading, reader's cognitive state, theta-rhythm, alpha-rhythm

Procedia PDF Downloads 62
3850 The Beacon of Collective Hope: Mixed Method Study on the Participation of Indian Youth with Regard to Mass Demonstrations Fueled by Social Activism Media

Authors: Akanksha Lohmore, Devanshu Arya, Preeti Kapur

Abstract:

Rarely does the human mind look at the positive fallout of highly negative events. Positive psychology attempts to emphasize strengths and positives for human well-being. The present study examines the underpinning socio-cognitive factors of the protest movements regarding the gang rape case of December 16th, 2012, through the lens of positive psychology. A gamut of negative emotions came to the fore globally: anger, shame, hatred, violence, and calls for the death penalty for the perpetrators, among other equally strong reactions. In relation to this incident, a number of questions can be raised. Can such a heinous crime have some positive inputs for contemporary society? What is it that has held people to protests for so long, even when they see only faded lines of success in view? This paper explains the constant feeding of protests and the continuation of movements through Snyder's robust model of collective hope, a phenomenon unexplored by social psychologists. In this paper, a mixed-method approach was undertaken. Results confirmed the interaction of various socio-psychological factors that mirrored Snyder's model of collective hope. The major themes that emerged were: Sense of Agency, Sense of Worthiness, Social Sharing and Common Grievances, and Hope of Collective Efficacy. Statistical analysis (correlation and regression) showed a significant relationship between media usage and the occurrence of these themes among participants. These results carry implications for media-communication processes and for educational theories on the development of citizenship behavior. They can also further theory development in the social psychology of protest, as indicated by theorists working in that area.

Keywords: agency, collective, hope, positive psychology, protest, social media

Procedia PDF Downloads 333
3849 Effect of Mindfulness-Based Self-Care Training on Self-Esteem and Body Image Concern in Candidates for Orthognathic Surgery

Authors: Hamide Azimi Lolaty, Fateme Alsadat Ghanipoor, Azar Ramzani, Reza Ali Mohammadpoor, Alireza Babaei

Abstract:

Background and Objective: Despite the merits of orthognathic surgery, self-care training for such patients seems logical. The current research examined the effect of mindfulness-based self-care training on Self-Esteem (SE) and Body Image Concern (BIC) in candidates for orthognathic surgery. Material and Methods: The study used a quasi-experimental method with a pre- and post-test design in control and intervention groups. Eligible patients presenting to the Shahid Beheshti Orthognathic Surgery Clinic in Babol were conveniently assigned to two 25-person groups. Self-Esteem and Body Image Concern were measured before and after eight 90-minute training sessions and again in a follow-up three months after the intervention, using the Coopersmith Self-Esteem Inventory (CSEI) and the Body Image Concern Inventory (BICI). The data were analyzed using ANOVA and the independent t-test in SPSS-26 at the 0.05 significance level. Results: As a result of the intervention, the intervention group's SE score changed significantly on average from 25.4±7.31 before the intervention to 31.16±7.05 after the intervention and to 40.45±3.51 at follow-up (P=0.01); the intervention group's BIC score changed on average from 60.28±16.47 before the intervention to 47.15±80.47 after the intervention and to 32.20±10.73 at follow-up. This difference was meaningful (P=0.001). Due to the time-by-intervention interaction, the control group underwent this significant reduction with a delay. In the control group, SE scores of 32±6.84 and BIC scores of 43.32±10.64 did not show any meaningful statistical difference (P<0.05). Conclusion: Mindfulness-based self-care training affects the SE and BIC of patients undergoing orthognathic surgery. It is therefore recommended to provide mindfulness-based self-care training for orthognathic surgery candidates.

Keywords: self-care, mindfulness, self-esteem, body image concern, orthognathic surgery

Procedia PDF Downloads 94
3848 Microglia Activity and Induction of Mechanical Allodynia after Mincle Receptor Ligand Injection in Rat Spinal Cord

Authors: Jihoon Yang, Jeong II Choi

Abstract:

Mincle is expressed in macrophages and is a member of the immunoreceptors induced after exposure to various stimuli and stresses. Mincle receptor activation promotes the production of inflammatory cytokines and chemokines by increasing their transcription. Cytokines, which play an important role in the initiation and maintenance of inflammatory pain diseases, have a significant effect on sensory neurons, in addition to their enhancing and inhibitory effects on immune and inflammatory cells as mediators of cell interaction. Glial cells in the central nervous system play a critical role in the development and maintenance of chronic pain states. Microglia are tissue-resident macrophages in the central nervous system and belong to a group of mononuclear phagocytes. In the central nervous system, the Mincle receptor is present in neurons and glial cells of the brain. This study was performed to identify the Mincle receptor in the spinal cord and to investigate the effect of Mincle receptor activation on nociception and on changes in microglia. Materials and Methods: The C-type lectin Mincle was identified in the spinal cord of male Sprague-Dawley rats. A Mincle receptor ligand (TDB) was then administered via an intrathecal catheter. Mechanical allodynia was measured using the von Frey test to evaluate the effect of intrathecal injection of TDB. Results: The present investigation shows that intrathecal administration of TDB in the rat produces reliable and quantifiable mechanical hyperalgesia. In addition, the mechanical hyperalgesia after TDB injection developed gradually over time and persisted for up to 10 days. The Mincle receptor was identified in the spinal cord, expressed mainly in neuronal cells but not in microglia or astrocytes. These results suggest that activation of the Mincle receptor pathway in neurons plays an important role in inducing activation of microglia and mechanical allodynia.

Keywords: mincle, spinal cord, pain, microglia

Procedia PDF Downloads 139
3847 Phase Composition Analysis of Ternary Alloy Materials for Gas Turbine Applications

Authors: Mayandi Ramanathan

Abstract:

Gas turbine blades see the most aggressive thermal stress conditions within the engine, due to high turbine entry temperatures in the range of 1500 to 1600°C. The blades rotate at very high rates and remove a significant amount of thermal power from the gas stream. At high temperatures, the major component failure mechanism is creep. During its service under high thermal loads, a blade will deform, lengthen, and rupture. High strength and stiffness in the longitudinal direction up to elevated service temperatures are certainly the most needed properties of turbine blades and gas turbine components. The proposed advanced Ti alloy material needs a process that provides a strategic orientation of metallic ordering, uniformity in composition, and high metallic strength. The chemical composition of the proposed Ti alloy material (25% Ta/(Al+Ta) ratio), unlike Ti-47Al-2Cr-2Nb, has less excess Al that could limit the service life of turbine blades. Properties and performance of Ti-47Al-2Cr-2Nb and Ti-6Al-4V materials will be compared with those of the proposed Ti alloy material to generalize the performance metrics of various gas turbine components. This paper summarizes the effects of additive manufacturing and heat treatment process conditions on changes in the phase composition, grain structure, lattice structure, tensile strength, creep strain rate, thermal expansion coefficient, and fracture toughness of the material at different temperatures. Based on these results, additive manufacturing and heat treatment process conditions will be optimized to fabricate a turbine blade with a Ti-43Al matrix alloyed with an optimized amount of refractory Ta metal. Improvement in the service temperature of the turbine blades and the dependence of corrosion resistance on the coercivity of the alloy material will be reported. A correlation between phase composition and creep strain rate will also be discussed.

Keywords: high temperature materials, aerospace, specific strength, creep strain, phase composition

Procedia PDF Downloads 86
3846 Evaluating Robustness of Conceptual Rainfall-runoff Models under Climate Variability in Northern Tunisia

Authors: H. Dakhlaoui, D. Ruelland, Y. Tramblay, Z. Bargaoui

Abstract:

To evaluate the impact of climate change on water resources at the catchment scale, not only future climate projections are necessary but also robust rainfall-runoff models that remain fairly reliable under changing climate conditions. This study aims at assessing the robustness of three conceptual rainfall-runoff models (GR4J, HBV and IHACRES) on five basins in Northern Tunisia under long-term climate variability. Their robustness was evaluated according to a differential split-sample test based on a climate classification of the observation period considering precipitation and temperature conditions simultaneously. The studied catchments are situated in a region where climate change is likely to have significant impacts on runoff, and they already suffer from scarcity of water resources. They cover the main hydrographical basins of Northern Tunisia (High Medjerda, Zouaraâ, Ichkeul and Cap Bon), which produce the majority of surface water resources in Tunisia. The streamflow regime of the basins can be considered natural, since these basins are located upstream from storage dams and in areas where withdrawals are negligible. A 30-year common period (1970‒2000) was considered to capture a large spread of hydro-climatic conditions. The calibration was based on the Kling-Gupta Efficiency (KGE) criterion, while the evaluation of model transferability was performed according to the Nash-Sutcliffe efficiency criterion and the volume error. The three hydrological models were shown to have similar behaviour under climate variability. The models proved better able to simulate the runoff pattern when transferred toward wetter periods than when transferred to drier periods. The limits of transferability are reached beyond a -20% change in precipitation and a +1.5 °C change in temperature in comparison with the calibration period. The deterioration of model robustness could in part be explained by the climate dependency of some parameters.
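For reference, the calibration and evaluation criteria mentioned above can be computed in a few lines; the sketch below uses their standard textbook definitions and toy runoff series, not the study's data.

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency (standard 2009 formulation)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]          # linear correlation
    alpha = sim.std() / obs.std()            # variability ratio
    beta = sim.mean() / obs.mean()           # bias ratio
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def volume_error(sim, obs):
    """Relative volume (bias) error, in %."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 100 * (sim.sum() - obs.sum()) / obs.sum()

# toy daily runoff series (mm/day)
obs = np.array([1.2, 3.4, 2.8, 0.9, 0.5, 4.1, 2.2])
sim = np.array([1.0, 3.0, 3.1, 1.1, 0.4, 3.8, 2.5])
print(f"KGE = {kge(sim, obs):.2f}, NSE = {nse(sim, obs):.2f}, VE = {volume_error(sim, obs):.1f}%")
```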

Keywords: rainfall-runoff modelling, hydro-climate variability, model robustness, uncertainty, Tunisia

Procedia PDF Downloads 275
3845 Preparedness for Microbial Forensics Evidence Collection on Best Practice

Authors: Victor Ananth Paramananth, Rashid Muniginin, Mahaya Abd Rahman, Siti Afifah Ismail

Abstract:

Safety issues, scene protection, and appropriate evidence collection must be handled at any biocrime scene. In any bio-incident or biocrime event, there will be one scene or multiple scenes to be cordoned off for investigation. Evidence collection is critical in determining the type of microbe or toxin, its lethality, and its source. As a consequence, a proper sampling method is required from the start of the investigation. The most significant challenges for the crime scene officer are deciding where to obtain samples, the best sampling method, and the sample sizes needed. Since evidence at a crime scene could be in liquid, viscous, or powder form, crime scene officers have difficulty determining which tools to use for sampling. To maximize sample collection, appropriate tools for each sampling method are necessary. This study aims to assist the crime scene officer in collecting liquid, viscous, and powder biological samples in sufficient quantity while preserving sample quality. In this research, observational tests on the collection of liquid, viscous, and powder samples, assessing adequate quantity and sample quality, were performed using UV light. The density of the light emission varies with the collection method and sample type. The best tools for collecting sufficient amounts of liquid, viscous, and powdered samples can be identified by observing UV light. Instead of active microorganisms, an invisible powder was used to assess the sufficiency of sample collection during a crime scene investigation using various collection tools. The liquid, powdered, and viscous samples collected using different tools were analyzed using attenuated total reflection Fourier-transform infrared spectroscopy (FTIR-ATR). FTIR spectroscopy is commonly used for rapid discrimination, classification, and identification of intact microbial cells. The liquid, viscous, and powdered samples collected using various tools were successfully observed using UV light. Furthermore, FTIR-ATR analysis showed that the collected samples were sufficient in quantity while preserving their quality.

Keywords: biological sample, crime scene, collection tool, UV light, forensic

Procedia PDF Downloads 172
3844 Study on the Wave Dissipation Performance of Double-Cylinder and Double-Plate Floating Breakwater

Authors: Liu Bijin

Abstract:

Floating breakwaters have several advantages, including being environmentally friendly, easy to construct, and cost-effective regardless of water depth. They have a broad range of applications in coastal engineering. However, they face significant challenges due to unstable wave dissipation performance, structural vulnerability, and demanding mooring system requirements. This paper investigates the wave dissipation performance of a floating breakwater structure consisting of double cylinders, double vertical plates, and horizontal connecting plates. The investigation is carried out using physical model tests and numerical simulation based on STAR-CCM+. This paper discusses the impact of wave parameters, relative vertical plate heights, and relative horizontal connecting plate widths on the wave dissipation performance of the double-cylinder, double-plate floating breakwater (DCDPFB). The study also analyses the changes in the local vorticity and velocity fields around the DCDPFB to determine the optimal structural dimensions. The study found that the relative width of the horizontal connecting plate, the relative height of the vertical plate, and the size of the semi-cylinder are the key factors affecting the wave dissipation performance of the DCDPFB. The transmittance coefficient is minimally affected by the wave height and the depth of water entry. The local vortex and velocity field formed around the DCDPFB are important factors for dissipating wave energy. The test section of the DCDPFB, constructed according to the optimal structural dimensions, exhibited good wave dissipation performance in offshore prototype tests.
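As a rough illustration of how the transmittance (transmission) coefficient could be estimated from wave-gauge records, the sketch below uses the common spectral approximation Hm0 ≈ 4σ of the surface elevation; the sinusoidal records and sampling details are assumptions, not the paper's measurements.

```python
import numpy as np

def significant_wave_height(eta):
    """Spectral estimate Hm0 ~ 4 * std of the surface-elevation time series."""
    eta = np.asarray(eta, float)
    return 4.0 * np.std(eta - eta.mean())

def transmission_coefficient(eta_incident, eta_transmitted):
    """Kt = Ht / Hi from wave-gauge records in front of and behind the breakwater."""
    return significant_wave_height(eta_transmitted) / significant_wave_height(eta_incident)

# toy surface-elevation records (m), e.g. sampled at 50 Hz in a flume
t = np.linspace(0, 60, 3000)
eta_in = 0.05 * np.sin(2 * np.pi * t / 1.5)          # incident wave, H ~ 0.1 m
eta_tr = 0.02 * np.sin(2 * np.pi * t / 1.5 + 0.4)    # attenuated transmitted wave
print(f"Kt = {transmission_coefficient(eta_in, eta_tr):.2f}")
```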

Keywords: floating breakwater, wave dissipation performance, transmittance coefficient, model test

Procedia PDF Downloads 24
3843 Effect of Surfactant Level of Microemulsions and Nanoemulsions on Cell Viability

Authors: Sonal Gupta, Rakhi Bansal, Javed Ali, Reema Gabrani, Shweta Dang

Abstract:

Nanoemulsions (NEs) and microemulsions (MEs) have been attractive tools for the encapsulation of both hydrophilic and lipophilic actives. Both systems are composed of an oil phase, surfactant, co-surfactant, and aqueous phase. Depending on the application and intended use, both oil-in-water and water-in-oil emulsions can be designed. NEs are fabricated using high-energy methods and employ a lower percentage of surfactant than MEs, which are self-assembled drug delivery systems. Owing to the nanometric size of the droplets, these systems have been widely used to enhance the solubility and bioavailability of natural as well as synthetic molecules. The aim of the present study is to assess the effect of surfactant percentage on the viability of Vero cells (African green monkey kidney epithelial cells) via the MTT assay. A green tea catechin (Polyphenon 60)-loaded ME prepared by low-energy vortexing and an NE prepared by high-energy ultrasonication used the same excipients (Labrasol as oil, Cremophor EL as surfactant, and glycerol as co-surfactant); however, the percentages of oil and surfactant needed to prepare the ME were higher than for the NE. These formulations, along with their excipients (oilME=13.3%, SmixME=26.67%; oilNE=10%, SmixNE=13.52%), were added to Vero cells for 24 hrs. The tetrazolium dye 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) is reduced by live cells, and this reaction is used as the end point to evaluate the cytotoxicity of a test formulation. Results of the MTT assay indicated that oil at different percentages exhibited almost equal cell viability (oilME ≅ oilNE), while the surfactant mixture showed a significant difference in cell viability values (SmixME < SmixNE). Polyphenon 60-loaded ME and its placebo ME showed higher toxicity than Polyphenon 60-loaded NE and its placebo NE, which can be attributed to the higher concentration of surfactants present in MEs. Another probable reason for the high cell viability with the Polyphenon 60-loaded NE might be the effective release of Polyphenon 60 from the NE formulation, which helps sustain the Vero cells.
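A small sketch of the usual MTT viability calculation from absorbance readings; the blank, control, and well values below are hypothetical and only illustrate the arithmetic behind the reported % cell viability.

```python
import numpy as np

def percent_viability(a_treated, a_control, a_blank):
    """Cell viability (%) from MTT absorbance readings (e.g. at 570 nm)."""
    a_treated = np.asarray(a_treated, float)
    return 100 * (a_treated - a_blank) / (a_control - a_blank)

# hypothetical triplicate absorbance readings
a_blank = 0.06                       # medium + MTT, no cells
a_control = 1.12                     # untreated Vero cells (mean of triplicates)
me_wells = [0.58, 0.61, 0.55]        # ME-treated wells
ne_wells = [0.93, 0.96, 0.90]        # NE-treated wells
print(f"ME viability: {percent_viability(me_wells, a_control, a_blank).mean():.1f}%")
print(f"NE viability: {percent_viability(ne_wells, a_control, a_blank).mean():.1f}%")
```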

Keywords: cell viability, microemulsion, MTT, nanoemulsion, surfactants, ultrasonication

Procedia PDF Downloads 407
3842 Deep Brain Stimulation and Motor Cortex Stimulation for Post-Stroke Pain: A Systematic Review and Meta-Analysis

Authors: Siddarth Kannan

Abstract:

Objectives: Deep Brain Stimulation (DBS) and Motor Cortex Stimulation (MCS) are innovative interventions for treating various neuropathic pain disorders such as post-stroke pain. While each treatment has a varying degree of success in managing pain, a comparative analysis has not yet been performed, and the success rates of these techniques using validated, objective pain scores have not been synthesised. The aim of this study was to compare the pain relief offered by MCS and DBS in patients with post-stroke pain and to assess whether either of these procedures offered better results. Methods: A systematic review and meta-analysis were conducted in accordance with PRISMA guidelines (PROSPERO ID CRD42021277542). Three databases were searched, and articles published from 2000 to June 2023 were included (last search date 25 June 2023). Meta-analysis was performed using random-effects models. We evaluated the performance of DBS or MCS by assessing studies that reported pain relief using the Visual Analogue Scale (VAS). Descriptive statistics were analysed using SPSS (Version 27; IBM, Armonk, NY, USA). R (RStudio, version 4.0.1) was used to perform the meta-analysis. Results: Of the 478 articles identified, 27 were included in the analysis (232 patients: 117 DBS and 115 MCS). The pooled proportion of patients who improved after DBS was 0.68 (95% CI, 0.57-0.77, I2=36%). The pooled proportion of patients who improved after MCS was 0.72 (95% CI, 0.62-0.80, I2=59%). A further sensitivity analysis including only studies with a minimum of 5 patients was done to assess any impact on the overall results. Nine studies each for DBS and MCS met these criteria. There was no significant difference in the results. Conclusions: The use of surgical interventions such as DBS and MCS is an emerging field for the treatment of post-stroke pain, with limited studies exploring and comparing these two techniques. While our study shows that MCS might be a slightly better treatment option, further research would need to be done to determine the appropriate surgical intervention for post-stroke pain.
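The pooled proportions with I² reported above are consistent with a random-effects pooling of per-study responder counts; a minimal sketch of one common way to do this (logit transform with DerSimonian-Laird weighting) is given below with hypothetical study counts, and is not the authors' R workflow.

```python
import numpy as np

def pooled_proportion_dl(events, totals):
    """Random-effects (DerSimonian-Laird) pooling of proportions on the logit scale."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    adj = (events == 0) | (events == totals)    # continuity correction if needed
    events = events + 0.5 * adj
    totals = totals + 1.0 * adj
    p = events / totals
    y = np.log(p / (1 - p))                     # logit-transformed proportions
    v = 1 / events + 1 / (totals - events)      # approximate variances of the logits
    w = 1 / v                                   # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fe) ** 2)             # Cochran's Q
    df = len(y) - 1
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)               # between-study variance
    w_re = 1 / (v + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    lo, hi = y_re - 1.96 * se, y_re + 1.96 * se
    inv_logit = lambda x: 1 / (1 + np.exp(-x))
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    return inv_logit(y_re), (inv_logit(lo), inv_logit(hi)), i2

# hypothetical per-study counts of responders / patients
prop, ci, i2 = pooled_proportion_dl([5, 8, 3, 12], [8, 11, 5, 15])
print(f"pooled proportion {prop:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, I2 {i2:.0f}%")
```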

Keywords: post-stroke pain, deep brain stimulation, motor cortex stimulation, pain relief

Procedia PDF Downloads 107
3841 Role of Human Epididymis Protein 4 as a Biomarker in the Diagnosis of Ovarian Cancer

Authors: Amar Ranjan, Julieana Durai, Pranay Tanwar

Abstract:

Background and Introduction: Ovarian cancer is one of the most common malignant tumors in women. 70% of cases of ovarian cancer are diagnosed at an advanced stage. The five-year survival rate associated with ovarian cancer is less than 30%. Early diagnosis of ovarian cancer therefore becomes a key factor in improving the survival rate of patients. Presently, CA125 (carbohydrate antigen 125) is used for the diagnosis and therapeutic monitoring of ovarian cancer, but its sensitivity and specificity are not ideal. The introduction of HE4 (human epididymis protein 4) has attracted much attention. HE4 has a sensitivity of 72.9% and a specificity of 95% for differentiating between benign and malignant adnexal masses, which is better than CA125 detection. Methods: Serum HE4 and CA125 were estimated using the chemiluminescence method. Our cases comprised 40 epithelial ovarian cancers, 9 benign ovarian tumors, 29 benign gynaecological diseases, and 13 healthy individuals. The healthy group included women attending family planning and menopause-related medical consultations who were negative for an ovarian mass. Optimal cut-off values for HE4 and CA125 were 55.89 pmol/L and 40.25 U/L, respectively (determined by statistical analysis). Results: The level of HE4 was raised in all ovarian cancer patients (n=40), whereas CA125 levels were normal in 6/40 ovarian cancer patients, all cases of OC confirmed by histopathology. There was a significant decrease in the level of HE4 in comparison to CA125 in benign ovarian tumor cases. Both HE4 and CA125 levels were raised in the non-ovarian cancer group, which included cancers of the endometrium and cervix. In the healthy group, HE4 was normal in all participants except in one case of a rudimentary horn, where the raised HE4 level was attributed to incomplete development of the uterus, whereas CA125 was raised in 3 cases. Conclusions: The findings showed that the serum level of HE4 is an important indicator in the diagnosis of ovarian cancer, and it also distinguishes between benign and malignant pelvic masses. However, a combined HE4 and CA125 panel will be extremely valuable in improving the diagnostic efficiency of ovarian cancer. These findings need to be validated in a larger cohort of patients.
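The abstract states the optimal cut-offs were "determined by statistical analysis" without naming the method; one common approach is ROC analysis with Youden's J, sketched below on hypothetical HE4 values purely for illustration.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# hypothetical data: 1 = malignant, 0 = benign/healthy; HE4 in pmol/L
y = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])
he4 = np.array([120.3, 88.1, 64.2, 41.0, 52.7, 38.5, 150.9, 47.3, 70.4, 55.1])

fpr, tpr, thresholds = roc_curve(y, he4)
j = tpr - fpr                      # Youden's J statistic at each threshold
best = np.argmax(j)
print(f"AUC = {roc_auc_score(y, he4):.2f}")
print(f"optimal cutoff = {thresholds[best]:.2f} pmol/L "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```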

Keywords: human epididymis protein 4, ovarian cancer, diagnosis, benign lesions

Procedia PDF Downloads 109
3840 Preventing Factors for Innovation: The Case of Swedish Construction Small and Medium-Sized Local Companies towards a One-Stop-Shop Business Concept

Authors: Georgios Pardalis, Krushna Mahapatra, Brijesh Mainali

Abstract:

Compared to other sectors, the residential and service sector in Sweden is responsible for almost 40% of the national final energy use and faces great challenges in achieving a reduction of energy intensity. The one- and two-family (henceforth 'detached') houses, constituting 60% of the residential floor area and using 32 TWh for space heating and hot water purposes, offer significant opportunities for improved energy efficiency. More than 80% of those houses are more than 35 years old, and a large share of them need major renovations. However, the rate of energy renovations for such houses is significantly low. The renovation market is dominated by small and medium-sized local companies (SMEs), who mostly offer individual solutions. A one-stop-shop business framework, where a single actor collaborates with other actors and coordinates them to offer a full package for holistic renovations, may speed up the rate of renovation. Such models are emerging in some European countries. This paper aims to understand the willingness of the SMEs to adopt a one-stop-shop business framework. Interviews were conducted with 13 SMEs in Kronoberg county in Sweden, a geographic region known for its initiatives towards sustainability and energy efficiency. The examined firms seem reluctant to adopt the one-stop-shop framework for now, due to the risks they perceive in such a business move and due to their characteristics, although they agree that such a move would advance their position in the market and their business volume. Using threat-rigidity and prospect theory, we illustrate how this type of company can move from reluctance toward adoption of the one-stop-shop framework. Additionally, with the use of behavioral theory, we gain deeper knowledge of the exact reasons preventing these firms from adopting the one-stop-shop framework.

Keywords: construction SMEs, innovation adoption, one-stop-shop, perceived risks

Procedia PDF Downloads 104
3839 Dynamic Behavior of the Nanostructure of Load-Bearing Biological Materials

Authors: Mahan Qwamizadeh, Kun Zhou, Zuoqi Zhang, Yong Wei Zhang

Abstract:

Typical load-bearing biological materials like bone, mineralized tendon, and shell are biocomposites made from both organic (collagen) and inorganic (biomineral) materials. This amazing class of materials, with intrinsic, internally designed hierarchical structures, shows superior mechanical properties relative to the weak components from which they are formed. Extensive investigations concentrating on static loading conditions have been carried out to study the failure of biological materials. However, most of the damage and failure mechanisms in load-bearing biological materials occur when their structures are exposed to dynamic loading conditions. The main question to be answered here is: what is the relation between the layout and architecture of load-bearing biological materials and their dynamic behavior? In this work, a staggered model has been developed based on the structure of natural materials at the nanoscale, and Finite Element Analysis (FEA) has been used to study the dynamic behavior of the structure of load-bearing biological materials, to answer why the staggered arrangement has been selected by nature for the nanocomposite structure of most biological materials. The results showed that staggered structures attenuate the stress wave more efficiently than layered structures. Furthermore, such a staggered architecture effectively utilizes the capacity of the biostructure to resist both normal and shear loads. In this work, the geometrical parameters of the model, such as the thickness and aspect ratio of the mineral inclusions, were selected from the typical range of experimentally observed feature sizes and layout dimensions of biological materials such as bone and mineralized tendon. Furthermore, the numerical results were validated against existing theoretical solutions. The findings of the present work emphasize the significant effects of dynamic behavior on the natural evolution of load-bearing biological materials and can help scientists design bioinspired materials in the laboratory.

Keywords: load-bearing biological materials, nanostructure, staggered structure, stress wave decay

Procedia PDF Downloads 428
3838 Transfer Function Model-Based Predictive Control for Nuclear Core Power Control in PUSPATI TRIGA Reactor

Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha

Abstract:

The 1 MWth PUSPATI TRIGA Reactor (RTP) at the Malaysian Nuclear Agency has been operating for more than 35 years. The existing core power control uses a conventional controller known as the Feedback Control Algorithm (FCA). It is technically challenging to keep the core power output stable and operating within acceptable error bands at all times, as required by the safety demands of the RTP. Currently, the power tracking performance of the system could be considered unsatisfactory, and there is still significant room for improvement. Hence, a new core power control design is very important for improving the current performance in tracking and regulating reactor power by controlling the movement of the control rods, suited to the demands of highly sensitive nuclear reactor power control. In this paper, the proposed Model Predictive Control (MPC) law was applied to control the core power. The model for core power control was based on mathematical models of the reactor core, MPC, and a control rod selection algorithm. The mathematical models of the reactor core were based on the point kinetics model, thermal-hydraulic models, and reactivity models. The proposed MPC was presented in a transfer function model of the reactor core according to perturbation theory. The transfer function model-based predictive control (TFMPC) was developed to design the core power control with predictions based on a T-filter, towards real-time implementation of MPC on hardware. This paper introduces the sensitivity functions for the TFMPC feedback loop to reduce the impact on the input actuation signal and demonstrates the behaviour of TFMPC in terms of disturbance and noise rejection. Comparisons of both tracking and regulating performance between the conventional controller and TFMPC were made using MATLAB and analysed. In conclusion, the proposed TFMPC has satisfactory performance in tracking and regulating core power for controlling a nuclear reactor with high reliability and safety.
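A minimal sketch of receding-horizon predictive control on a discrete transfer-function model, to illustrate the idea behind TFMPC; the first-order plant, horizon, and weights are hypothetical stand-ins and do not represent the RTP point-kinetics model or the T-filter formulation used by the authors.

```python
import numpy as np

# Hypothetical plant G(z) = b / (z - a) standing in for core power dynamics
a, b = 0.9, 0.1                    # assumed plant parameters
N, lam = 20, 0.05                  # prediction horizon and control weight
steps = 200
setpoint = 1.0                     # normalized power demand

# State-space realization: x[k+1] = a*x[k] + b*u[k], y = x
# Build prediction matrices so that y_future = F*x0 + Phi*U over the horizon
F = np.array([[a ** (i + 1)] for i in range(N)])
Phi = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        Phi[i, j] = a ** (i - j) * b

# Unconstrained MPC gain: minimize ||r - y||^2 + lam*||U||^2
K = np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T)

x = 0.0
for k in range(steps):
    r = setpoint * np.ones((N, 1))
    U = K @ (r - F * x)            # optimal input sequence over the horizon
    u = float(U[0, 0])             # apply only the first move (receding horizon)
    x = a * x + b * u              # plant update
print(f"power after {steps} steps: {x:.3f} (setpoint {setpoint})")
```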

Keywords: core power control, model predictive control, PUSPATI TRIGA reactor, TFMPC

Procedia PDF Downloads 218
3837 Coronary Artery Calcium Score and Statin Treatment Effect on Myocardial Infarction and Major Adverse Cardiovascular Event of Atherosclerotic Cardiovascular Disease: A Systematic Review and Meta-Analysis

Authors: Yusra Pintaningrum, Ilma Fahira Basyir, Sony Hilal Wicaksono, Vito A. Damay

Abstract:

Background: Coronary artery calcium (CAC) scores play an important role in improving prognostic accuracy, can be selectively used to guide the allocation of statin therapy for atherosclerotic cardiovascular disease outcomes, and are potentially associated with the occurrence of MACE (Major Adverse Cardiovascular Events) and MI (Myocardial Infarction). Objective: This systematic review and meta-analysis aims to analyze the findings of studies on the CAC score and the effect of statin treatment on MI and MACE risk. Methods: A search for published scientific articles using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) method was conducted on the PubMed, Cochrane Library, and Medline databases for articles published in the last 20 years on "coronary artery calcium" AND "statin" AND "cardiovascular disease". A further systematic review and meta-analysis using RevMan version 5.4 were performed based on the included published scientific articles. Results: Based on 11 included studies with a total of 1055 participants, we performed a meta-analysis and found that individuals with a CAC score > 0 had an 8.48-fold increase in MI risk (RR = 9.48; 95% CI: 6.22 – 14.45) and a 2.48-fold increase in MACE risk (RR = 3.48; 95% CI: 2.98 – 4.05) compared with individuals with a CAC score of 0. Statin treatment compared with non-statin treatment showed a statistically insignificant overall effect on the risk of MI (P = 0.81) and MACE (P = 0.89) in individuals with elevated CAC scores of 1 – 100 (P = 0.65) and > 100 (P = 0.11). Conclusions: This study found that individuals with elevated CAC scores have a higher risk of MI and MACE than individuals with non-elevated CAC scores. There is no significant effect of statin compared with non-statin treatment in reducing MI and MACE in individuals with elevated CAC scores of 1 – 100 or > 100.

Keywords: coronary artery calcium, statin, cardiovascular disease, myocardial infarction, MACE

Procedia PDF Downloads 75
3836 International Peace and Security: A Study in the Light of the Provisions of the Charter of the United Nations

Authors: Djehich Mohamed Yousri

Abstract:

As a result of the destruction and devastation left by the two world wars, the international community worked to establish a global organization on a contractual basis, in which the Security Council was entrusted with the task of maintaining international peace and security. To achieve this, the Charter of the United Nations gave the Council wide authority to characterize anything that would threaten international peace and security. Yet an examination of the Charter of the United Nations finds not even a basic definition of the concept of international peace and security, although these two principles are among the basic principles that the Charter stipulates must be achieved; the same applies to the opposite cases, by which we mean a threat to the peace, a breach of the peace, or an act of aggression. These terms are not explained in detail in the Charter, leaving ample room for the Security Council to assess each of these cases separately. This is perhaps because the framers of the Charter intended to set a flexible standard that, on the one hand, does not restrict the authority of the Security Council to carry out this characterization and, on the other hand, allows the Council to keep pace with new developments and threats to which international peace and security are exposed. There is no doubt that the concept of international peace and security has undergone significant changes during the 70-year period that followed the establishment of the international organization. Whereas threats to peace and security focused, in the first stage, on cases of war or the threat of war, what distinguishes the new world order is the emergence of other challenges and threats that find their source in economic, social, humanitarian, and environmental instability; this is perhaps what the member states of the Security Council indicated during the preparation of the Agenda for Peace. The expansion of the concept of peace and security has paved the way for some permanent members to use the Security Council to legitimize and implement their decisions, taking the Council as a tool for implementing their foreign policy and punishing states instead of maintaining international peace and security. This has prompted some states and jurists to call for oversight of the decisions of the Security Council, on the one hand, and for amending the UN Charter to make it more expressive of the aspirations of the international community, on the other, while noting the obstacles that prevent such an amendment.

Keywords: peace, security, united nations charter, security council, united nations organization

Procedia PDF Downloads 51
3835 Representational Competence Profile of Secondary Students in Understanding Selected Chemical Principles

Authors: Ryan Villafuerte Lansangan

Abstract:

Assessing students' understanding at the microscopic level of an abstract subject like chemistry poses a challenge to teachers. The literature reveals that the use of representations serves as an essential avenue for measuring the extent of understanding in the discipline, as an alternative to traditional assessment methods. This undertaking explored the representational competence profile of high school students from the University of Santo Tomas High School in understanding selected chemical principles and correlated this with their academic profile in chemistry, based on their performance in the academic achievement examination in chemistry administered by the Center for Education Measurement (CEM). The common misconceptions of the students about the selected chemistry principles, based on their representations, were taken into consideration, as well as the students' views regarding their understanding of the role of chemical representations in their learning. A students' level-of-representation task instrument covering the main lessons in chemistry, with a corresponding scoring guide, was prepared and utilized in the study. The study revealed that most of the students under study were rated as Level 2 (symbolic level) in terms of their representational competence in understanding the selected chemical principles through the use of chemical representations. Alternative misrepresentations were most observed in the students' representations of chemical bonding concepts, while the concept of the chemical equation appeared to be the most comprehensible topic in chemistry for the students. The data imply that teachers' representations play an important role in helping students understand concepts at the microscopic level. The results also showed that the students' academic achievement in chemistry, based on the standardized CEM examination, has a significant association with their representational competence. In addition, the students' responses to the views-on-chemical-representations questionnaire showed a good understanding of what a chemical representation or mental model is, as they rejected the notion that these tools should be exact replicas. Moreover, the students confirmed a greater appreciation that chemical representations are explanatory tools.

Keywords: chemical representations, representational competence, academic profile in chemistry, secondary students

Procedia PDF Downloads 383
3834 A Theoretical Study on Pain Assessment through Human Facial Expression

Authors: Mrinal Kanti Bhowmik, Debanjana Debnath Jr., Debotosh Bhattacharjee

Abstract:

Facial expression is undeniably a part of human manner. It is a significant channel for human communication and can be used to extract emotional features accurately. People in pain often show variations in facial expression that are readily observable to others. A core set of facial actions is likely to occur or to increase in intensity when people are in pain. To describe such changes in facial appearance, a system known as the Facial Action Coding System (FACS) was pioneered by Ekman and Friesen for human observers. According to Prkachin and Solomon, a set of such actions carries the bulk of information about pain; thus, the Prkachin and Solomon Pain Intensity (PSPI) metric was defined. It is therefore important to note that facial expressions, being a behavioral source in communication media, provide an important opening into the issues of non-verbal communication of pain. People express their pain in many ways, and this pain behavior is the basis on which most inferences about pain are drawn in clinical and research settings. Hence, to understand the roles of different pain behaviors, it is essential to study their properties. For the past several years, studies have concentrated on the properties of one specific form of pain behavior, i.e., facial expression. This paper presents a comprehensive study on pain assessment that can model and estimate the intensity of pain that a patient is suffering. It also reviews the historical background of different pain assessment techniques in the context of painful expressions. Different approaches incorporate FACS from a psychological viewpoint and a pain intensity score using the PSPI metric for pain estimation. This paper provides an in-depth analysis of different approaches used in pain estimation and presents the observations found for each technique. It also offers a brief study of the distinguishing features of real and fake pain. The necessity of the study lies in the emerging field of painful-face assessment in clinical settings.
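For reference, the commonly cited form of the PSPI combines frame-level FACS action-unit intensities as sketched below; readers should verify the exact formulation against Prkachin and Solomon's original paper.

```python
def pspi(au4, au6, au7, au9, au10, au43):
    """Prkachin and Solomon Pain Intensity from FACS action-unit intensities.

    AU4 (brow lowering), AU6/AU7 (orbital tightening) and AU9/AU10 (levator
    contraction) are coded 0-5; AU43 (eye closure) is coded 0/1.
    Commonly cited form: PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43.
    """
    return au4 + max(au6, au7) + max(au9, au10) + au43

# hypothetical frame-level codes for one video frame
print(pspi(au4=2, au6=3, au7=1, au9=0, au10=2, au43=1))   # -> 8, on a 0-16 scale
```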

Keywords: facial action coding system (FACS), pain, pain behavior, Prkachin and Solomon pain intensity (PSPI)

Procedia PDF Downloads 315
3833 Investigation of Correlation Between Radon Concentration and Metals in Produced Water from Oilfield Activities

Authors: Nacer Hamza

Abstract:

Natural radiation exposure arises from cosmic rays and from the naturally occurring radioactive materials (NORMs) that originate in the earth's crust and are present everywhere in the environment (1). Significant concentrations of NORMs have been reported in the produced water that comes out during the oil extraction process, so the management of this produced water is a challenge for oil and gas companies. Management options include minimization of produced water, considered the best approach environmentally since the less water produced, the lower the cost of treating it; recycling and reuse, by re-injecting produced water that fulfills certain requirements to enhance oil recovery; or disposal, in cases where the produced water can be neither minimized nor reused. For produced water management, investigating the activity concentrations of the NORMs present in it is the main step toward a better understanding of radionuclide distribution. Many studies have reported the presence of NORMs in produced water and have investigated the correlation between Ra-226 and the different metals present in produced water (2), including the cations and anions Na+, Cl-, Fe2+, and Ca2+; lead, nickel, zinc, cadmium, and copper also commonly occur as heavy metals in oil and gas field produced water (3). However, there has been no real interest in investigating the correlation between Rn-222 and the different metals present in produced water. The methods used are, first, measurement of the radon activity concentration in produced water samples with a RAD7. The RAD7 is a radiometric instrument based on a solid-state detector (4), a type of semiconductor detector for alpha particles emitted from Rn and its progenies. Second, the concentrations of the different metals present in produced water are measured using atomic absorption spectrometry (AAS). Then, to investigate the correlation between Rn-222 activity concentration and the metal concentrations in produced water, a statistical method, Pearson correlation analysis, is applied, based on the correlation coefficients obtained between Rn-222 and the metals. Such an investigation is important for a better understanding of how radionuclides behave in produced water on the basis of their correlation with metals: first, because Rn-222 decays through the sequence Po-218, Pb-214, Bi-214, Po-214, and Pb-210, and these daughters are metals, so they will precipitate with the metals present in produced water; second, because the short half-life of Rn-222 (3.82 days) leads to faster precipitation of its progenies with the metals in produced water.
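A minimal sketch of the Pearson correlation step described above, using scipy on hypothetical Rn-222 activities and metal concentrations; the values are illustrative, not measured data.

```python
import numpy as np
from scipy.stats import pearsonr

# hypothetical produced-water measurements: Rn-222 activity (Bq/L) and metal
# concentrations (mg/L) for a handful of samples
rn222 = np.array([12.4, 8.1, 15.3, 6.7, 10.9, 13.8])
metals = {
    "Pb": np.array([0.42, 0.28, 0.55, 0.21, 0.37, 0.49]),
    "Ni": np.array([0.11, 0.09, 0.14, 0.08, 0.10, 0.13]),
    "Zn": np.array([0.95, 0.60, 1.20, 0.48, 0.82, 1.05]),
}

for metal, conc in metals.items():
    r, p = pearsonr(rn222, conc)           # Pearson correlation coefficient and p-value
    print(f"Rn-222 vs {metal}: r = {r:.2f}, p = {p:.3f}")
```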

Keywords: norms, radon concentration, produced water, heavy metals

Procedia PDF Downloads 127
3832 Evaluation of Traffic Noise Level: A Case Study in a Residential Area of Ishbiliyah, Kuwait

Authors: Jamal Almatawah, Hamad Matar, Abdulsalam Altemeemi

Abstract:

The World Health Organization (WHO) has recognized environmental noise as harmful pollution that causes adverse psychosocial and physiological effects on human health. The motor vehicle is considered to be one of the main sources of noise pollution. It is a universal phenomenon, and it has grown to the point that it has become a major concern for both the public and policymakers. The aim of this paper, therefore, is to investigate traffic noise levels and the contributing factors that affect them, such as traffic volume, heavy-vehicle speed, and meteorological factors, in Ishbiliyah as a sample residential area in Kuwait. Three types of roads were selected in Ishbiliyah: an expressway, a major arterial, and a collector street. Other noise sources that interfere with traffic noise were also considered in this study. Traffic noise levels were measured and analyzed using the Bruel & Kjaer outdoor sound level meter 2250-L (2250 Light). The Count-Cam2 video camera was used to collect peak and off-peak traffic counts. An Ambient Weather WM-5 handheld weather station was used for meteorological factors such as temperature, humidity, and wind speed. Also, spot speeds were obtained using a radar speed gun (Decatur Genesis, model GHD-KPH). All measurements were taken at the same time (simultaneously). The results showed that the traffic noise level is over the allowable limit on all types of roads. The average equivalent noise level (LAeq) for the expressway, major arterial, and collector street was 74.3 dB(A), 70.47 dB(A), and 60.84 dB(A), respectively. In addition, positive correlation coefficients were obtained between traffic noise and traffic volume and between traffic noise and the 85th percentile speed. However, there was no significant relationship with meteorological factors. Abnormal vehicle noise due to poor maintenance or user-enhanced exhausts was found to be one of the strongest factors affecting the overall traffic noise readings.
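For reference, the equivalent continuous level LAeq is an energy average of the A-weighted levels rather than an arithmetic mean; a small sketch with hypothetical 1-second samples is shown below.

```python
import numpy as np

def laeq(levels_dba):
    """Equivalent continuous A-weighted level from a series of short LAeq samples."""
    levels = np.asarray(levels_dba, float)
    return 10 * np.log10(np.mean(10 ** (levels / 10)))   # energy average, not arithmetic

# hypothetical 1-second A-weighted levels, dB(A), at a roadside site
samples = [71.2, 69.8, 74.5, 80.1, 72.3, 68.9, 77.4]
print(f"LAeq = {laeq(samples):.1f} dB(A)")
```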

Keywords: traffic noise, residential area, pollution, vehicle noise

Procedia PDF Downloads 40
3831 Effect of Crown Gall and Phylloxera Resistant Rootstocks on Grafted Vitis vinifera cv. Sultana Grapevine

Authors: Hassan Mahmoudzadeh

Abstract:

The bacterium Agrobacterium vitis causes crown and root gall disease, an important disease of grapevine, Vitis vinifera L. Phylloxera is also one of the most important pests in viticulture. Grapevine rootstocks were developed to provide increased resistance to soil-borne pests and diseases, but rootstock effects on some traits remain unclear. The interaction between rootstock, scion, and environment can induce different responses in grapevine physiology. 'Sultana' (Vitis vinifera L.) is one of the most valuable raisin grape cultivars in Iran. Thus, the aim of this study was to determine the rootstock effect on the growth characteristics, yield components, and quality of 'Sultana' grapevine grown in the Urmia viticulture region. The experimental design was a randomized complete block design, with four treatments, four replicates, and 10 vines per plot. The results show that all variables evaluated were significantly affected by the rootstock. Sultana/110R and Sultana/Nazmieh, among other combinations influenced by the year, had significantly higher yields per vine (13.25 and 12.14 kg/vine, respectively). Indeed, these were higher than those of Sultana/5BB (10.56 kg/vine) and Sultana/Spota (10.25 kg/vine). The number of clusters per burst bud and per vine and the weight of clusters were affected by the rootstock as well. Pruning weight per vine, yield per pruning weight, leaf area per vine, and leaf area index are variables related to the physiology of the grapevine, and these were also affected by the rootstocks. In general, the rootstocks had adapted well to the environment where the experiment was carried out, giving vigor and high yield to the Sultana grapevine, which means that they may be used by grape growers in this region. In sum, the study found the best rootstocks for 'Sultana' to be Nazmieh and 110R in terms of root and shoot growth. However, the choice of the right rootstock depends on various aspects, such as those related to soil characteristics, climate conditions, grape varieties and even clones, and production purposes.

Keywords: grafting, vineyards, grapevine, susceptibility

Procedia PDF Downloads 84
3830 The Relationship of Lean Management Principles with Lean Maturity Levels: Multiple Case Study in Manufacturing Companies

Authors: Alexandre D. Ferraz, Dario H. Alliprandini, Mauro Sampaio

Abstract:

Companies and other institutions are constantly seeking better organizational performance and greater competitiveness. To fulfill this purpose, there are many tools, methodologies, and models for increasing performance. However, the Lean Management approach seems to be the most effective in terms of achieving a significant improvement in productivity relatively quickly. Although Lean tools are relatively easy to understand and implement in different contexts, many organizations are not able to transform themselves into 'Lean companies'. Most implementation efforts have shown only isolated benefits, failing to achieve the desired impact on the performance of the overall enterprise system. There is also a growing perception of the importance of management in Lean transformation, but few studies have empirically investigated and described 'Lean Management'. In order to understand more clearly the ideas that guide Lean Management and its influence on the maturity level of the production system, the objective of this research is to analyze the relationship between the Lean Management principles and the Lean maturity level in organizations. The research also analyzes the principles of Lean Management and their relationship with the 'Lean culture' and the results obtained. The research was developed using the case study methodology. Three manufacturing units of a German multinational company in the industrial automation segment, located in different countries, were studied in order to allow a better comparison between the practices and the level of maturity in the implementation. The primary source of information was a research questionnaire based on the theoretical review. The research showed that the higher the level of Lean Management principles, the higher the Lean maturity level, the Lean culture level, and the level of Lean results obtained in the organization. The research also showed that factors such as the time since the application of Lean concepts and company size were not determinant for the level of Lean Management principles and, consequently, for the level of Lean maturity in the organization. The characteristics of the production system showed much more influence on the different aspects evaluated. The present research also provides recommendations for the managers of the plants analyzed and suggestions for future research.

Keywords: lean management, lean principles, lean maturity level, lean manufacturing

Procedia PDF Downloads 117
3829 Comparative Analysis of Benzene, Toluene, Ethylbenzene, and Xylene Concentrations at Roadside and Urban Background Sites in Leicester and Lagos Using Thermal Desorption-Gas Chromatography-Mass Spectrometry

Authors: Emmanuel Bernard, Rebecca L. Cordell, Akeem A. Abayomi, Rose Alani, Paul S. Monks

Abstract:

This study investigates the prevalence and extent of BTEX (benzene, toluene, ethylbenzene, and xylene) contamination in Leicester, United Kingdom, and Lagos, Nigeria, through field measurements at roadside (RS) and urban background (UB) sites. BTEX concentrations were quantified using thermal desorption-gas chromatography-mass spectrometry (TD-GC-MS). In Leicester, the average RS concentration was 24.9 ± 8.9 μg/m³, and the UB concentration was 12.7 ± 5.7 μg/m³. In Lagos, the RS concentration was significantly higher at 106 ± 39.3 μg/m³, and the UB concentration was 20.1 ± 8.9 μg/m³. The RS concentration in Lagos was approximately 4.3 times higher than in Leicester, while the UB concentration was about 1.6 times higher. These disparities are attributed to differences in road infrastructure, traffic regulation compliance, fuel and oil quality, and local activities. In Leicester, the highest UB concentration (20.5 ± 1.7 μg/m³) was at Knighton Village, near the heavily polluted RS Wigston roundabout. In Lagos, the highest concentration (172.1 ± 12.2 μg/m³) was at Ojuelegba, a major transportation hub. Correlation analysis revealed strong positive relationships between the concentrations of BTEX compounds in both cities, suggesting common sources such as vehicular emissions and industrial activities. The ratios of toluene to benzene (T:B) and m/p-xylene to ethylbenzene (m/p-X:E) were analysed to infer source contributions and the photochemical age of air masses. The T:B ratio in Leicester ranged from 0.44 to 0.71, while in Lagos it ranged from 1.36 to 2.17. The m/p-X:E ratio in Leicester ranged from 2.11 to 2.19, similar to other UK cities, while in Lagos it ranged from 1.65 to 2.32, indicating relatively fresh emissions. This study highlights significant differences in BTEX concentrations between Leicester and Lagos, emphasizing the need for tailored pollution control strategies to address the specific sources and conditions in different urban environments.

Keywords: BTEX contamination, urban air quality, thermal desorption GC-MS, roadside emissions, urban background sites, vehicular emissions, pollution control strategies

Procedia PDF Downloads 15
3828 Combined Effect of Roughness and Suction on Heat Transfer in a Laminar Channel Flow

Authors: Marzieh Khezerloo, Lyazid Djenidi

Abstract:

Owing to the wide range of micro-device applications, the problem of mixing at small scales is of significant interest. Because most of these processes produce heat, strategies for heat removal in such devices also need to be developed and implemented. Many studies focus on the effect of either roughness or suction on heat transfer performance separately, although it would be useful to take advantage of both methods to improve heat transfer performance; unfortunately, there is a gap in this area. The present numerical study is carried out to investigate the combined effects of roughness and wall suction on the heat transfer performance of a laminar channel flow; suction is applied on the top and back faces of the roughness element. The study is carried out for different Reynolds numbers, different suction rates, and various locations of the suction area on the roughness. The flow is assumed to be two-dimensional, incompressible, laminar, and steady. The governing Navier-Stokes equations are solved using the ANSYS-Fluent 18.2 software, and the present results are tested against previous theoretical results. The results show that adding suction enhances the local Nusselt number in the channel. In addition, it is shown that applying suction on the bottom section of the roughness back face reduces the thickness of the thermal boundary layer, which leads to an increase in the local Nusselt number. This indicates that suction is an effective means of improving the heat transfer rate, since it controls the thickness of the thermal boundary layer. It is also shown that the size and intensity of the vortical motion behind the roughness element decrease with increasing suction rate, which leads to a higher local Nusselt number. It can thus be concluded that suction, strategically located on the roughness element, allows control of both the recirculation region and the heat transfer rate. Further results on the drag coefficient and the effect of adding more roughness elements will be presented at the conference.
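
A minimal sketch of how a local Nusselt number could be extracted from a computed wall temperature gradient along a heated channel wall; the profile shape and parameter values are illustrative assumptions, not output from the ANSYS-Fluent simulations reported above.

```python
# Illustrative post-processing: local Nusselt number along a heated wall,
# Nu_x = h_x * H / k, with h_x = q_w / (T_wall - T_bulk) and q_w = -k * dT/dy at the wall.
import numpy as np

H = 0.01            # channel height, m (assumed)
k = 0.6             # fluid thermal conductivity, W/(m K) (assumed, water-like)
x = np.linspace(0.0, 0.1, 200)                          # streamwise positions, m
dTdy_wall = -4000.0 * (1.0 + 0.5 * np.exp(-40 * x))     # wall-normal gradient, K/m (illustrative)
T_wall = 320.0                                          # wall temperature, K (assumed)
T_bulk = 300.0 + 50.0 * x                               # bulk temperature rising downstream (illustrative)

q_wall = -k * dTdy_wall                 # local wall heat flux, W/m^2
h_local = q_wall / (T_wall - T_bulk)    # local heat transfer coefficient, W/(m^2 K)
Nu_local = h_local * H / k              # local Nusselt number

print(f"Nu at inlet:  {Nu_local[0]:.2f}")
print(f"Nu at outlet: {Nu_local[-1]:.2f}")
```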

Keywords: heat transfer, laminar flow, numerical simulation, roughness, suction

Procedia PDF Downloads 97
3827 Bioinformatics Approach to Identify Physicochemical and Structural Properties Associated with Successful Cell-free Protein Synthesis

Authors: Alexander A. Tokmakov

Abstract:

Cell-free protein synthesis is widely used to synthesize recombinant proteins. It allows genome-scale expression of various polypeptides under strictly controlled, uniform conditions. However, only a minor fraction of all proteins can be successfully expressed in the protein synthesis systems currently in use, and the factors determining expression success are poorly understood. At present, a vast volume of data has accumulated in cell-free expression databases, making possible comprehensive bioinformatics analysis and identification of multiple features associated with successful cell-free expression. Here, we describe an approach aimed at identifying multiple physicochemical and structural properties of amino acid sequences associated with protein solubility and aggregation, and we highlight the major correlations obtained using this approach. The developed method includes: categorical assessment of the protein expression data, calculation and prediction of multiple properties of the expressed amino acid sequences, correlation of the individual properties with the expression scores, and evaluation of the statistical significance of the observed correlations. Using this approach, we revealed a number of statistically significant correlations between calculated and predicted features of protein sequences and their amenability to cell-free expression. Some of the features, such as protein pI, hydrophobicity, and the presence of signal sequences, are related mostly to protein solubility, whereas others, such as protein length, the number of disulfide bonds, and the content of secondary structure, affect mainly the expression propensity. We also demonstrated that the amenability of polypeptide sequences to cell-free expression correlates with the presence of multiple sites of post-translational modification. The correlations revealed in this study provide important insights into protein folding and the rationalization of protein production. The developed bioinformatics approach can be of practical use for predicting expression success and optimizing cell-free protein synthesis.
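
A minimal sketch of the property-versus-expression correlation step, assuming sequences and categorical expression scores are already in hand; it uses Biopython's ProteinAnalysis for pI and hydrophobicity (GRAVY) and a Spearman rank correlation, which is one reasonable choice rather than the exact pipeline used in the study, and the sequences and scores below are hypothetical.

```python
# Illustrative correlation of sequence properties with cell-free expression scores.
from Bio.SeqUtils.ProtParam import ProteinAnalysis
from scipy.stats import spearmanr

# Hypothetical amino acid sequences with categorical expression scores
# (1 = successfully expressed, 0 = failed); real datasets contain thousands of entries.
dataset = [
    ("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", 1),
    ("MSDNKKQCCWWCCPWLLRRRKKHHHDDEEEE",   0),
    ("MAHHHHHHVGTGSNDDDDKSPDLGTLEQKLIS",  1),
    ("MFWYFWYLLLLIVVVAAGGPPPCCGHMWWFYY",  0),
]

props, scores = [], []
for seq, score in dataset:
    pa = ProteinAnalysis(seq)
    props.append({
        "length": len(seq),
        "pI": pa.isoelectric_point(),
        "gravy": pa.gravy(),          # grand average of hydropathy
    })
    scores.append(score)

# Rank-correlate each property with the expression outcome, analogous to the
# feature-by-feature screening described in the abstract.
for name in ("length", "pI", "gravy"):
    rho, p = spearmanr([d[name] for d in props], scores)
    print(f"{name:7s} rho = {rho:+.2f}  p = {p:.3f}")
```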

Keywords: bioinformatics analysis, cell-free protein synthesis, expression success, optimization, recombinant proteins

Procedia PDF Downloads 389
3826 Identifying Artifacts in SEM-EDS of Fouled RO Membranes Used for the Treatment of Brackish Groundwater Through Raman and ICP-MS Analysis

Authors: Abhishek Soti, Aditya Sharma, Akhilendra Bhushan Gupta

Abstract:

Fouled reverse osmosis (RO) membranes are primarily characterized by Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDS) for detailed investigation of foulants; however, this approach has severe limitations on several accounts. Apart from inaccuracy in spectral properties and inevitable interferences and interactions between sample and instrument, misidentification of elements due to overlapping peaks is a significant drawback of EDS. This paper discusses this limitation by analyzing fouled polyamide RO membranes obtained from community RO plants in Rajasthan treating brackish water, combining results from EDS and Raman spectroscopy and cross-corroborating them with ICP-MS analysis of water samples prepared by dissolving the deposited salts. The anomalous behavior of different morphic forms of CaCO₃ in aqueous suspensions tends to introduce false reporting of the presence of certain heavy metals and rare earth metals in the scales of fouled RO membranes used for treating brackish groundwater when analyzed with commonly adopted techniques such as SEM-EDS or Raman spectrometry. Peaks of CaCO₃ reflected in the EDS spectra of the membrane were found to be misinterpreted as scandium owing to the automatic assignment of elements by the software. Similarly, morphic forms merged with the dominant peak of CaCO₃ may be reflected as a single molybdenum peak in the Raman spectrum. Subsequent ICP-MS analysis of the deposited salts showed that both Sc and Mo were below detectable levels. It is therefore essential to cross-confirm the results with a destructive analysis method to avoid such interferences. It is further recommended that the different morphic forms of CaCO₃ scales be studied, as they exhibit anomalous properties such as reverse solubility with temperature and hence altered precipitation tendencies, for an accurate description of the composition of scales, which is vital for the smooth functioning of RO systems.
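
A minimal sketch of the cross-corroboration logic described above: elements suggested by automatic EDS peak assignment are flagged as likely artifacts when the corresponding ICP-MS measurement of the dissolved scale is below the detection limit. The element list, concentrations, and detection limits here are hypothetical placeholders.

```python
# Illustrative cross-check of EDS-suggested elements against ICP-MS results.
# Concentrations and detection limits (ug/L) are hypothetical placeholders.

eds_suggested = ["Ca", "C", "O", "Sc", "Mo"]    # elements auto-assigned from EDS peaks

icpms = {                                       # element -> (measured value, detection limit)
    "Ca": (85000.0, 0.5),
    "Sc": (None,    0.1),   # None = below detection limit
    "Mo": (None,    0.2),
}

for element in eds_suggested:
    if element not in icpms:
        print(f"{element}: not targeted by ICP-MS, cannot confirm")
        continue
    value, dl = icpms[element]
    if value is None or value < dl:
        print(f"{element}: below ICP-MS detection limit -> likely EDS/Raman artifact "
              f"(e.g. peak overlap with CaCO3)")
    else:
        print(f"{element}: confirmed at {value:.1f} ug/L")
```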

Keywords: reverse osmosis, foulant analysis, groundwater, EDS, artifacts

Procedia PDF Downloads 66
3825 The Impact of Cryptocurrency Classification on Money Laundering: Analyzing the Preferences of Criminals for Stable Coins, Utility Coins, and Privacy Tokens

Authors: Mohamed Saad, Huda Ismail

Abstract:

The purpose of this research is to examine the impact of cryptocurrency classification on money laundering crimes and to analyze how the preferences of criminals differ according to the type of digital currency used. Specifically, we aim to explore the roles of stablecoins, utility coins, and privacy tokens in facilitating or hindering money laundering activities and to identify the key factors that influence the choices of criminals in using these cryptocurrencies. To achieve our research objectives, we used a dataset of the most highly traded cryptocurrencies (32 currencies) published on CoinMarketCap for 2022. We also conducted a comprehensive review of the existing literature on cryptocurrency and money laundering, with a focus on stablecoins, utility coins, and privacy tokens, and performed several multivariate analyses. Our study reveals that the classification of cryptocurrency plays a significant role in money laundering activities, as criminals tend to prefer certain types of digital currencies over others, depending on their specific needs and goals. Specifically, we found that stablecoins are more commonly used in money laundering due to their relatively stable value and low volatility, which makes them less risky to hold and transfer. Utility coins, on the other hand, are less frequently used in money laundering due to their lack of anonymity and limited liquidity. Finally, privacy tokens, such as Monero and Zcash, are increasingly becoming a preferred choice among criminals due to their high degree of privacy and untraceability. In summary, our study highlights the importance of understanding the nuances of cryptocurrency classification in the context of money laundering and provides insights into the preferences of criminals in using digital currencies for illegal activities. Based on our findings, we recommend that policymakers address the potential misuse of cryptocurrencies for money laundering. By implementing measures to regulate stablecoins, strengthening cross-border cooperation, and fostering public-private partnerships, policymakers can help prevent and detect money laundering activities involving digital currencies.
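
A minimal sketch of one possible multivariate step: projecting coins described by features such as volatility, market capitalization, anonymity, and liquidity onto principal components and checking whether the three classes separate. The feature table is hypothetical, and principal component analysis is only one of several multivariate techniques that could have been used here.

```python
# Illustrative multivariate look at cryptocurrency classes (hypothetical features).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Columns: 30-day volatility, log market cap, anonymity score (0-1), liquidity score (0-1).
features = np.array([
    [0.01, 25.0, 0.1, 0.9],  # stablecoin-like profile
    [0.02, 24.5, 0.1, 0.9],  # stablecoin-like profile
    [0.60, 22.0, 0.2, 0.5],  # utility-coin-like profile
    [0.55, 21.5, 0.2, 0.4],  # utility-coin-like profile
    [0.70, 21.0, 0.9, 0.6],  # privacy-token-like profile
    [0.75, 20.5, 0.9, 0.6],  # privacy-token-like profile
])
labels = ["stable", "stable", "utility", "utility", "privacy", "privacy"]

# Standardize and project onto two principal components to see whether
# the three classes separate along interpretable feature combinations.
pcs = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(features))
for label, (pc1, pc2) in zip(labels, pcs):
    print(f"{label:8s} PC1 = {pc1:+.2f}  PC2 = {pc2:+.2f}")
```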

Keywords: crime, cryptocurrency, money laundering, tokens

Procedia PDF Downloads 65
3824 The Effect of Penalizing Wrong Answers in the Computerized Modified Multiple Choice Testing System

Authors: Min Hae Song, Jooyong Park

Abstract:

Even though assessment using information and communication technology will most likely lead the future of educational assessment, there is little research on this topic. Computerized assessment will not only cut costs but also measure students' performance in ways not possible before. In this context, this study introduces a tool that can overcome the problems of multiple-choice tests. Multiple-choice (MC) tests are efficient for automatic grading; however, their structure allows students to find the correct answer among the options even when they do not know it. A computerized modified multiple-choice testing system (CMMT) was developed using the interactivity of computers; it presents the question first and displays the options only for a short time when the student requests them. This study was conducted to find out whether penalizing wrong answers in CMMT could reduce random guessing. We checked whether students actually knew the answers by having them respond in short-answer form before choosing from the given options in either the CMMT or the MC format. Ninety-four students were tested with the instruction that they would be penalized for wrong answers but not for leaving an item unanswered. There were four experimental conditions: a high or a low penalty rate, each in the traditional multiple-choice or the CMMT format. In the low-penalty condition, the penalty rate was the probability of getting the correct answer by random guessing; in the high-penalty condition, students were penalized at twice that rate. The results showed that the number of omitted responses was significantly higher and the number of random guesses significantly lower for the CMMT format. There were no significant differences between the two penalty conditions, which may be because the actual score difference between the two conditions was too small. In the discussion, the possibility of applying the CMMT format while penalizing wrong answers in actual testing settings is addressed.
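
A minimal sketch of the penalty scoring rule described above, where the low penalty equals the chance level for an item with k options and the high penalty is twice that; the option count and the sample response list are illustrative assumptions.

```python
# Illustrative formula scoring with penalties for wrong answers but not for omissions.
def penalized_score(responses, n_options=4, penalty_multiplier=1.0):
    """responses: list of 'correct', 'wrong', or 'omit' per item.
    Low-penalty condition: multiplier 1.0 (penalty = chance level, 1/n_options).
    High-penalty condition: multiplier 2.0 (twice the chance level)."""
    penalty = penalty_multiplier / n_options
    score = 0.0
    for r in responses:
        if r == "correct":
            score += 1.0
        elif r == "wrong":
            score -= penalty
        # omissions neither add nor subtract points
    return score

answers = ["correct", "wrong", "omit", "correct", "wrong", "omit", "correct"]
print("low penalty :", penalized_score(answers, penalty_multiplier=1.0))   # 2.5
print("high penalty:", penalized_score(answers, penalty_multiplier=2.0))   # 2.0
```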

Keywords: computerized modified multiple choice test format, multiple-choice test format, penalizing, test format

Procedia PDF Downloads 150
3823 Examining the Missing Feedback Link in Environmental Kuznets Curve Hypothesis

Authors: Apra Sinha

Abstract:

The inverted U-shaped Environmental Kuznets curve (EKC) describes the pollution-income relationship: pollution and environmental degradation initially rise with income per capita, but the trend reverses at higher income levels, where economic growth initiates environmental upgrading. However, the effect that increased environmental degradation has on growth is the missing feedback link, which the EKC hypothesis does not address. This paper examines the missing feedback link in the EKC hypothesis in the Indian context by analyzing the causal association between fossil fuel consumption, carbon dioxide emissions, and economic growth for India. Fossil fuel consumption is taken here as a proxy for a driver of economic growth. The causal association between these variables has been analyzed using five interventions: 1) urban development, for which urbanization is taken as a proxy; 2) industrial development, for which industrial value added is taken as a proxy; 3) trade liberalization, for which the sum of exports and imports as a share of GDP is taken as a proxy; and 4) financial development, for which a) domestic credit to the private sector and b) net foreign assets are taken as proxies. The interventions were chosen keeping in view India's economic liberalization. The main aim of the paper is to investigate the missing feedback link for the Environmental Kuznets Curve hypothesis before and after incorporating the intervening variables. The period of study is from 1971 to 2011, as it covers the pre- and post-liberalization eras in India. All data are taken from World Bank country-level indicators. The Johansen and Juselius cointegration testing methodology and error-correction-based Granger causality have been applied to all the variables. The results clearly show that, of the five interventions, only two address the missing feedback link. This paper puts forward significant policy implications for environmental protection and sustainable development.
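
A minimal sketch of the Johansen cointegration step using statsmodels; the series are randomly generated placeholders standing in for CO₂ emissions, fossil fuel consumption, GDP per capita, and an intervention variable, so only the mechanics, not the results, mirror the study.

```python
# Illustrative Johansen cointegration test on placeholder annual series (1971-2011).
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(0)
years = pd.RangeIndex(1971, 2012)
n = len(years)

# Placeholder trending series standing in for log CO2, log energy use,
# log GDP per capita, and an intervention proxy (e.g. urbanization).
trend = np.arange(n, dtype=float)
data = pd.DataFrame({
    "ln_co2":    0.030 * trend + rng.normal(0, 0.05, n).cumsum(),
    "ln_energy": 0.025 * trend + rng.normal(0, 0.05, n).cumsum(),
    "ln_gdp":    0.040 * trend + rng.normal(0, 0.05, n).cumsum(),
    "urban":     0.020 * trend + rng.normal(0, 0.05, n).cumsum(),
}, index=years)

# det_order=0: constant term; k_ar_diff=1: one lagged difference in the VECM.
result = coint_johansen(data, det_order=0, k_ar_diff=1)
for r, (stat, cvals) in enumerate(zip(result.lr1, result.cvt)):
    print(f"rank <= {r}: trace = {stat:7.2f}  5% critical value = {cvals[1]:7.2f}")
```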

Keywords: environmental Kuznets curve hypothesis, fossil fuel consumption, industrialization, trade liberalization, urbanization

Procedia PDF Downloads 227