Search results for: unitary response function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9602


962 OnabotulinumtoxinA Injection for Glabellar Frown Lines as an Adjunctive Treatment for Depression

Authors: I. Witbooi, J. De Smidt, A. Oelofse

Abstract:

Negative emotions that are common in depression are coupled with the activation of the corrugator supercilii and procerus muscles in the glabellar region of the face. This research investigated the impact of OnabotulinumtoxinA (BOTOX) on the improvement of emotional states in depressed subjects by relaxing these muscles. The aim of the study was to investigate the effectiveness of BOTOX treatment for glabellar frown lines as an adjunctive therapy for Major Depressive Disorder (MDD) and to improve the quality of life and self-esteem of the subjects. It is hypothesized that BOTOX treatment for glabellar frown lines reduces depressive symptoms significantly and can therefore augment conventional antidepressant medication. Forty-five (45) subjects diagnosed with MDD were assigned to a treatment (n = 15), placebo (n = 15), or control (n = 15) group. The treatment group received BOTOX injections, while the placebo group received saline injections into the procerus and corrugator supercilii muscles, with follow-up visits at weeks 3, 6, and 12. The control group received neither BOTOX nor saline injections and was only interviewed again in the 12th week. To evaluate the effect of BOTOX treatment in the glabellar region on depressive symptoms, the Montgomery-Asberg Depression Rating Scale (MADRS) and the Beck Depression Inventory (BDI) were used. The Quality of Life Enjoyment and Satisfaction Questionnaire-Short Form (Q-LES-Q-SF) and the Rosenberg Self-Esteem Scale (RSES) were used to assess quality of life and self-esteem. Participants were followed up for a 12-week period. The expected primary outcome measure is the response to treatment, defined as a ≥ 50% reduction in MADRS score from baseline. Other outcome measures include a clinically significant decrease in BDI scores and increases in quality of life and self-esteem. Initial results show a clear trend towards such differences.
Patients in the Botox group had a mean MADRS score of 14.0 at 3 weeks compared to 20.3 in the placebo group. This trend was still visible at 6 weeks, with the Botox and placebo groups scoring an average of 10 vs. 18 respectively. The mean differences in MADRS scores from baseline to 3 weeks were 9.3 and 2.0 for the Botox and placebo groups respectively. Similarly, the BDI scores were lower in the Botox group (17.25) compared to the placebo group (19.43). The self-esteem and quality-of-life measures showed the expected pattern at this stage, with an RSES score of 19.1 in the Botox group compared to 18.6 in the placebo group. Similarly, the Botox patients had a higher Q-LES-Q-SF score of 49.2 compared to 46.1 for the placebo group. Conclusions: Initial results demonstrated that the use of Botox had positive effects on both depression and self-esteem scores when compared to a placebo group.
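The primary outcome criterion described above (response = a ≥ 50% reduction in MADRS score from baseline) can be sketched in a few lines; the scores below are hypothetical illustrations, not the study's individual-level data.

```python
def madrs_response(baseline: float, follow_up: float) -> bool:
    """Treatment response as defined in the study:
    >= 50% reduction in MADRS score from baseline."""
    if baseline <= 0:
        raise ValueError("baseline MADRS score must be positive")
    return (baseline - follow_up) / baseline >= 0.5

# Hypothetical patients: a drop from 24 to 10 qualifies as response,
# a drop from 24 to 14 does not (41.7% reduction).
print(madrs_response(24, 10))  # True
print(madrs_response(24, 14))  # False
```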

Keywords: adjunctive therapy, depression, glabellar area, OnabotulinumtoxinA

Procedia PDF Downloads 124
961 Microbiological Profile of UTI along with Their Antibiotic Sensitivity Pattern with Special Reference to Nitrofurantoin

Authors: Rupinder Bakshi, Geeta Walia, Anita Gupta

Abstract:

Introduction: Urinary tract infections (UTI) are considered to be one of the most common bacterial infections, with an estimated annual global incidence of 150 million. Antimicrobial drug resistance is one of the major threats due to widespread usage of uncontrolled antibiotics. Materials and Methods: A total of 9149 urine samples were collected from R.H Patiala and processed in the Department of Microbiology, G.M.C Patiala. Urine samples were inoculated on MacConkey’s and blood agar plates using a calibrated loop delivering 0.001 ml of sample and incubated at 37 °C for 24 hrs. The organisms were identified by colony characters, Gram staining, and biochemical reactions. Antimicrobial susceptibility of the isolates was determined against various antimicrobial agents (Hi-Media, Mumbai, India) by the Kirby-Bauer disk diffusion method on Mueller-Hinton agar plates. Results: Maximum patients were in the age group of 21-30 yrs, followed by 31-40 yrs. Males (34%) are less prone to urinary tract infections than females (66%). Out of 9149 urine samples, the culture was positive in 25% (2290) of samples. Esch. coli was the most common isolate at 60.3% (n = 1378), followed by Klebsiella pneumoniae 13.5% (n = 310), Proteus spp. 9% (n = 209), Staphylococcus aureus 7.6% (n = 173), Pseudomonas aeruginosa 3.7% (n = 84), Citrobacter spp. 3.1% (n = 70), Staphylococcus saprophyticus 1.8% (n = 142), Enterococcus faecalis 0.8% (n = 19), and Acinetobacter spp. 0.2% (n = 5). Gram-negative isolates showed higher sensitivity towards Piperacillin + Tazobactam (67%), Amikacin (80%), Nitrofurantoin (82%), Aztreonam (100%), Imipenem (100%), and Meropenem (100%), while Gram-positive isolates showed good response towards Netilmicin (69%), Nitrofurantoin (79%), Linezolid (98%), Vancomycin (100%), and Teicoplanin (100%). 465 (23%) isolates were resistant to Penicillins and 1st and 2nd generation Cephalosporins; these were further tested by the double disk approximation test and the combined disk method for ESBL production.
Out of 465 isolates, 375 were ESBL producers, consisting of 264 (70.6%) Esch. coli and 111 (29.4%) Klebsiella pneumoniae. Susceptibility of ESBL producers to Imipenem, Nitrofurantoin, and Amikacin was found to be 100%, 76%, and 75% respectively. Conclusion: Uropathogens are increasingly showing resistance to many antibiotics, making empiric management of outpatient UTIs challenging. Ampicillin, Cotrimoxazole, and Ciprofloxacin should not be used in empiric treatment. Nitrofurantoin could be used in lower urinary tract infection. Knowledge of uropathogens and their antimicrobial susceptibility pattern in a geographical region will help in appropriate and judicious antibiotic usage in a health care setup.
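As a minimal illustration of the rates quoted above (not the authors' analysis pipeline), the headline percentages can be recomputed directly from the reported counts:

```python
def percent(part: int, total: int) -> float:
    """Percentage of `part` in `total`, rounded to one decimal place."""
    return round(100 * part / total, 1)

# Counts taken from the abstract:
print(percent(2290, 9149))  # culture positivity: 25.0
print(percent(375, 465))    # ESBL producers among resistant isolates: 80.6
```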

Keywords: urinary tract infection, UTI, antibiotic susceptibility pattern, ESBL

Procedia PDF Downloads 330
960 Impact of Reproductive Technologies on Women's Lives in New Delhi: A Study from Feminist Perspective

Authors: Zairunisha

Abstract:

This paper is concerned with the ways in which Assisted Reproductive Technologies (ARTs) affect women’s lives and perceptions regarding their infertility, contraception, and reproductive health. As in other female animals, nature has endowed the human female with the biological potential of procreation and motherhood. However, during the last few decades, this phenomenal disposition of women has become a technological affair to achieve fertility and contraception. Medical practices in patriarchal societies are governed by male scientists and technical and medical professionals who try to control women as procreators instead of providing them choices. The use of ARTs presents innumerable vexed ethical questions and issues, such as: the place and role of a child in a woman’s life, the freedom of women to make their own choices related to the use of ARTs, the challenges and complexities women face at social and personal levels regarding the use of ARTs, and the effect of ARTs on their life as mothers and on other relationships. The paper is based on a survey study to explore and analyze the above ethical issues arising from the use of Assisted Reproductive Technologies (ARTs) by women in New Delhi, the capital of India. A rapid increase in the number of fertility clinics has been noticed recently. It is claimed that these clinics serve women by using ART procedures for infertile couples and individuals who want to have a child or terminate a pregnancy. The study is an attempt to articulate a critique of ARTs from a feminist perspective. A qualitative feminist research methodology has been adopted for conducting the survey study. An attempt has been made to identify the ways in which a woman’s life is affected in terms of her perceptions, apprehensions, choices, and decisions regarding new reproductive technologies.
A sample of 18 women from New Delhi was taken for in-depth interviews to investigate their perception of and response to the use of ARTs, with a focus on (i) successful use of ARTs, (ii) unsuccessful use of ARTs, and (iii) use of ARTs in progress with results yet to be known. The survey investigated the impact of ARTs on women’s physical, emotional, and psychological conditions as well as on their social relations and choices. The complexities and challenges faced by women in the voluntary and involuntary (forced) use of ARTs in Delhi have been illustrated. A critical analysis of the interviews revealed that these technologies are used and developed for making profits at the cost of women’s lives, through which economically privileged women and individuals are able to purchase services from less privileged ones. In this way, the amalgamation of technology and cultural traditions is redefining and re-conceptualising the traditional patterns of motherhood, fatherhood, kinship, and family relations within the realm of new ways of reproduction introduced through the use of ARTs.

Keywords: reproductive technologies, infertilities, voluntary, involuntary

Procedia PDF Downloads 363
959 De novo Transcriptome Assembly of Lumpfish (Cyclopterus lumpus L.) Brain Towards Understanding their Social and Cognitive Behavioural Traits

Authors: Likith Reddy Pinninti, Fredrik Ribsskog Staven, Leslie Robert Noble, Jorge Manuel de Oliveira Fernandes, Deepti Manjari Patel, Torstein Kristensen

Abstract:

Understanding fish behavior is essential to improve animal welfare in aquaculture research. Behavioral traits can have a strong influence on fish health and habituation. To identify the genes and biological pathways responsible for lumpfish behavior, we performed an experiment to understand the interspecies relationship (mutualism) between lumpfish and salmon. We also tested the correlation between gene expression data and observational/physiological data to identify the essential genes that trigger stress and swimming behavior in lumpfish. After de novo assembly of the brain transcriptome, all samples were individually mapped to the available lumpfish (Cyclopterus lumpus L.) primary genome assembly (fCycLum1.pri, GCF_009769545.1). Out of ~16749 genes expressed in the brain samples, we found 267 genes to be statistically significant (P < 0.05), found only in the odor vs. control (1), model vs. control (41), and salmon vs. control (225) comparisons. However, only eight genes had |logFC| ≥ 0.5; these are considered differentially expressed genes (DEGs). Thus, we were unable to find differential genes related to behavioral traits from the RNA-Seq data analysis alone. From the correlation analysis between gene expression data and observational/physiological data (serotonin (5-HT), dopamine (DA), 3,4-dihydroxyphenylacetic acid (DOPAC), 5-hydroxyindoleacetic acid (5-HIAA), and noradrenaline (NORAD)), we found 2495 genes to be significant (P < 0.05); among these, 1587 genes were positively correlated with the noradrenaline (NORAD) hormone group. This suggests that noradrenaline triggers the change in pigmentation and skin color in lumpfish. Genes related to behavioral traits such as rhythmic, locomotory, feeding, visual, pigmentation, and stress behavior, response to other organisms, taxis, and dopamine and other neurotransmitter synthesis were obtained from the correlation analysis.
In the KEGG pathway enrichment analysis, we found important pathways, such as the calcium signaling pathway and adrenergic signaling in cardiomyocytes, both involved in cell signaling, behavior, emotion, and stress. Calcium is an essential signaling molecule in brain cells and could affect the behavior of fish. Our results suggest that changes in calcium homeostasis and adrenergic receptor binding activity lead to changes in fish behavior during stress.
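The two thresholds quoted in the abstract (significance at P < 0.05 and |logFC| ≥ 0.5) amount to a simple filter over per-gene statistics. A minimal sketch, assuming a plain (gene, logFC, p-value) layout rather than the authors' actual pipeline:

```python
def filter_degs(results, p_cut=0.05, lfc_cut=0.5):
    """Keep genes passing both the p-value and |log fold change| cutoffs.

    results: iterable of (gene, log_fc, p_value) tuples.
    """
    return [g for g, lfc, p in results if p < p_cut and abs(lfc) >= lfc_cut]

# Toy table (invented values): only gene_a passes both thresholds.
toy = [("gene_a", 0.8, 0.01), ("gene_b", 0.2, 0.001), ("gene_c", -0.6, 0.2)]
print(filter_degs(toy))  # ['gene_a']
```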

Keywords: behavior, De novo, lumpfish, salmon

Procedia PDF Downloads 161
958 Analyzing Transit Network Design versus Urban Dispersion

Authors: Hugo Badia

Abstract:

This research addresses which transit network structure is most suitable to serve specific demand requirements in an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips; this approach is based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks in which transferring is essential to complete most trips. To answer which of them is the best option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct trip-based network; and a transfer-based one, the latter two representing the alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given scenario of dispersion, the best alternative is the structure with the minimum cost. This dispersion degree is defined in a simple way, considering that only a central area attracts all trips. If this area is small, we have a highly concentrated mobility pattern; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability of each structure as a function of that urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If the urban dispersion advances, the introduction of more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers.
The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city, and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, defined by the Gini coefficient, and centralization, defined by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology allows us to obtain the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
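The concentration dimension mentioned above is measured with the Gini coefficient, which can be computed from, for example, trip counts per zone. A minimal sketch (the zone values are invented for illustration):

```python
def gini(values):
    """Gini coefficient of a non-negative sample: 0 for a perfectly even
    distribution, approaching 1 for full concentration. Uses the sorted
    rank-weighted form: G = 2 * sum_i(i * x_i) / (n * sum(x)) - (n + 1) / n."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([1, 1, 1, 1]))   # 0.0  -> trips spread evenly over zones
print(gini([0, 0, 0, 10]))  # 0.75 -> trips concentrated in one zone
```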

Keywords: analytical network design model, network structure, public transport, urban dispersion

Procedia PDF Downloads 219
957 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed on different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B.
Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies that forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
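The a1/b1, a2/b2 example above can be sketched as a trivial pairing of communicating instances onto shared machines; the instance and machine names are the hypothetical ones from the text, and this is only an illustration of the embedding idea, not the paper's formal method:

```python
def embed(instances_a, instances_b):
    """Pair the i-th instance of service A with the i-th instance of
    service B on the same machine, so each pair can talk over localhost
    without a load balancer between the two services."""
    return {f"m{i}": pair
            for i, pair in enumerate(zip(instances_a, instances_b), start=1)}

print(embed(["a1", "a2"], ["b1", "b2"]))
# {'m1': ('a1', 'b1'), 'm2': ('a2', 'b2')}
```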

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 189
956 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm, combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when the algorithm performed better than the best known classical algorithm of the time for Max-Cut graphs. Whilst classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and the original work is often used as a benchmarking tool and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover’s or Shor’s, highlights to the world the potential that quantum computing holds. It also points to the prospect of a real quantum advantage: if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization.
The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, a linear approximation based method, and in some instances it can even find a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
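The gradient-free optimization idea described above can be illustrated with a minimal elitist evolutionary loop. The quadratic toy cost below merely stands in for the expensive expected Max-Cut value a real QAOA circuit would return; this is a sketch of the general approach, not the authors' algorithm:

```python
import random

def toy_cost(params):
    """Stand-in for a black-box QAOA expectation value (minimum at all 1s)."""
    return sum((p - 1.0) ** 2 for p in params)

def evolve(cost, dim=4, pop=20, gens=200, sigma=0.1, seed=0):
    """Elitist evolutionary search: each generation, mutate the incumbent
    with Gaussian noise and keep the best of parent + children.
    No gradients are used, which is the point of the EA approach."""
    rng = random.Random(seed)
    best = [rng.uniform(-3, 3) for _ in range(dim)]
    for _ in range(gens):
        children = [[p + rng.gauss(0, sigma) for p in best] for _ in range(pop)]
        best = min(children + [best], key=cost)
    return best

best = evolve(toy_cost)
print(round(toy_cost(best), 3))  # converges close to 0
```

In a real hybrid, each candidate in `children` would be a vector of QAOA angles evaluated on a simulator or quantum device, and the population evaluations could run in parallel.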

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 46
955 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation

Authors: W. Meron Mebrahtu, R. Absi

Abstract:

Velocity distribution in turbulent open-channel flows is organized in a complex manner. This is due to the large spatial and temporal variability of fluid motion resulting from the free-surface turbulent flow condition. The phenomenon is complicated further by the complex geometry of channels and the presence of transported solids. Thus, several efforts have been made to understand the phenomenon and obtain accurate mathematical models that are suitable for engineering applications. However, predictions are often inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. Therefore, the aim of this work is to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus is placed on acceptable simplifications of the general transport equations and an accurate representation of the eddy viscosity. A wide rectangular open channel is a suitable starting point for the study; other assumptions are a smooth wall and sediment-free flow under steady and uniform flow conditions. These assumptions allow examining the effect of the bottom wall and the free surface only, which is a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations are obtained for the velocity profiles: one from the Reynolds-averaged Navier-Stokes (RANS) equation and one from the equilibrium between turbulent kinetic energy (TKE) production and dissipation. Different analytic models for eddy viscosity, TKE, and mixing length were then assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl’s eddy viscosity model and the Van Driest mixing length gives a more precise result.
For the log layer and outer region, a mixing length equation derived from von Karman’s similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for the eddy viscosity is used. This method yields more accurate velocity profiles with a single value of the damping coefficient valid under different flow conditions. This work continues with the investigation of narrow channels, complex geometries, and the effect of solids transported in sewers.
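The near-wall part of the model discussed above (Prandtl mixing-length eddy viscosity with Van Driest damping) can be sketched numerically. The constants κ = 0.41 and A+ = 26 are standard textbook values, not the authors' calibration:

```python
import math

# Van Driest damped mixing length: l+ = kappa * y+ * (1 - exp(-y+ / A+)),
# integrated through the near-wall momentum balance
#   du+/dy+ = 2 / (1 + sqrt(1 + 4 l+^2)).
KAPPA, A_PLUS = 0.41, 26.0

def u_plus(y_target, dy=0.001):
    """Dimensionless velocity u+ at wall distance y+ = y_target,
    by forward-Euler integration from the wall."""
    u = y = 0.0
    while y < y_target:
        l = KAPPA * y * (1.0 - math.exp(-y / A_PLUS))
        u += dy * 2.0 / (1.0 + math.sqrt(1.0 + 4.0 * l * l))
        y += dy
    return u

print(round(u_plus(5.0), 2))    # ~4.9: u+ ~ y+ in the viscous sublayer
print(round(u_plus(100.0), 2))  # ~16: close to the log law (1/kappa) ln(y+) + B
```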

Keywords: accuracy, eddy viscosity, sewers, velocity profile

Procedia PDF Downloads 99
954 The Effects of Computer Game-Based Pedagogy on Graduate Students' Statistics Performance

Authors: Eva Laryea, Clement Yeboah

Abstract:

A pretest-posttest, within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on the achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the Southeast United States. We analyzed pretest-posttest differences using paired samples t-tests for achievement and for statistics anxiety. The results of the t-test for knowledge in statistics were statistically significant, indicating significant mean gains in statistical knowledge as a function of the game-based intervention. Likewise, the results of the t-test for statistics-related anxiety were also statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games developed by the researchers can help to create a more dynamic and engaging classroom environment, as well as improve student learning outcomes. For students, playing these educational games can help to develop important skills such as problem solving, critical thinking, and collaboration. Students can develop interest in the subject matter and spend quality time learning the course material as they play, without even realizing that they are studying a supposedly hard course. The future directions of the present study are promising, as technology continues to advance and become more widely available. Some potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools.
It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to revolutionize the way graduate students learn basic statistics and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers, and will continue to be a dynamic and rapidly evolving field for years to come.
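The paired-samples t-test used in the analysis above is straightforward to compute from pretest/posttest differences. A minimal sketch with invented scores (not the study's N = 34 data):

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic and degrees of freedom:
    t = mean(d) / (sd(d) / sqrt(n)), where d_i = pre_i - post_i."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)
    return t, n - 1

# Invented pretest/posttest achievement scores for eight students:
pre = [55, 60, 48, 52, 65, 58, 50, 62]
post = [70, 72, 60, 66, 75, 71, 59, 74]
t, df = paired_t(pre, post)
print(round(t, 2), df)  # -17.5 7 (large negative t: posttest scores are higher)
```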

Keywords: pretest-posttest within subjects, experimental design, achievement, statistics-related anxiety

Procedia PDF Downloads 49
953 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia

Authors: Jun Won Kim

Abstract:

Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing resting-state TGC between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naïve (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for the spectral analyses: delta (1–4 Hz), theta (4–8 Hz), slow alpha (8–10 Hz), fast alpha (10–13.5 Hz), beta (13.5–30 Hz), and gamma (30–80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the signal processing toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the ability of the TGC data to discriminate schizophrenia. Results: The patients with schizophrenia showed a significant increase in resting-state TGC at all electrodes.
The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which contains information about neuronal interactions from the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to that in the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
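Phase-amplitude coupling of the kind measured above can be illustrated with a Canolty-style modulation index, MI = |mean(A_gamma · exp(i·φ_theta))|, computed from the theta-band phase and gamma-band amplitude envelope. The synthetic signal and FFT-based filtering below are illustrative only; they are not the study's Matlab pipeline:

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (a minimal Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    if n % 2 == 0:
        h[0] = h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[0] = 1
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def bandpass(x, fs, lo, hi):
    """Crude brick-wall band-pass filter in the frequency domain."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(X, len(x))

def modulation_index(x, fs, theta=(4, 8), gamma=(30, 80)):
    """Canolty modulation index: |mean(gamma amplitude * e^{i * theta phase})|."""
    phase = np.angle(analytic(bandpass(x, fs, *theta)))
    amp = np.abs(analytic(bandpass(x, fs, *gamma)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Synthetic 10 s signal at 500 Hz: in `coupled`, the 50 Hz "gamma" amplitude
# is modulated by the 6 Hz "theta" wave; in `uncoupled` it is constant.
fs, t = 500, np.arange(0, 10, 1 / 500)
theta_w = np.sin(2 * np.pi * 6 * t)
coupled = (1 + theta_w) * np.sin(2 * np.pi * 50 * t) + theta_w
uncoupled = np.sin(2 * np.pi * 50 * t) + theta_w
print(modulation_index(coupled, fs) > modulation_index(uncoupled, fs))  # True
```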

Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility

Procedia PDF Downloads 126
952 Exploring the Differences between Self-Harming and Suicidal Behaviour in Women with Complex Mental Health Needs

Authors: Sophie Oakes-Rogers, Di Bailey, Karen Slade

Abstract:

Female offenders are a uniquely vulnerable group who are at high risk of suicide. Whilst the prevention of self-harm and suicide remains a key global priority, we need to better understand the relationship between these challenging behaviours, which constitute a pressing problem, particularly in environments designed to prioritise safety and security. Method choice is unlikely to be random; it is instead influenced by a range of cultural, social, psychological, and environmental factors, which change over time and between countries. A key aspect of self-harm and suicide in women receiving forensic care is the lack of free access to methods. At a time when self-harm and suicide rates continue to rise internationally, understanding the role of these influencing factors and the impact of current suicide prevention strategies on the use of near-lethal methods is crucial. This poster presentation will present findings from 25 interviews and 3 focus groups in a study that enlisted a Participatory Action Research approach to explore the differences between self-harming and suicidal behaviour. A key element of this research was using the lived experiences of women receiving forensic care in one forensic pathway in the UK, and of the staff who care for them, to discuss the role of near-lethal self-harm (NLSH). The findings and suggestions from the lived accounts of the women and staff will inform a draft assessment tool that better assesses the risk of suicide based on the lethality of methods. This tool will be the first of its kind to specifically capture the needs of women receiving forensic services. Preliminary findings indicate that women engage in NLSH for two key reasons, determined by their history of self-harm. Women who have a history of superficial, non-life-threatening self-harm appear to engage in NLSH in response to a significant life event such as family bereavement or sentencing.
For these women, suicide appears to be a realistic option to overcome their distress. This, however, differs from women with a lifetime history of NLSH, who engage in such behaviour in a bid to overcome the grief and shame associated with historical abuse. NLSH in these women reflects a lifetime of suicidality and indicates that they pose the greatest risk of completed suicide. Findings also indicate differences in method selection between forensic provisions. Restriction of means appears to play a role in method selection, and findings suggest it causes method substitution. Implications will be discussed relating to the screening of female forensic patients and improvements to current suicide prevention strategies.

Keywords: forensic mental health, method substitution, restriction of means, suicide

Procedia PDF Downloads 164
951 Predictive Modelling of Curcuminoid Bioaccessibility as a Function of Food Formulation and Associated Properties

Authors: Kevin De Castro Cogle, Mirian Kubo, Maria Anastasiadi, Fady Mohareb, Claire Rossi

Abstract:

Background: The bioaccessibility of bioactive compounds is a critical determinant of the nutritional quality of various food products. Despite its importance, there are few comprehensive studies assessing how the composition of a food matrix influences the bioaccessibility of a compound of interest. This knowledge gap has prompted a growing need to investigate the intricate relationship between food matrix formulations and the bioaccessibility of bioactive compounds. One such class of bioactive compounds that has attracted considerable attention is curcuminoids. These naturally occurring phytochemicals, extracted from the roots of Curcuma longa, have gained popularity owing to their purported health benefits, but they are also well known for their poor bioaccessibility. Project aim: The primary objective of this research project is to systematically assess the influence of matrix composition on the bioaccessibility of curcuminoids. Additionally, the study aims to develop a series of predictive models for bioaccessibility, providing valuable insights for optimising the formulation of functional foods and more descriptive nutritional information for potential consumers. Methods: Food formulations enriched with curcuminoids were subjected to simulated in vitro digestion, and their bioaccessibility was characterized with chromatographic and spectrophotometric techniques. The resulting data served as the foundation for the development of predictive models capable of estimating bioaccessibility from specific physicochemical properties of the food matrices. Results: One striking finding of this study was the strong correlation observed between the concentration of macronutrients within the food formulations and the bioaccessibility of curcuminoids. 
In fact, macronutrient content emerged as a highly informative explanatory variable and was used, alongside other variables, as a predictor in a Bayesian hierarchical model that predicted curcuminoid bioaccessibility accurately for the majority of test formulations (R² = 0.97 in optimisation; R² = 0.92 under leave-one-out cross-validation, LOOCV). These preliminary results open the door to further exploration, enabling researchers to investigate a broader spectrum of food matrix types and additional properties that may influence bioaccessibility. Conclusions: This research sheds light on the intricate interplay between food matrix composition and the bioaccessibility of curcuminoids. It lays a foundation for future investigations, offering a promising avenue for advancing our understanding of bioactive compound bioaccessibility and its implications for the food industry and informed consumer choices.
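The leave-one-out cross-validation (LOOCV) scoring named above can be illustrated with a minimal sketch. The snippet below substitutes an ordinary least-squares fit for the Bayesian hierarchical model described in the abstract, and all data are synthetic; only the LOOCV R² bookkeeping reflects the evaluation procedure, not the study's actual model or measurements.

```python
import numpy as np

def loocv_r2(X, y):
    """Leave-one-out cross-validation R^2 for an ordinary least-squares
    fit (an illustrative stand-in for the Bayesian hierarchical model)."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i                    # hold out sample i
        A = np.column_stack([np.ones(mask.sum()), X[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        preds[i] = np.concatenate([[1.0], X[i]]) @ coef
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic example: bioaccessibility as a noisy linear function of
# two hypothetical macronutrient concentrations (made-up data).
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(30, 2))                # e.g. fat, protein
y = 0.05 * X[:, 0] + 0.02 * X[:, 1] + rng.normal(0, 0.02, 30)
print(round(loocv_r2(X, y), 3))
```

Each sample is predicted by a model fitted without it, so the reported R² estimates out-of-sample performance rather than fit quality on the training data.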

Keywords: bioactive bioaccessibility, food formulation, food matrix, machine learning, probabilistic modelling

Procedia PDF Downloads 59
950 Innovating Electronics Engineering for Smart Materials Marketing

Authors: Muhammad Awais Kiani

Abstract:

The field of electronics engineering plays a vital role in the marketing of smart materials. Smart materials are innovative, adaptive materials that can respond to external stimuli, such as temperature, light, or pressure, in order to enhance performance or functionality. As the demand for smart materials continues to grow, it is crucial to understand how electronics engineering can contribute to their marketing strategies. This abstract presents an overview of the role of electronics engineering in the marketing of smart materials. It explores the various ways in which electronics engineering enables the development and integration of smart features within materials, enhancing their marketability. Firstly, electronics engineering facilitates the design and development of sensing and actuating systems for smart materials. These systems enable the detection and response to external stimuli, providing valuable data and feedback to users. By integrating sensors and actuators into materials, their functionality and performance can be significantly enhanced, making them more appealing to potential customers. Secondly, electronics engineering enables the creation of smart materials with wireless communication capabilities. By incorporating wireless technologies such as Bluetooth or Wi-Fi, smart materials can seamlessly interact with other devices, providing real-time data and enabling remote control and monitoring. This connectivity enhances the marketability of smart materials by offering convenience, efficiency, and improved user experience. Furthermore, electronics engineering plays a crucial role in power management for smart materials. Implementing energy-efficient systems and power harvesting techniques ensures that smart materials can operate autonomously for extended periods. This aspect not only increases their market appeal but also reduces the need for constant maintenance or battery replacements, thus enhancing customer satisfaction. 
Lastly, electronics engineering contributes to the marketing of smart materials through innovative user interfaces and intuitive control mechanisms. By designing user-friendly interfaces and integrating advanced control systems, smart materials become more accessible to a broader range of users. Clear and intuitive controls enhance the user experience and encourage wider adoption of smart materials in various industries. In conclusion, electronics engineering significantly influences the marketing of smart materials by enabling the design of sensing and actuating systems, wireless connectivity, efficient power management, and user-friendly interfaces. The integration of electronics engineering principles enhances the functionality, performance, and marketability of smart materials, making them more adaptable to the growing demand for innovative and connected materials in diverse industries.

Keywords: electronics engineering, smart materials, marketing, power management

Procedia PDF Downloads 48
949 Exploring Faculty Attitudes about Grades and Alternative Approaches to Grading: Pilot Study

Authors: Scott Snyder

Abstract:

Grading approaches in higher education have not changed meaningfully in over 100 years. While there is variation in the types of grades assigned across countries, most use approaches based on simple ordinal scales (e.g., letter grades). While grades are generally viewed as an indication of a student's performance, challenges arise regarding the clarity, validity, and reliability of letter grades. Research about grading in higher education has primarily focused on grade inflation, student attitudes toward grading, impacts of grades, and benefits of plus-minus letter grade systems. Little research is available about alternative approaches to grading, varying approaches used by faculty within and across colleges, and faculty attitudes toward grades and alternative approaches to grading. To begin to address these gaps, a survey was conducted of faculty in a sample of departments at three diverse colleges in a southeastern state in the US. The survey focused on faculty experiences with and attitudes toward grading, the degree to which faculty innovate in teaching and grading practices, and faculty interest in alternatives to the point-system approach to grading. Responses were received from 104 instructors (21% response rate). The majority reported that teaching accounted for 50% or more of their academic duties. Almost all respondents (92%) reported using point and percentage systems for their grading. While all respondents agreed that grades should reflect the degree to which objectives were mastered, half indicated that grades should also reflect effort or improvement. Over 60% felt that grades should be predictive of success in subsequent courses or real-life applications. Most respondents disagreed that grades should compare students to other students. About 42% worried about their own grade inflation and grade inflation in their college. 
Only 17% disagreed that grades mean different things based on the instructor, while 75% thought it would be good if there was agreement. Fewer than 50% of respondents felt that grades were directly useful for identifying students who should or should not continue, identifying strengths and weaknesses, predicting which students will be most successful, or contributing to program monitoring of student progress. Instructors were less willing to modify assessment than they were to modify instruction and curriculum. Most respondents (76%) were interested in learning about alternative approaches to grading (e.g., specifications grading). The factors most associated with willingness to adopt a new grading approach were clarity to students and simplicity of adoption. Follow-up studies are underway to investigate implementations of alternative grading approaches, expand the study to universities and departments not involved in the initial study, examine student attitudes about alternative approaches, and refine the survey's measure of attitude toward adoption of alternative grading practices. Workshops about the challenges of using percentage and point systems for determining grades, and workshops regarding alternative approaches to grading, are being offered.

Keywords: alternative approaches to grading, grades, higher education, letter grades

Procedia PDF Downloads 86
948 Ultra-Sensitive Point-Of-Care Detection of PSA Using an Enzyme- and Equipment-Free Microfluidic Platform

Authors: Ying Li, Rui Hu, Shizhen Chen, Xin Zhou, Yunhuang Yang

Abstract:

Prostate cancer is one of the leading causes of cancer-related death among men. Prostate-specific antigen (PSA), a specific product of prostatic epithelial cells, is an important indicator of prostate cancer. Though PSA is not a specific serum biomarker for the screening of prostate cancer, it is recognized as an indicator of prostate cancer recurrence and response to therapy for patients post-prostatectomy. Since radical prostatectomy eliminates the source of PSA production, serum PSA levels fall below 50 pg/mL and may be below the detection limit of clinical immunoassays (the current clinical immunoassay lower limit of detection is around 10 pg/mL). Many clinical studies have shown that intervention at low PSA levels was able to improve patient outcomes significantly. Therefore, ultra-sensitive and precise assays that can accurately quantify extremely low levels of PSA (below 1-10 pg/mL) will facilitate the assessment of patients for the possibility of early adjuvant or salvage treatment. Currently, the commercially available ultra-sensitive ELISA kits (not used clinically) can only reach a detection limit of 3-10 pg/mL. Other platforms developed by different research groups could achieve a detection limit as low as 0.33 pg/mL, but they relied on sophisticated instruments for the final readout. Herein we report a microfluidic platform for point-of-care (POC) detection of PSA with a detection limit of 0.5 pg/mL and without the assistance of any equipment. This platform is based on a previously reported volumetric bar-chart chip (V-Chip), which applies platinum nanoparticles (PtNPs) as the ELISA probe to convert the biomarker concentration to a volume of oxygen gas that pushes red ink to form a visualized bar chart. The length of each bar is used to quantify the biomarker concentration of each sample. In this work, we devised a long-reading-channel V-Chip (LV-Chip) to achieve a wide detection window. 
In addition, the LV-Chip employed a unique enzyme-free ELISA probe that enriched PtNPs significantly and exhibited 500-fold enhanced catalytic ability over that of the previous V-Chip, resulting in a significantly improved detection limit. The LV-Chip is able to complete a PSA assay for five samples in 20 min. The device was applied to detect PSA in 50 patient serum samples, and the on-chip results demonstrated good correlation with a conventional immunoassay. In addition, PSA levels in finger-prick whole blood samples from healthy volunteers were successfully measured on the device. This completely stand-alone LV-Chip platform enables convenient POC testing for patient follow-up in the physician's office and is also useful in resource-constrained settings.
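As a purely hypothetical illustration of how a bar-chart readout of the kind described above can be converted back to a concentration, the sketch below fits a log-linear calibration curve to invented standard readings and inverts it for an unknown sample. None of these numbers come from the LV-Chip itself; the standards, bar lengths, and log-linear form are assumptions for demonstration only.

```python
import numpy as np

# Mock calibration data: bar lengths measured for PSA standards.
std_conc = np.array([0.5, 2, 10, 50, 250])        # pg/mL standards (invented)
bar_len = np.array([3.1, 6.0, 9.4, 12.8, 16.1])   # mm bar readings (invented)

# Fit bar length as a linear function of log10(concentration).
slope, intercept = np.polyfit(np.log10(std_conc), bar_len, 1)

def quantify(length_mm):
    """Invert the log-linear calibration to estimate concentration."""
    return 10 ** ((length_mm - intercept) / slope)

print(f"estimated PSA: {quantify(8.0):.1f} pg/mL")
```

The same inversion logic applies to any visual readout that varies monotonically with analyte concentration; only the fitted functional form would change.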

Keywords: point-of-care detection, microfluidics, PSA, ultra-sensitive

Procedia PDF Downloads 97
947 Electromagnetic-Mechanical Stimulation on PC12 for Enhancement of Nerve Axonal Extension

Authors: E. Nakamachi, K. Matsumoto, K. Yamamoto, Y. Morita, H. Sakamoto

Abstract:

Recently, electromagnetic and mechanical stimulation have been recognized as effective extracellular environment stimulation techniques to enhance the regeneration of damaged peripheral nerve tissue. In this study, we developed a new hybrid bioreactor combining 50 Hz uniform alternating current (AC) magnetic stimulation with 4% strain mechanical stimulation. The guide tube for nerve regeneration is a mesh-structured tube made of a biodegradable polymer, such as polylactic acid (PLA). However, when neural damage is large, there is a possibility that the peripheral nerve undergoes necrosis, so it is quite important to accelerate nerve tissue regeneration by enhancing the nerve axonal extension rate. We therefore designed and fabricated a system that can simultaneously apply uniform AC magnetic field stimulation and stretch stimulation to cells to enhance nerve axonal extension, and evaluated the system's performance and the effectiveness of each stimulation for rat adrenal pheochromocytoma cells (PC12). First, we designed and fabricated the uniform AC magnetic field system and the stretch stimulation system. For the AC magnetic stimulation system, we focused on the use of a pole piece structure to allow in-situ microscopic observation. We designed an optimum pole piece structure using magnetic field finite element analyses and the response surface methodology, and fabricated the uniform AC magnetic field stimulation system as a bioreactor by adopting the analytically determined design specifications. We measured the magnetic flux density generated by the system and confirmed that the measured values show good agreement with the analytical results, with a uniform magnetic field observed. Second, we fabricated the cyclic stretch stimulation device under particular strain conditions, with a chamber made of polyoxymethylene (POM). 
We measured strains in the PC12 cell culture region to confirm strain uniformity and found values slightly different from the target strain; we concluded that these differences were allowable for this mechanical stimulation system. We then evaluated the effectiveness of each stimulation in enhancing nerve axonal extension using PC12 cells. We confirmed that the average axonal extension length of PC12 under uniform AC magnetic stimulation was increased by 16% at 96 h in our bioreactor. We could not confirm axonal extension enhancement under the stretch stimulation condition alone, where we observed detachment of cells. The hybrid stimulation, however, enhanced axonal extension, because the magnetic stimulation inhibits the detachment of cells. We therefore concluded that the enhancement of PC12 axonal extension is due to the magnetic stimulation rather than the mechanical stimulation, and confirmed the effectiveness of uniform AC magnetic field stimulation for nerve axonal extension using PC12 cells.

Keywords: nerve cell PC12, axonal extension, nerve regeneration, electromagnetic-mechanical stimulation, bioreactor

Procedia PDF Downloads 256
946 A Study on Relationship between Firm Managers Environmental Attitudes and Environment-Friendly Practices for Textile Firms in India

Authors: Anupriya Sharma, Sapna Narula

Abstract:

Over the past decade, sustainability has gone mainstream, as more people are worried about environment-related issues than ever before. These issues are of even more concern for industries that leave a significant impact on the environment. Following these ecological issues, corporates are beginning to comprehend the impact on their business. Many initiatives have been made to address these emerging issues in the consumer-driven textile industry. Demand from customers, local communities, government regulations, etc., are considered some of the major factors affecting environmental decision-making. Research also shows that motivations to go green are inevitably determined by the way top managers perceive environmental issues, as managers' personal values and ethical commitment act as a motivating factor towards corporate social responsibility. Little empirical research has been conducted to examine the relationship between top managers' personal environmental attitudes and corporate environmental behaviours in the Indian textile industry. The primary purpose of this study is to determine the current state of environmental management in the textile industry and whether the attitudes of textile firms' top managers are significantly related to firms' responses to environmental issues and their perceived benefits of environmental management. To achieve these objectives, the authors used a structured questionnaire based on a literature review. The questionnaire consisted of six sections with a total length of eight pages. The first section collected background information on the position of the respondents in the organization, annual turnover, year of the firm's establishment and so on. The other five sections covered drivers, attitude and awareness, sustainable business practices, barriers to implementation, and benefits achieved. 
To test the questionnaire, a pretest was conducted with professionals working in corporate sustainability who had knowledge of the textile industry; the questionnaire was then mailed to various stakeholders involved in textile production, covering firms' top manufacturing officers, EHS managers, textile engineers, HR personnel and R&D managers. The results of the study showed that most textile firms were implementing some type of environmental management practice, even though the magnitude of each firm's involvement varied. The results also show that textile firms with a higher level of involvement in environmental management were more involved in process-driven technical environmental practices. The study also identified that firms' top managers' environmental attitudes were correlated with the perceived advantages of environmental management, as top managers are the ones who possess the managerial discretion to formulate and decide business policies such as environmental initiatives.

Keywords: attitude and awareness, Environmental management, sustainability, textile industry

Procedia PDF Downloads 222
945 Well-Defined Polypeptides: Synthesis and Selective Attachment of Poly(ethylene glycol) Functionalities

Authors: Cristina Lavilla, Andreas Heise

Abstract:

The synthesis of sequence-controlled polymers has received increasing attention in recent years. Well-defined polyacrylates, polyacrylamides and styrene-maleimide copolymers have been synthesized by sequential or kinetic addition of comonomers. However, this approach has not yet been introduced to the synthesis of polypeptides, which are in fact polymers developed by nature in a sequence-controlled way. Polypeptides are natural materials that possess the ability to self-assemble into complex and highly ordered structures. Their folding and properties arise from precisely controlled sequences and compositions of their constituent amino acid monomers. So far, solid-phase peptide synthesis is the only technique that allows the preparation of short peptide sequences with excellent sequence control, but it requires extensive protection/deprotection steps and is difficult to scale up. A new strategy towards sequence control in the synthesis of polypeptides is introduced, based on the sequential addition of α-amino acid N-carboxyanhydrides (NCAs). The living ring-opening process is conducted to full conversion, and no purification or deprotection is needed before the addition of a new amino acid. The length of every block is predefined by the NCA:initiator ratio in every step. This method yields polypeptides with a specific sequence and controlled molecular weights. A series of polypeptides with varying block sequences have been synthesized with the aim of identifying structure-property relationships. All of them are able to adopt secondary structures similar to natural polypeptides, and display properties in the solid state and in solution that are characteristic of the primary structure. By design, the prepared polypeptides allow selective modification of individual block sequences, which has been exploited to introduce functionalities at defined positions along the polypeptide chain. 
Poly(ethylene glycol) (PEG) was the functionality chosen, as it is known to enhance hydrophilicity and to yield thermoresponsive materials. After PEGylation, the hydrophilicity of the polypeptides is enhanced, and their thermal response in H2O has been studied. Noteworthy differences were found in the behaviour of polypeptides having different sequences. Circular dichroism measurements confirmed that the α-helical conformation is stable over the examined temperature range (5-90 °C). It is concluded that the PEG units are mainly responsible for the changes in H-bonding interactions with H2O upon variation of temperature, and that the position of these functional units along the backbone is a factor of utmost importance in the resulting properties of the α-helical polypeptides.

Keywords: α-amino acid N-carboxyanhydrides, multiblock copolymers, poly(ethylene glycol), polypeptides, ring-opening polymerization, sequence control

Procedia PDF Downloads 183
944 The Effectiveness of an Educational Program on Awareness of Cancer Signs, Symptoms, and Risk Factors among School Students in Oman

Authors: Khadija Al-Hosni, Moon Fai Chan, Mohammed Al-Azri

Abstract:

Background: Several studies suggest that most school-age adolescents are poorly informed about cancer warning signs and risk factors. Providing adolescents with sufficient knowledge would increase their awareness in adulthood and improve help-seeking behaviours later. Significance: The results will provide a clear vision to assist key decision-makers in formulating policies on student awareness programs towards cancer, increasing the likelihood of avoiding cancer in the future or even promoting early diagnosis. Objectives: To evaluate the effectiveness of an education program designed to increase awareness of cancer signs, symptoms, and risk factors, to improve help-seeking behaviour among school students in Oman, and to address the barriers to obtaining medical help. Methods: A randomized controlled trial with two groups was conducted in Oman with a total of 1716 students (n = 886 control, n = 830 education), aged 15-17 years, in 10th and 11th grade, from 12 governmental schools in 3 governorates, from 20 February 2022 to 12 May 2022. Basic demographic data were collected, and the Cancer Awareness Measure (CAM) was used as the primary outcome. Data were collected at baseline (T0) and 4 weeks after (T1). The intervention group received an education program about the causes of cancer and its signs and symptoms, while the control group did not receive any education related to this issue during the study period. Non-parametric tests were used to compare the outcomes between groups. Results: At T0, an unexplained lump was the most recognized cancer warning sign in both the control (55.0%) and intervention (55.2%) groups. However, there were no significant changes at T1 for any sign in the control group. In contrast, all sign outcomes improved significantly (p<0.001) in the intervention group; the highest response was unexplained pain (93.3%). Smoking was the most recognized risk factor in both groups (82.8% control; 84.1% intervention) at T0. 
However, there was no significant change at T1 for the control group, but there was for the intervention group (p<0.001); the most identified risk factor was smoking cigarettes (96.5%). Being too scared was the largest barrier to seeking medical help reported by students in the control group at T0 (63.0%) and T1 (62.8%), and there were no significant changes in any barriers in this group. In the intervention group, being too embarrassed (60.2%) was the largest barrier at T0, and being too scared (58.6%) at T1. Although there were reductions in all barriers, significant differences were found in only six of ten (p<0.001). Conclusion: The intervention was effective in improving students' awareness of cancer symptoms and warning signs (p<0.001) and risk factors (p<0.001), and reduced the most commonly reported barriers to seeking medical help (p<0.001) in comparison to the control group. The Ministry of Education in Oman could integrate cancer awareness within the curriculum, and more interventions are needed on the sociological side to overcome the barriers that interfere with seeking medical help.

Keywords: adolescents, awareness, cancer, education, intervention, student

Procedia PDF Downloads 73
943 Dual-use UAVs in Armed Conflicts: Opportunities and Risks for Cyber and Electronic Warfare

Authors: Piret Pernik

Abstract:

Based on strategic, operational, and technical analysis of the ongoing armed conflict in Ukraine, this paper examines the opportunities and risks of using small commercial drones (dual-use unmanned aerial vehicles, UAVs) for military purposes. The paper discusses the opportunities and risks in the information domain, encompassing both cyber and electromagnetic interference and attacks, and draws conclusions on the possible strategic impact of the widespread use of dual-use UAVs on battlefield outcomes in modern armed conflicts. This article contributes to filling a gap in the literature by examining, based on empirical data, cyberattacks and electromagnetic interference. Today, more than one hundred states and non-state actors possess UAVs, ranging from low-cost commodity models, many of them dual-use and widely available and affordable to anyone, to high-cost combat UAVs (UCAVs) with lethal kinetic strike capabilities, which can be enhanced with Artificial Intelligence (AI) and Machine Learning (ML). Dual-use UAVs have been used by various actors for intelligence, reconnaissance, surveillance, situational awareness, geolocation, and kinetic targeting. Thus, they function as force multipliers enabling kinetic and electronic warfare attacks, and provide comparative and asymmetric operational and tactical advantages. Some go as far as to argue that automated (or semi-automated) systems can change the character of warfare, while others observe that the use of small drones has not changed the balance of power or battlefield outcomes. UAVs offer considerable opportunities for commanders: for example, they can be operated without GPS navigation, which makes them less vulnerable to, and less dependent on, satellite communications. They can and have been used to conduct cyberattacks, electromagnetic interference, and kinetic attacks. However, they are highly vulnerable to those attacks themselves. 
So far, strategic studies, literature, and expert commentary have overlooked the cybersecurity and electronic interference dimensions of the use of dual-use UAVs; studies that link technical analysis of opportunities and risks with strategic battlefield outcomes are missing. It is expected that dual-use commercial UAV proliferation in armed and hybrid conflicts will continue and accelerate in the future. Therefore, it is important to understand the specific opportunities and risks related to the crowdsourced use of dual-use UAVs, which can have kinetic effects. Technical countermeasures to protect UAVs differ depending on the type of UAV (small, midsize, large, stealth combat), and this paper offers a unique analysis of small UAVs from the view of both opportunities and risks for commanders and other actors in armed conflict.

Keywords: dual-use technology, cyber attacks, electromagnetic warfare, case studies of cyberattacks in armed conflicts

Procedia PDF Downloads 87
942 Increasing The Role of Civil Society through LAPOR!: National Complaint Handling System in Indonesia

Authors: Izzati Nabiyla Risfa

Abstract:

The role of civil society has become an important issue at the national and international levels. Governments all over the world have started to realize that the involvement of civil society can improve public services and policy making. The Global Policy Forum states that there are five good reasons for civil society to be engaged in global governance: (1) conferring legitimacy on policy decisions; (2) increasing the pool of policy ideas; (3) supporting less powerful governments; (4) countering a lack of political will; and (5) helping states to put nationalism aside. Indonesia keeps up with this trend. In November 2011, the Indonesian Government set up LAPOR! (meaning "to report" in Indonesian), an online portal for complaints about public services, accessible through its website lapor.ukp.go.id, through social media (Twitter, Facebook), and by text message. The program is a government initiative to provide an integrated and accessible portal for the Indonesian public to submit complaints and inquiries as a means of enhancing public participation in national development programs. LAPOR! aims to catalyze public participation and to provide a more coordinated national complaint handling mechanism, with the goal of increasing the role of civil society in order to develop better public services. LAPOR! works in the simplest way possible: the public can submit any complaint or report any problem concerning development programs and public services through the website, by short message service to 1708, or through mobile applications for BlackBerry and Android. LAPOR! then transfers every validated input to the relevant institution to be featured and responded to on the website. LAPOR! is now integrated with 81 ministries, 5 local governments, and 44 state-owned enterprises. 
The public can also comment on, like, or share reports through Facebook and Twitter to foster discussion and to ensure the completeness of the reports. LAPOR! has unexpectedly contributed to various successful cases concerning public services. So far the portal has over 280,704 registered users and receives an average of 1,000 reports every day. The government's response rate has increased over time, with 81% of complaints and inquiries solved or under investigation. This paper will examine the effectiveness of LAPOR! as a tool to increase the role of civil society in order to develop better public services in Indonesia. Besides this promising story, various difficulties still need to be solved. Using a qualitative approach as the methodology for this research, the writers will also explore potential improvements to LAPOR! so that it can perform effectively as the leading national complaint handling system in Indonesia.

Keywords: civil society, government, Indonesia, public services

Procedia PDF Downloads 474
941 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization

Authors: Aitor Bilbao, Dragos Axinte, John Billingham

Abstract:

The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), needed to arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting tool geometry is well defined and material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam of particular characteristics (e.g. energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the etched material vary with the dwell time, but any acceleration/deceleration of the machine/beam delivery system when performing raster paths will influence the actual geometry of the surface to be generated. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered independent technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process in which the etched material depends on the feed speed of the jet at each instant during the process. PLA processes, on the other hand, are usually described as discrete systems in which the total removed material is calculated by summation over the individual pulses shot during the process. The overlapping of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, consecutive shots are close enough that the behaviour approaches that of a continuous process. Using this approximation, a generic continuous model can be described for both processes. 
The inverse problem is usually solved for this kind of process by simply setting the dwell time in proportion to the required depth of milling at each pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved here by using a discrete adjoint optimization algorithm. Moreover, the calculation of the Jacobian matrix consumes less computation time than finite-difference approaches. The influence of the dynamics of the machine on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
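The contrast between the naive linear dwell-time inversion and an adjoint-style correction can be illustrated on a 1-D toy problem. This sketch assumes a Gaussian beam footprint and illustrative parameters of our own choosing; it is not the authors' AWJM/PLA model:

```python
import math

N = 40
xs = [0.1 * i for i in range(N)]   # pixel positions along the raster line
sigma = 0.15                       # assumed Gaussian beam radius

def footprint(dx):
    """Assumed Gaussian energy footprint of the beam."""
    return math.exp(-dx * dx / (2.0 * sigma * sigma))

def etch(t):
    """Forward model: depth is the dwell time convolved with the footprint."""
    return [sum(t[j] * footprint(xs[i] - xs[j]) for j in range(N)) for i in range(N)]

def loss(t):
    """Sum of squared depth residuals against the target surface."""
    return sum((d - g) ** 2 for d, g in zip(etch(t), target))

# Target: a flat pocket of unit depth between x = 1 and x = 3.
target = [1.0 if 1.0 <= x <= 3.0 else 0.0 for x in xs]

# Naive linear inversion: dwell time proportional to the required depth.
unit_depth = sum(footprint(xs[N // 2] - xs[j]) for j in range(N))
t0 = [g / unit_depth for g in target]

# Adjoint-style correction: back-propagate the residual through the
# (symmetric) convolution and descend, keeping dwell times non-negative.
t = list(t0)
for _ in range(100):
    r = [d - g for d, g in zip(etch(t), target)]
    grad = [sum(r[i] * footprint(xs[i] - xs[j]) for i in range(N)) for j in range(N)]
    t = [max(0.0, tj - 0.05 * gj) for tj, gj in zip(t, grad)]
```

The naive solution already reproduces the flat interior of the pocket, but overshoots near the edges, where the footprint overlap matters; the gradient iterations reduce that residual.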

Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation

Procedia PDF Downloads 267
940 Nuclear Materials and Nuclear Security in India: A Brief Overview

Authors: Debalina Ghoshal

Abstract:

Nuclear security is the ‘prevention and detection of, and response to, unauthorised removal, sabotage, unauthorised access, illegal transfer or other malicious acts involving nuclear or radiological material or their associated facilities.’ Ever since the end of the Cold War, the security of nuclear materials has remained a concern for global security, and with the increase in terrorist attacks, in India especially, it remains a priority. India has therefore made continued efforts to tighten security over its nuclear materials to prevent nuclear theft and radiological terrorism. Nuclear security is distinct from nuclear safety. Physical security is also a serious concern, and India has been careful about the physical security of its nuclear materials. This is all the more important since India is expanding its nuclear power capability to generate electricity for economic development. As India targets 60,000 MW of electricity production by 2030, it has a range of reactors to help it achieve this goal: indigenous Pressurised Heavy Water Reactors, now standardized at 700 MW per reactor; Light Water Reactors; and the indigenous Fast Breeder Reactors that can generate more fuel for the future and enable the country to utilise its abundant thorium resource. Nuclear materials security can be enhanced in two important ways. One is through proliferation-resistant technologies and diplomatic efforts toward non-proliferation initiatives. The other is by developing technical means to prevent any leakage of nuclear materials into the hands of asymmetric organisations. New Delhi has already implemented IAEA Safeguards on its civilian nuclear installations. Moreover, India has ratified the IAEA Additional Protocol in order to enhance the transparency of its nuclear material and strengthen nuclear security. 
India is a party to the IAEA conventions on nuclear safety and security, in particular the 1980 Convention on the Physical Protection of Nuclear Material and its 2005 amendment, and the 2006 Code of Conduct on the Safety and Security of Radioactive Sources, which enable the country to provide for the highest international standards of nuclear and radiological safety and security. India's nuclear security approach is driven by five key components: governance, nuclear security practice and culture, institutions, technology, and international cooperation. However, there is still scope for further improvement in nuclear materials and nuclear security. According to the NTI Report, ‘India’s improvement reflects its first contribution to the IAEA Nuclear Security Fund etc. In the future, India’s nuclear materials security conditions could be further improved by strengthening its laws and regulations for security and control of materials, particularly for control and accounting of materials, mitigating the insider threat, and for the physical security of materials during transport. India’s nuclear materials security conditions also remain adversely affected due to its continued increase in its quantities of nuclear material, and high levels of corruption among public officials.’ This paper briefly studies the progress India has made in nuclear and nuclear materials security and the steps ahead for India to strengthen it further.

Keywords: India, nuclear security, nuclear materials, non proliferation

Procedia PDF Downloads 337
939 Internationalization Process Model for Construction Firms: Stages and Strategies

Authors: S. Ping Ho, R. Dahal

Abstract:

The global economy has drastically changed how firms operate and compete. Although the construction industry is ‘local’ by nature, the internationalization of the construction industry has become an inevitable reality. As a result of global competition, staying domestic is no longer safe from competition; on the contrary, growing into an MNE (multi-national enterprise) has become one of the important strategies for a firm to survive in global competition. For successful entry into competing markets, firms need to re-define their competitive advantages and re-identify the sources of those advantages. A firm’s initiation of internationalization is not necessarily a result of strategic planning; it can also involve idiosyncratic events that pave the path leading to internationalization. For example, a local firm’s incidental or unintentional collaboration with an MNE can become the initiating point of its internationalization process. However, because of the intensive competition in today’s global market, many firms are compelled to initiate internationalization as a strategic response to the competition. Understanding where a firm stands in the process of internationalization, and appropriately implementing strategies at each stage, leads construction firms on a successful internationalization journey. This study develops a model of the internationalization process from which appropriate strategies that construction firms can implement at each stage are derived. The proposed model integrates two major and complementary views of internationalization and expresses the dynamic process of internationalization in three stages: the pre-international (PRE) stage, the foreign direct investment (FDI) stage, and the multi-national enterprise (MNE) stage. 
The strategies implied in the proposed model are derived with a focus on capability building, market locations, and entry modes, based on the resource-based view criteria of value, rareness, inimitability, and non-substitutability (VRIN). Construction firms willing to expand their markets can benefit from the proposed dynamic process model. Strategies for internationalization, such as core competence strategy, market selection, partner selection, and entry mode strategy, can be derived from the proposed model. The internationalization process is expressed in two different forms. First, we discuss the construction internationalization process, identify the driving factor(s) of the process, and explain strategy formation within it. Second, we define the stages of internationalization along the process and the corresponding strategies in each stage. The strategies may include how to exploit existing advantages for competition at the current stage and how to develop or explore additional advantages appropriate for the next stage. In particular, the additionally developed advantages accumulate and drive forward the firm’s stage of internationalization, which in turn determines the subsequent strategies, and so on, spiraling up to higher degrees of internationalization. However, the formation of additional strategies for the next stage does not happen automatically; the strategy evolution depends on the firm’s dynamic capabilities.

Keywords: construction industry, dynamic capabilities, internationalization process, internationalization strategies, strategic management

Procedia PDF Downloads 47
938 Turkish Validation of the Nursing Outcomes for Urinary Incontinence and Their Sensitivities on Nursing Interventions

Authors: Dercan Gencbas, Hatice Bebis, Sue Moorhead

Abstract:

In the nursing process, many nursing classification systems were created for international use, among them NANDA-I, the Nursing Outcomes Classification (NOC), and the Nursing Interventions Classification (NIC). Accordingly, the main objective of this study is to establish a model for caregivers in hospitals and communities in Turkey and to ensure that nursing outcomes are assessed by NOC-based measures. Although there are many scales to measure Urinary Incontinence (UI), which is very common in children, in old age, and after vaginal birth, the NOC scales are ideal for use in the nursing process for a comprehensive and holistic assessment. For this reason, the purpose of this study is to evaluate the validity of the NOC outcomes and indicators used for the UI NANDA-I diagnoses. This research is a methodological study. In addition to the validity of the scale indicators, experts assessed how much each indicator would contribute to recovery after a nursing intervention. Content validity was applied and calculated according to Fehring's 1987 work model, by which expert inclusion criteria and scores were determined: at least four years of clinical experience scored 4 points; at least one year of experience with the nursing classification system, 1 point; a publication about nursing classification, 1 point; a doctoral degree in nursing, 2 points; and a master's degree, 1 point. A total of 55 experts qualified under Fehring's “senior degree” criterion with a score of 90 according to this scoring. Experts were then asked to what extent the indicators would contribute to recovery under the nursing interventions to be applied. For content validity tailored to Fehring's model, each expert scored every NOC outcome and indicator on a scale from 1 to 5, where 1 = not important and 5 = very important. 
After the expert ratings, the weighted scores obtained for each NOC outcome and indicator were classified as critical (≥ 0.8), supplemental (0.5 to < 0.8), or excluded (< 0.5). In the NANDA-I/NOC/NIC system (guideline), five NOC outcomes are proposed for the nursing diagnoses of UI: Urinary Continence, Urinary Elimination, Tissue Integrity, Self-Care: Toileting, and Medication Response. After the scales were translated into Turkish, the weighted average of the expert scores for the content coverage of all five NOC outcomes, and for the contribution of nursing interventions, exceeded 0.8. Of the 82 indicators, 79 were calculated as critical and 3 as supplemental; since no score below 0.5 was obtained, no indicator was removed. All NOC outcomes were identified as valid and usable scales in Turkey. In this study, five NOC outcomes were thus verified for evaluating the outcomes of individuals receiving nursing care for UI and its variant types. Nurses in Turkey can benefit from the NOC outcome scales in the care of elderly individuals with incontinence.
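The weighting and classification described above can be sketched as follows. This assumes the commonly used Fehring convention that a 1-5 rating r maps to a weight of (r - 1)/4, and the indicator names and ratings are hypothetical, not the study's data:

```python
def indicator_weight(ratings):
    """Mean weighted score for one indicator: a rating r (1-5) maps to (r - 1) / 4."""
    return sum((r - 1) / 4 for r in ratings) / len(ratings)

def classify(score):
    """Cut-offs as in the study: >= 0.8 critical, 0.5-0.8 supplemental, < 0.5 excluded."""
    if score >= 0.8:
        return "critical"
    if score >= 0.5:
        return "supplemental"
    return "excluded"

# Hypothetical expert ratings for two indicators.
ratings = {
    "urine free of leakage": [5, 5, 4, 5],
    "recognizes urge to void": [4, 3, 4, 5],
}
scores = {name: indicator_weight(r) for name, r in ratings.items()}
```

Under this scheme a unanimous panel of 5s yields a weight of 1.0, while a panel of all 1s yields 0.0, so the 0.8 and 0.5 cut-offs fall naturally on the same 0-1 scale.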

Keywords: nursing outcomes, content validity, nursing diagnosis, urinary incontinence

Procedia PDF Downloads 116
937 Medial Temporal Tau Predicts Memory Decline in Cognitively Unimpaired Elderly

Authors: Angela T. H. Kwan, Saman Arfaie, Joseph Therriault, Zahra Azizi, Firoza Z. Lussier, Cecile Tissot, Mira Chamoun, Gleb Bezgin, Stijn Servaes, Jenna Stevenon, Nesrine Rahmouni, Vanessa Pallen, Serge Gauthier, Pedro Rosa-Neto

Abstract:

Alzheimer’s disease (AD) can be detected in living people using in vivo biomarkers of amyloid-β (Aβ) and tau, even in the absence of cognitive impairment during the preclinical phase. [¹⁸F]-MK-6240 is a high-affinity positron emission tomography (PET) tracer that quantifies tau neurofibrillary tangles, but its ability to predict cognitive changes associated with early AD symptoms, such as memory decline, is unclear. Here, we assess the prognostic accuracy of baseline [¹⁸F]-MK-6240 tau PET for predicting longitudinal memory decline in asymptomatic elderly individuals. In a longitudinal observational study, we evaluated a cohort of cognitively normal elderly participants (n = 111) from the Translational Biomarkers in Aging and Dementia (TRIAD) study (data collected between October 2017 and July 2020, with a follow-up period of 12 months). All participants underwent tau PET with [¹⁸F]-MK-6240 and Aβ PET with [¹⁸F]-AZD-4694. The exclusion criteria included the presence of head trauma, stroke, or other neurological disorders. The 111 eligible participants were chosen based on the availability of Aβ PET, tau PET, magnetic resonance imaging (MRI), and APOEε4 genotyping. Among these participants, the mean (SD) age was 70.1 (8.6) years; 20 (18%) were tau PET positive, and 71 of 111 (63.9%) were women. A significant association between baseline Braak I-II [¹⁸F]-MK-6240 SUVR positivity and change in composite memory score was observed at the 12-month follow-up, after correcting for age, sex, and years of education (Logical Memory and RAVLT; standardized beta = -0.52 (-0.82 to -0.21), p < 0.001, for dichotomized tau PET, and -1.22 (-1.84 to -0.61), p < 0.0001, for continuous tau PET). 
Moderate cognitive decline was observed for A+T+ over the follow-up period, whereas no significant change was observed for A-T+, A+T-, and A-T-, though it should be noted that the A-T+ group was small. Our results indicate that baseline tau neurofibrillary tangle pathology is associated with longitudinal changes in memory function, supporting the use of [¹⁸F]-MK-6240 PET to predict the likelihood of asymptomatic elderly individuals experiencing future memory decline. Overall, [¹⁸F]-MK-6240 PET is a promising tool for predicting memory decline in older adults without cognitive impairment at baseline. This is of critical relevance as the field shifts towards a biological model of AD defined by the aggregation of pathologic tau. Early detection of tau pathology using [¹⁸F]-MK-6240 PET therefore offers hope that patients with AD may be diagnosed during the preclinical phase, before it is too late.

Keywords: alzheimer’s disease, braak I-II, in vivo biomarkers, memory, PET, tau

Procedia PDF Downloads 64
936 Controlling the Release of Cyt C and L- Dopa from pNIPAM-AAc Nanogel Based Systems

Authors: Sulalit Bandyopadhyay, Muhammad Awais Ashfaq Alvi, Anuvansh Sharma, Wilhelm R. Glomm

Abstract:

Release of drugs from nanogels and nanogel-based systems can occur under the influence of external stimuli such as temperature, pH, and magnetic fields. pNIPAm-AAc nanogels respond to the combined action of both temperature and pH, the former being mostly determined by hydrophilic-to-hydrophobic transitions above the volume phase transition temperature (VPTT), while the latter is controlled by the degree of protonation of the carboxylic acid groups. These nanogel-based systems are promising candidates in the field of drug delivery. Combining nanogels with magneto-plasmonic nanoparticles (NPs) introduces imaging and targeting modalities along with stimuli-response in one hybrid system, thereby incorporating multifunctionality. Fe@Au core-shell NPs possess an optical signature in the visible spectrum owing to the localized surface plasmon resonance (LSPR) of the Au shell, and superparamagnetic properties stemming from the Fe core. Although several synthesis methods exist to control the size and physico-chemical properties of pNIPAm-AAc nanogels, there is no comprehensive study of how the incorporation of one or more layers of NPs affects these nanogels. In addition, effective determination of the VPTT of the nanogels is a challenge which complicates their use in biological applications. Here, we have modified the swelling-collapse properties of pNIPAm-AAc nanogels by combining them with Fe@Au NPs using different solution-based methods. The hydrophilic-hydrophobic transition of the nanogels above the VPTT has been confirmed to be reversible. Further, an analytical method has been developed to deduce the average VPTT, which is found to be 37.3°C for the nanogels and 39.3°C for nanogel-coated Fe@Au NPs. An opposite swelling-collapse behaviour is observed for the latter, where the Fe@Au NPs act as bridge molecules pulling together the gelling units. 
Thereafter, Cyt C, a model protein drug, and L-Dopa, a drug used in the clinical treatment of Parkinson’s disease, were loaded separately into the nanogels and the nanogel-coated Fe@Au NPs using a modified breathing-in mechanism. This gave high loading and encapsulation efficiencies for both drugs (L-Dopa: ~9% and 70 µg/mg of nanogels; Cyt C: ~30% and 10 µg/mg of nanogels, respectively). The release kinetics of L-Dopa, monitored using UV-vis spectrophotometry, was observed to be rather slow (over several hours), with the highest release occurring under a combination of high temperature (above the VPTT) and acidic conditions. However, the release of L-Dopa from the nanogel-coated Fe@Au NPs was the fastest, accounting for release of almost 87% of the initially loaded drug in ~30 hours. The chemical structure of the drug, the drug incorporation method, the location of the drug, and the presence of Fe@Au NPs largely determine the drug release mechanism and kinetics of these nanogels and nanogel-coated Fe@Au NPs.

Keywords: controlled release, nanogels, volume phase transition temperature, l-dopa

Procedia PDF Downloads 315
935 Mitigating the Negative Health Effects from Stress - A Social Network Analysis

Authors: Jennifer A. Kowalkowski

Abstract:

Production agriculture (farming) is a physically, emotionally, and cognitively stressful occupation in which workers have little control over the stressors that impact both their work and their lives. In an occupation already rife with hazards, these occupational-related stressors have been shown to increase farm workers’ risks for illness, injury, disability, and death associated with their work. Despite efforts to mitigate the negative health effects of occupational-related stress (ORS) and to promote health and well-being (HWB) among farmers in the US, marked improvements have not been attained. Social support accessed through social networks has been shown to buffer against the negative health effects of stress, yet no studies have directly examined these relationships among farmers. The purpose of this study was to use social network analysis to explore the social networks of farm owner-operators and the social supports available to them for mitigating the negative health effects of ORS. A convenience sample of 71 farm owner-operators from a Midwestern county in the US completed and returned a mailed survey (55.5% response rate) that solicited information about their social networks related to ORS. Farmers reported an average of 2.4 individuals in their personal networks and higher levels of comfort discussing ORS with female network members. Farmers also identified few connections (3.4% density) and indicated low comfort with members of affiliation networks specific to ORS. Findings from this study highlighted that farmers accessed different social networks and resources for their personal HWB than for issues related to occupational (farm-related) health and safety. In addition, farmers’ social networks for personal HWB were smaller, with different relational characteristics, than those reported in studies of farmers’ social networks related to occupational health and safety. 
Collectively, these findings suggest that farmers conceptualize personal HWB differently than farm health and safety. Therefore, the research approaches and targets that guide occupational health and safety research may not be appropriate for studying farmers' personal HWB. Interventions and programming targeting ORS and HWB have largely been offered through the same platforms or mechanisms as occupational health and safety programs. This may be attributed to the significant overlap between the farm as a family business and as a place of residence, or to the fact that ORS stems from farm-related issues. However, these assumptions, translated to health research on farmers and farm families from the occupational health and safety literature, have not been directly studied or challenged. This may explain why past interventions have not been effective at improving health outcomes for farmers and farm families. A close examination of the findings from this study raises important questions for researchers who study agricultural health, and the findings have significant implications for future research agendas focused on addressing ORS, HWB, and health disparities for farmers and farm families.
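For reference, the whole-network density reported above (3.4%) is conventionally defined as the ratio of observed ties to possible ties; for an undirected network of N actors this is 2E / (N(N - 1)). A minimal sketch, with hypothetical actor and tie counts rather than the study's data:

```python
def network_density(n_nodes, n_edges):
    """Density of an undirected network: observed ties over possible ties."""
    if n_nodes < 2:
        return 0.0
    return 2 * n_edges / (n_nodes * (n_nodes - 1))

# Hypothetical affiliation network: 20 actors with 13 reported ties among them.
density = network_density(20, 13)  # about 0.068, i.e. 6.8%
```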

Keywords: agricultural health, occupational-related stress, social networks, well-being

Procedia PDF Downloads 95
934 Bis-Azlactone Based Biodegradable Poly(Ester Amide)s: Design, Synthesis and Study

Authors: Kobauri Sophio, Kantaria Tengiz, Tugushi David, Puiggali Jordi, Katsarava Ramaz

Abstract:

Biodegradable biomaterials (BB) are of high interest for numerous applications in modern medicine as resorbable surgical materials and drug delivery systems. Such materials can be cleared from the body after fulfilling their function, which excludes the need for surgical intervention for their removal. Among the most promising BB are amino acid based biodegradable poly(ester amide)s (PEAs), which are composed of naturally occurring α-amino acids and non-toxic building blocks such as fatty diols and dicarboxylic acids. The key bis-nucleophilic monomers for synthesizing the PEAs are diamine-diester monomers: di-p-toluenesulfonic acid salts of bis-(α-amino acid) alkylene diesters (TAADs), which form the PEAs after step-growth polymerization (polycondensation) with bis-electrophilic counterparts, activated diesters of dicarboxylic acids. The PEAs combine the advantages of the 'parent polymers', polyesters (PEs) and polyamides (PAs): the ability to biodegrade (PEs), and a high affinity with tissues and a wide range of desired mechanical properties (PAs). The scope of applications of the PEAs can be substantially expanded by their functionalization, e.g. through the incorporation of hydrophobic fragments into the polymeric backbones. Hydrophobically modified PEAs can form non-covalent adducts with various compounds, which makes them attractive as drug carriers. For hydrophobic modification of the PEAs, we selected the so-called 'Azlactone Method', based on the application of p-phenylene-bis-oxazolinones (bis-azlactones, BALs) as active bis-electrophilic monomers in step-growth polymerization with TAADs. Interaction of BALs with TAADs resulted in PEAs with low molecular weights (Mw 2,800-19,600 Da) and poor material properties. High-molecular-weight PEAs (Mw up to 100,000) with the desired material properties were synthesized after replacing a part of the BALs with an activated diester, di-p-nitrophenyl sebacate, or a part of the TAAD with an alkylenediamine, 1,6-hexamethylenediamine. 
The new hydrophobically modified PEAs were characterized by FTIR, NMR, GPC, and DSC. It was shown that after hydrophobic modification the PEAs retain their biodegradability (in vitro study catalyzed by α-chymotrypsin and lipase) and are of interest for constructing resorbable surgical and pharmaceutical devices, including drug-delivery containers such as microspheres. The new PEAs are insoluble in hydrophobic organic solvents such as chloroform or dichloromethane (they only swell), which allowed a new technology for fabricating microspheres to be elaborated.

Keywords: amino acids, biodegradable polymers, bis-azlactones, microspheres

Procedia PDF Downloads 165
933 Upper Jurassic Foraminiferal Assemblages and Palaeoceanographical Changes in the Central Part of the East European Platform

Authors: Clementine Colpaert, Boris L. Nikitenko

Abstract:

The Upper Jurassic foraminiferal assemblages of the East European Platform were investigated extensively through the 20th century for biostratigraphical and, to a lesser degree, palaeoecological and palaeobiogeographical purposes. During the Late Jurassic, the platform was a shallow epicontinental sea that extended from Tethys to the Arctic through the Pechora Sea and further toward the northeast into the West Siberian Sea. Foraminiferal assemblages of the Russian Sea were strongly affected by sea-level changes and were controlled by alternating Boreal to Peritethyan influences. The central part of the East European Platform displays very rich and diverse foraminiferal assemblages. Two sections have been analyzed: the Makar'yev Section in the Moscow Depression and the Gorodishi Section in the Ul'yanovsk Depression. Based on the evolution of the foraminiferal assemblages, the palaeoenvironment has been reconstructed and sea-level changes have been refined. The aim of this study is to understand palaeoceanographical changes throughout the Oxfordian-Kimmeridgian of the central part of the Russian Sea. The Oxfordian was characterized by a general transgressive event interrupted by small regressive phases. The platform was connected toward the south with Tethys and Peritethys. During the Middle Oxfordian, the opening of a pathway of warmer water from the North-Tethys region to the Boreal Realm favoured the migration of planktonic foraminifera and the appearance of new benthic taxa, associated with increased temperature and primary production. During the Late Oxfordian, colder water inputs associated with the microbenthic community crisis may be a response to the closure of this warm-water corridor and the disappearance of planktonic foraminifera. 
The microbenthic community crisis is probably due to the increased sedimentation rate in the transition from the maximum flooding surface to a second-order regressive event, which increased productivity and inputs of organic matter along with a sharp decrease of oxygen in the sediment. It was followed during the Early Kimmeridgian by a replacement of the foraminiferal assemblages. Almost all of the Kimmeridgian is characterized by an abundance of taxa in common with the Boreal and Subboreal Realms. Connections toward the south became dominant again after a small regressive event recorded during the Late Kimmeridgian, associated with an abundance of taxa in common with the Subboreal Realm and Peritethys, such as Crimea and Caucasus taxa. Foraminiferal assemblages of the East European Platform are strongly affected by palaeoecological changes and may provide a very good model for biofacies typification under Boreal and Subboreal environments. The East European Platform thus appears to be a key area for understanding large-scale Upper Jurassic palaeoceanographical changes, being connected with both Boreal and Peritethyan basins.

Keywords: foraminifera, palaeoceanography, palaeoecology, upper jurassic

Procedia PDF Downloads 232