946 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as confidence scores, tuning of false positives and negatives, and automated feedback. An initial approach using natural language processing techniques to extract features achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology on two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite enabled prediction of specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code using machine learning poses challenges such as high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
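The AST path-context idea can be sketched in a few lines. The following Python fragment is only an illustration: the paper works on Java and C++ ASTs, and true Code2Vec path-contexts run through the lowest common ancestor of two leaves rather than through the root, as this simplification does.

```python
import ast

def leaf_paths(tree):
    """Collect root-to-leaf sequences of AST node-type names."""
    paths = []

    def walk(node, prefix):
        label = type(node).__name__
        children = list(ast.iter_child_nodes(node))
        if not children:
            paths.append(prefix + [label])
        for child in children:
            walk(child, prefix + [label])

    walk(tree, [])
    return paths

def path_contexts(source):
    """Pair every two leaves with the node-type path joining them
    (simplified: the joining path here always passes through the root)."""
    paths = leaf_paths(ast.parse(source))
    contexts = []
    for i, left in enumerate(paths):
        for right in paths[i + 1:]:
            # (left leaf, connecting path, right leaf) triple
            contexts.append((left[-1],
                             "^".join(left[:-1] + right[-2::-1]),
                             right[-1]))
    return contexts
```

Each triple is then embedded and aggregated by the Code2Vec attention mechanism into a single code vector fed to the classifier.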
Procedia PDF Downloads 105
945 Surface Elevation Dynamics Assessment Using Digital Elevation Models, Light Detection and Ranging, GPS and Geospatial Information Science Analysis: Ecosystem Modelling Approach
Authors: Ali K. M. Al-Nasrawi, Uday A. Al-Hamdany, Sarah M. Hamylton, Brian G. Jones, Yasir M. Alyazichi
Abstract:
Surface elevation dynamics have always responded to disturbance regimes. Creating Digital Elevation Models (DEMs) to detect surface dynamics has led to the development of several methods, devices and data clouds. DEMs can provide accurate and quick results cost-efficiently, in comparison to traditional geomatics survey techniques. Nowadays, remote sensing datasets have become a primary source for creating DEMs, including LiDAR point clouds processed with GIS analytic tools. However, these data need to be tested for error detection and correction. This paper evaluates DEMs from different data sources over time for Apple Orchard Island, a coastal site in southeastern Australia, in order to detect surface dynamics. Subsequently, 30 chosen locations were examined in the field to test the error of the DEM surface detection using high-resolution global positioning systems (GPSs). Results show significant surface elevation changes on Apple Orchard Island. Accretion occurred over most of the island, while surface elevation loss due to erosion is limited to the northern and southern parts. Concurrently, a differential correction and validation method was applied to identify errors in the dataset. The resultant DEMs demonstrated a small error ratio (≤ 3%) for the gathered datasets when compared with the fieldwork survey using RTK-GPS. As modern modelling approaches need to become more effective and accurate, applying several tools to create different DEMs on a multi-temporal scale would allow predictions within practical time and cost frames, with more comprehensive coverage and greater accuracy. With a DEM technique adapted to the eco-geomorphic context, such insights into ecosystem dynamics at this coastal intertidal system would be valuable for assessing the accuracy of predicted eco-geomorphic risk for sustainable conservation management.
This framework for evaluating historical and current anthropogenic and environmental stressors on coastal surface elevation dynamics could be profitably applied worldwide.
Keywords: DEMs, eco-geomorphic-dynamic processes, geospatial information science, remote sensing, surface elevation changes
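The core surface-change computation behind such a multi-temporal comparison is a DEM of Difference: subtract co-registered grids and mask changes smaller than the detectable error. A minimal numpy sketch, where the grid values and the detection threshold are illustrative rather than the paper's data:

```python
import numpy as np

def dem_of_difference(dem_t1, dem_t2, min_detectable_change=0.1):
    """Elevation change between two co-registered DEM grids (same units),
    with changes below the minimum level of detection masked to zero."""
    dod = dem_t2 - dem_t1
    return np.where(np.abs(dod) >= min_detectable_change, dod, 0.0)
```

Cells with positive values then indicate accretion and negative values erosion, mirroring the island-wide pattern reported above.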
Procedia PDF Downloads 266
944 Personalized Infectious Disease Risk Prediction System: A Knowledge Model
Authors: Retno A. Vinarti, Lucy M. Hederman
Abstract:
This research describes a knowledge model for a system which gives personalized alerts to users about infectious disease risks in the context of weather, location and time. The knowledge model is based on established epidemiological concepts augmented by information gleaned from infection-related data repositories. Existing disease risk prediction research has focused more on utilizing raw historical data to yield seasonal patterns of infectious disease risk emergence. This research incorporates both data and epidemiological concepts gathered from the Atlas of Human Infectious Disease (AHID) and the Centers for Disease Control and Prevention (CDC) as the basis for reasoning about infectious disease risk prediction. Using the CommonKADS methodology, the disease risk prediction task is treated as an assignment (synthetic) task, proceeding from knowledge identification through specification and refinement to implementation. First, knowledge is gathered from AHID, primarily from the epidemiology and risk group chapters for each infectious disease. The result of this stage is five major elements (Person, Infectious Disease, Weather, Location and Time) and their properties. At the knowledge specification stage, the initial tree model of each element and detailed relationships are produced. This research also includes a validation step as part of knowledge refinement: on the basis that the best model is formed using the most common features, Frequency-based Selection (FBS) is applied. The portion of the infectious disease risk model relating to Person comes out strongest, with Location next, and Weather weaker. For the Person attribute, Age is the strongest, Activity and Habits are moderate, and Blood type is weakest. For the Location attribute, the General category (e.g. continents, region, country, and island) comes out much stronger than the Specific category (i.e. terrain feature). For the Weather attribute, the Less Precise category (i.e. season) comes out stronger than the Precise category (i.e. exact temperature or humidity interval).
However, given that some infectious diseases are significantly more serious than others, a frequency-based metric may not be appropriate. Future work will incorporate epidemiological measurements of disease seriousness (e.g. odds ratio, hazard ratio and fatality rate) into the validation metrics. This research is limited to modelling existing knowledge about epidemiology and chain-of-infection concepts. A further step, verification in the knowledge refinement stage, might cause some minor changes in the shape of the tree.
Keywords: epidemiology, knowledge modelling, infectious disease, prediction, risk
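Frequency-based Selection as described above can be read as keeping the attributes that recur across disease descriptions. A hedged Python sketch under that reading; the 0.5 threshold and the example attribute names are invented for illustration, not taken from AHID:

```python
from collections import Counter

def frequency_based_selection(disease_records, threshold=0.5):
    """Keep attributes appearing in at least `threshold` of the records.

    Each record is the set of attributes mentioned for one disease."""
    counts = Counter(attr for record in disease_records for attr in record)
    n = len(disease_records)
    return {attr for attr, c in counts.items() if c / n >= threshold}
```

A seriousness-weighted variant (the future work above) would replace the raw count with a sum of per-disease weights such as fatality rates.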
Procedia PDF Downloads 241
943 Morphological and Property Rights Control of Plot Pattern in Urban Regeneration: Case Inspiration from Germany and the United States
Abstract:
As a morphological element reflecting the land property rights structure, the plot pattern plays a crucial role in shaping the form and quality of the built environment. Therefore, it is one of the core control elements of urban regeneration. As China's urban development mode shifts from growth-based development to urban regeneration, it is urgent to explore a more refined approach to planning control of the plot pattern, which would further promote the optimization of urban form and land property structure. European and American countries such as Germany and the United States began to deal with the planning control of plot patterns in urban regeneration earlier and have established relatively mature methods and mechanisms. This paper therefore summarizes two typical scenarios of plot pattern regeneration in old cities in China. The first is "limited-scale plot pattern rezoning", which mainly deals with demolish-and-rebuild regeneration; the focus of its control is establishing an adaptive plot pattern rezoning methodology and mechanism. The second is "localized parcel regeneration under existing property rights", which mainly deals with alteration-and-addition renewal; its control focuses on establishing control rules for individual plot regeneration. For these two typical plot pattern regeneration scenarios, Germany (Berlin) and the United States (New York) are selected as two international cases of reference significance, and a framework of plot pattern form and property rights control elements for urban regeneration is established along four dimensions, namely the overall operation mode, form control methods, property rights control methods, and effective implementation prerequisites, so as to compare and analyze the plot pattern control methods of the two countries under different land systems and regeneration backgrounds.
Among them, the German construction planning system has developed a relatively complete technical methodology for block-scale rezoning, which, together with overall urban design, created a practical example in the critical redevelopment of the inner city of Berlin. In the United States (New York), the zoning method establishes fine-grained zoning regulations and rules for adjusting development rights based on the morphological indicators of plots, so as to realize effective control over the regeneration of local plots under the existing property rights pattern. On the basis of this international experience, we put forward proposals for plot pattern and property rights control in the organic regeneration of old cities in China.
Keywords: plot pattern, urban regeneration, urban morphology, property rights, regulatory planning
Procedia PDF Downloads 42
942 Equity, Bonds, Institutional Debt and Economic Growth: Evidence from South Africa
Authors: Ashenafi Beyene Fanta, Daniel Makina
Abstract:
Economic theory predicts that finance promotes economic growth. Although the finance-growth link is among the most researched areas in financial economics, our understanding of the link between the two is still incomplete. This is caused by, among other things, wrong econometric specifications, the use of weak proxies of financial development, and the inability to address the endogeneity problem. Studies on the finance-growth link in South Africa consistently report economic growth driving financial development: early studies found that economic growth drives financial development in South Africa, and recent studies have confirmed this using different econometric models. However, the monetary aggregate (i.e. M2) used in these studies is considered a weak proxy for financial development. Furthermore, the fact that the models employed do not address the endogeneity problem in the finance-growth link casts doubt on the validity of the conclusions. For this reason, the current study examines the finance-growth link in South Africa using data for the period 1990 to 2011, employing a generalized method of moments (GMM) technique that is capable of addressing endogeneity, simultaneity and omitted-variable bias problems. Unlike previous cross-country and country case studies that have used the same technique, our contribution is that we account for the development of bond markets and non-bank financial institutions rather than being limited to stock market and banking sector development. We find that bond market development affects economic growth in South Africa, and no similar effect is observed for the bank and non-bank financial intermediaries or the stock market. Our findings show that examination of individual elements of the financial system is important in understanding the unique effect of each on growth.
The observation that bond markets, rather than private credit and stock market development, promote economic growth in South Africa raises an intriguing question as to what unique roles bond markets play that the intermediaries and equity markets are unable to play. Crucially, our results support observations in the literature that using appropriate measures of financial development is critical for policy advice. They also support the suggestion that individual elements of the financial system need to be studied separately to consider their unique roles in advancing economic growth. We believe that our understanding of the channels through which bond markets contribute to growth would be a fertile ground for future research.
Keywords: bond market, finance, financial sector, growth
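The endogeneity problem that GMM addresses can be illustrated with the simplest moment-condition estimator. The sketch below is a generic just-identified IV-GMM on synthetic data, not the paper's specification or dataset:

```python
import numpy as np

def iv_gmm(y, X, Z):
    """Just-identified IV-GMM: solve the sample moment condition
    Z'(y - X b) = 0, giving b = (Z'X)^{-1} Z'y."""
    return np.linalg.solve(Z.T @ X, Z.T @ y)

# Synthetic example: x is endogenous (correlated with the error u),
# z is a valid instrument (correlated with x, uncorrelated with u).
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)
u = rng.normal(size=n)
x = z + 0.5 * u + rng.normal(size=n)
y = 2.0 * x + u                      # true coefficient is 2.0

b_ols = (x @ y) / (x @ x)            # biased upward by endogeneity
b_iv = iv_gmm(y, x[:, None], z[:, None])[0]
```

Here b_ols overstates the effect while b_iv recovers a value near 2.0; overidentified GMM generalizes this by optimally weighting more moment conditions than parameters.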
Procedia PDF Downloads 421
941 iPSCs More Effectively Differentiate into Neurons on PLA Scaffolds with High Adhesive Properties for Primary Neuronal Cells
Authors: Azieva A. M., Yastremsky E. V., Kirillova D. A., Patsaev T. D., Sharikov R. V., Kamyshinsky R. A., Lukanina K. I., Sharikova N. A., Grigoriev T. E., Vasiliev A. L.
Abstract:
The adhesive properties of scaffolds, which predominantly depend on the chemical and structural features of their surface, play the most important role in tissue engineering. The basic requirements for such scaffolds are biocompatibility, biodegradability, and high cell adhesion, which promotes cell proliferation and differentiation. In many cases, synthetic polymer scaffolds have proven advantageous because they are easy to shape, tough, and have high tensile properties. The regeneration of nerve tissue remains a big challenge for medicine, and neural stem cells offer promising therapeutic potential for cell replacement therapy. However, experiments with stem cells have their limitations, such as low cell viability and poor control of cell differentiation, whereas the study of already differentiated neuronal cell cultures obtained from newborn mouse brain is limited only to cell adhesion. The growth and implantation of neuronal cultures require proper scaffolds, and polymer scaffold implants with neuronal cells may demand specific morphology. To date, numerous synthetic polymers have been proposed for these purposes, including polystyrene, polylactic acid (PLA), polyglycolic acid, and polylactide-glycolic acid. Tissue regeneration experiments demonstrated good biocompatibility of PLA scaffolds, despite the hydrophobic nature of the compound. The poor wettability of the PLA scaffold surface can be overcome in several ways: the surface can be pre-treated with poly-D-lysine or polyethyleneimine peptides; the roughness and hydrophilicity of the PLA surface can be increased by plasma treatment; or PLA can be combined with natural fibers, such as collagen or chitosan.
This work presents a study of the adhesion of both induced pluripotent stem cells (iPSCs) and mouse primary neuronal cell cultures on polylactide scaffolds of various types: oriented and non-oriented fibrous nonwoven materials and sponges, with and without plasma treatment, and composites with collagen and chitosan. To evaluate the effect of different types of PLA scaffolds on the neuronal differentiation of iPSCs, we assess the expression of NeuN in differentiated cells through immunostaining. We find that iPSCs more effectively differentiate into neurons on PLA scaffolds with high adhesive properties for primary neuronal cells.
Keywords: PLA scaffold, neurons, neuronal differentiation, stem cells, polylactide
Procedia PDF Downloads 83
940 On Grammatical Metaphors: A Corpus-Based Reflection on the Academic Texts Written in the Field of Environmental Management
Authors: Masoomeh Estaji, Ahdie Tahamtani
Abstract:
Considering the necessity of conducting research and publishing academic papers during Master's and Ph.D. programs, graduate students are in dire need of improving their writing skills through either writing courses or self-study. One key feature that can make academic papers read as more sophisticated is the application of grammatical metaphors (GMs). These metaphors represent 'non-congruent' and 'implicit' ways of encoding meaning, through which one grammatical category is replaced by another, more implicit counterpart, which can alter the reader's understanding of the text as well. Although a number of studies have been conducted on the application of GMs across various disciplines, almost none has been devoted to the field of environmental management, and the scope of the previous studies has been relatively limited compared to the present work. In the current study, attempts were made to analyze the different types of GMs used in academic papers published in top-tier journals in the field of environmental management, and to compile a list of the most frequently used GMs based on their functions in this particular discipline, so as to make the teaching of academic writing courses more explicit and the composition of academic texts more well-structured. To fulfill these purposes, a corpus-based analysis based on the two theoretical models of Martin et al. (1997) and Liardet (2014) was run. Through two stages of manual analysis and concordancing, ten recent academic articles comprising 132,490 words published in two prestigious journals were precisely scrutinized. The results showed that, across the IMRaD sections of the articles, material processes were the most frequent type of ideational GM, with the relational and mental categories ranking second and third, respectively. Regarding the use of interpersonal GMs, objective expanding metaphors were the most numerous.
In contrast, subjective interpersonal metaphors, whether expanding or contracting, were the least significant. This suggests that scholars in the field of environmental management tend to focus on the main procedures and explain technical phenomena in detail, rather than to compare and contrast other statements and subjective beliefs. Moreover, since no instances of verbal ideational metaphors were detected, it can be deduced that the act of 'saying or articulating' something might be against the standards of the academic genre. One other assumption is that the application of ideational GMs is context-embedded and that the more technical they are, the less frequent they become. For further studies, it is suggested that the employment of GMs be studied in a wider scope and in other disciplines, and that the third type of GM, known as 'textual' metaphors, be included as well.
Keywords: English for specific purposes, grammatical metaphor, academic texts, corpus-based analysis
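A first computational pass over such a corpus is often a crude suffix-based scan for nominalizations, a classic surface marker of ideational GM. The Python sketch below is a toy illustration, not the Martin et al. or Liardet coding scheme, and the suffix list is an assumption:

```python
import re
from collections import Counter

# Derivational suffixes that often signal a process re-encoded as a noun.
NOMINALIZATION = re.compile(
    r"\b\w+(?:tion|sion|ment|ness|ity|ance|ence)s?\b", re.IGNORECASE)

def nominalization_counts(text):
    """Count candidate nominalizations in a text; real GM coding would
    still require manual disambiguation (e.g. 'station' is no metaphor)."""
    return Counter(m.group(0).lower() for m in NOMINALIZATION.finditer(text))
```

Candidates flagged this way would then be checked by hand against the congruent (verbal or clausal) wording, which is where the two-stage manual analysis above comes in.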
Procedia PDF Downloads 166
939 Spatial Architecture Impact in Mediation Open Circuit Voltage Control of Quantum Solar Cell Recovery Systems
Authors: Moustafa Osman Mohammed
Abstract:
Photocurrent generation influences ultra-high-efficiency solar cells based on self-assembled quantum dot (QD) nanostructures. Nanocrystal quantum dots (QDs) promise a great enhancement in solar cell efficiency through the use of quantum confinement to tune absorbance across the solar spectrum and to enable multi-exciton generation. Based on theoretical predictions, QDs have the potential to improve system efficiency to greater than 50%. In solar cell devices, an intermediate band is formed by the electron levels in the quantum dot system. The spatial architecture explores how a solar cell can produce not only a high open-circuit voltage (> 1.7 eV) but also large short-circuit currents due to the efficient absorption of sub-bandgap photons. In the proposed QD system, the structure allows the barrier material to absorb wavelengths below 700 nm, while multi-photon processes in the quantum dots absorb wavelengths up to 2 µm. The assembly of the electronic model is flexible enough to represent the atomic and molecular structure and the material properties, so as to tune the energy bandgaps of the barrier and the quantum dot to their respective optimum values. In terms of energy conversion, the efficiency and cost of the unified electronic structure outperform a pair of multi-junction solar cells, as obtained in rigorous tests to quantify the errors. The milestone toward achieving the claimed high-efficiency solar cell device is controlling the energy bandgap edges between the barrier material and the quantum dot system within the design limits of the medium. Despite this remarkable potential for high photocurrent generation, the achievable open-circuit voltage (Voc) is fundamentally limited due to non-radiative recombination processes in QD solar cells.
The orientation of the voltage recovery system is compared theoretically with the experimental Voc variation; an upper limit is obtained from one-diode modeling of cells with different bandgaps (Eg), as classified in the proposed spatial architecture. The opportunity for improving Voc is estimated at greater than approximately 1 V by using smaller QDs through QD solar cell recovery systems, as confined to other micro- and nano-operation states.
Keywords: nanotechnology, photovoltaic solar cell, quantum systems, renewable energy, environmental modeling
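The one-diode upper limit referred to above follows from the standard expression Voc = (n k T / q) ln(Jsc/J0 + 1). The Python sketch below uses textbook physical constants; the current densities are illustrative numbers, not the paper's measurements:

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
Q_E = 1.602176634e-19   # elementary charge, C

def open_circuit_voltage(j_sc, j_0, n_ideality=1.0, temp_k=300.0):
    """One-diode Voc estimate (volts) from the short-circuit and dark
    saturation current densities, given in the same units."""
    v_thermal = n_ideality * K_B * temp_k / Q_E
    return v_thermal * math.log(j_sc / j_0 + 1.0)

# Illustrative: ~30 mA/cm^2 photocurrent against a small dark current.
voc = open_circuit_voltage(j_sc=30e-3, j_0=1e-12)
```

A lower J0, i.e. less non-radiative recombination, raises Voc only logarithmically, which is exactly why recombination so stubbornly limits Voc in QD cells.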
Procedia PDF Downloads 154
938 Creativity and Innovation in Postgraduate Supervision
Authors: Rajendra Chetty
Abstract:
The paper aims to address two aspects of postgraduate studies: interdisciplinary research and creative models of supervision. Interdisciplinary research can be viewed as a key imperative for solving complex problems. While excellent research requires a context of disciplinary strength, the cutting edge is often found at the intersection between disciplines. Interdisciplinary research foregrounds a team approach in which information, methodologies, designs, and theories from different disciplines are integrated to advance fundamental understanding or to solve problems whose solutions are beyond the scope of a single discipline. Our aim should also be to generate research that transcends the original disciplines, i.e. transdisciplinary research. Complexity is characteristic of the knowledge economy; hence, postgraduate research and engaged scholarship should be viewed by universities as primary vehicles through which knowledge can be generated to have a meaningful impact on society. There are far too many 'ordinary' studies that fall into the realm of credentialism and certification, as opposed to significant studies that generate new knowledge and provide a trajectory for further academic discourse. Secondly, the paper looks at models of supervision that differ from the dominant 'apprentice' or individual approach. A reflective practitioner approach is used to discuss a range of supervision models that resonate well with the principles of interdisciplinarity, growth in the postgraduate sector and a commitment to engaged scholarship. The global demand for postgraduate education has resulted in increased intake and new demands on limited supervision capacity at institutions. Team supervision lodged within large-scale research projects, working with a cohort of students within a research theme, the journal-article route to doctoral studies and the professional PhD are some of the models that provide an alternative to the traditional approach.
International cooperation should be encouraged in the production of high-impact research, and institutions should be committed to stimulating international linkages, which would result in co-supervision and mobility of postgraduate students and global significance of postgraduate research. International linkages are also valuable in increasing the capacity for supervision at new and developing universities. Innovative co-supervision and joint-degree options with global partners should be explored within strategic planning for innovative postgraduate programmes. Co-supervision of PhD students is probably the strongest driver (besides funding) of collaborative research, as it provides the glue of shared interest, advantage and commitment between supervisors. The students' fields serve and inform the co-supervisors' own research agendas and help to shape over-arching research themes through shared research findings.
Keywords: interdisciplinarity, internationalisation, postgraduate, supervision
Procedia PDF Downloads 237
937 Improving Low English Oral Skills of 5 Second-Year English Major Students at Debark University
Authors: Belyihun Muchie
Abstract:
This study investigates the low English oral communication skills of five second-year English major students at Debark University. It aims to identify the key factors contributing to their weaknesses and propose effective interventions to improve their spoken English proficiency. Mixed-methods research will be employed, utilizing observations, questionnaires, and semi-structured interviews to gather data from the participants. To clearly identify these factors, structured and informal observations will be employed; the former will be used to assess fluency, pronunciation, vocabulary use, and grammar accuracy, while the latter is suited to observing the natural interactions and communication patterns of learners in the classroom setting. The questionnaires will assess the students' self-perceptions of their skills, perceived barriers to fluency, and preferred learning styles. Interviews will delve deeper into their experiences and explore specific obstacles faced in oral communication. Data analysis will involve both quantitative and qualitative responses: the structured observation and questionnaire data will be analyzed quantitatively, whereas the informal observation notes and interview transcripts will be analyzed thematically. Findings will be used to identify the major causes of low oral communication skills, such as limited vocabulary, grammatical errors, pronunciation difficulties, or lack of confidence. They will also help in developing targeted solutions addressing these causes, such as intensive pronunciation practice, conversation simulations, personalized feedback, or anxiety-reduction techniques. Finally, the findings will guide the design of an intervention plan for implementation during the action research phase.
The study's outcomes are expected to provide valuable insights into the challenges faced by English major students in developing oral communication skills, contribute to the development of evidence-based interventions for improving spoken English proficiency in similar contexts, and offer practical recommendations for English language instructors and curriculum developers to enhance student learning outcomes. By addressing the specific needs of these students and implementing tailored interventions, this research aims to bridge the gap between theoretical knowledge and practical speaking ability, equipping them with the confidence and skills to flourish in English communication settings.
Keywords: oral communication skills, mixed-methods, evidence-based interventions, spoken English proficiency
Procedia PDF Downloads 50
936 A Novel Approach to 3D Thrust Vectoring CFD via Mesh Morphing
Authors: Umut Yıldız, Berkin Kurtuluş, Yunus Emre Muslubaş
Abstract:
Thrust vectoring, especially in military aviation, is a concept that sees much use to improve maneuverability in already agile aircraft. As this concept is fairly new and cost-intensive to design and test, computational methods are useful in easing the preliminary design process. Computational Fluid Dynamics (CFD) can be utilized in many forms to simulate nozzle flow, and there exist various CFD studies of both 2D mechanical and 3D injection-based thrust vectoring; yet 3D mechanical thrust vectoring analyses, at this point in time, lack variety. Additionally, the freely available test data are constrained to limited pitch angles and geometries. In this study, based on a test case provided by NASA, both steady and unsteady 3D CFD simulations are conducted to examine the aerodynamic performance of a mechanical thrust vectoring nozzle model and to validate the numerical model used. Steady analyses are performed to verify the flow characteristics of the nozzle at pitch angles of 0, 10 and 20 degrees, and the results are compared with experimental data. It is observed that the pressure data obtained on the inner surface of the nozzle at each specified pitch angle, under flow conditions with pressure ratios of 1.5, 2 and 4 and at azimuthal angles of 0, 45, 90, 135, and 180 degrees, exhibit a high level of agreement with the corresponding experimental results. To validate the CFD model, the insights from the steady analyses are utilized, followed by unsteady analyses covering a wide range of pitch angles from 0 to 20 degrees. Throughout the simulations, a mesh morphing method using a carefully calculated mathematical shape deformation model, which reproduces the vectored nozzle shape exactly at each point of its travel, is employed to dynamically alter the divergent part of the nozzle over time within this pitch angle range.
The mesh-morphing-based vectored nozzle shapes were compared with the drawings provided by NASA, ensuring a complete match was achieved. This computational approach allowed for the creation of a comprehensive database of results without the need to generate separate solution domains. The database contains results at every 0.01° increment of nozzle pitch angle. The unsteady analyses, generated using the morphing method, are found to be in excellent agreement with experimental data, further confirming the accuracy of the CFD model.
Keywords: thrust vectoring, computational fluid dynamics, 3D mesh morphing, mathematical shape deformation model
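The essence of such a shape deformation model is moving mesh nodes as a smooth function of pitch angle instead of remeshing. The 2D numpy sketch below rotates the nodes of the divergent section about an assumed hinge plane; the hinge location and node coordinates are invented for illustration and are not NASA's geometry:

```python
import numpy as np

def morph_divergent_section(nodes, hinge_x, pitch_deg):
    """Rotate all nodes downstream of the hinge plane (x > hinge_x)
    about the hinge point by the pitch angle; upstream nodes in the
    convergent section are left untouched."""
    theta = np.radians(pitch_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    morphed = nodes.astype(float).copy()
    mask = morphed[:, 0] > hinge_x
    local = morphed[mask] - [hinge_x, 0.0]     # coordinates relative to hinge
    morphed[mask] = local @ rot.T + [hinge_x, 0.0]
    return morphed
```

Evaluating such a deformation at every 0.01° increment yields the kind of node-wise database described above, with the boundary motion fed directly to the unsteady solver.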
Procedia PDF Downloads 81
935 Comparative Analysis of the Antioxidant Capacities of Pre-Germinated and Germinated Pigmented Rice (Oryza sativa L. Cv. Superjami and Superhongmi)
Authors: Soo Im Chung, Lara Marie Pangan Lo, Yao Cheng Zhang, Su Jin Nam, Xingyue Jin, Mi Young Kang
Abstract:
Rice (Oryza sativa L.) is one of the most widely consumed grains. Due to growing demand for rice as a potential functional food and nutraceutical source, and the increasing awareness of people towards a healthy diet and good quality of living, much research dwells upon the development of new rice cultivars for consumption. However, studies on the antioxidant capacities of newly developed rice are limited, as are studies on the effects of germination in these cultivars. Therefore, this study focused on the antioxidant potential of pre-germinated and germinated pigmented rice cultivars developed in South Korea, namely the purple cultivar Superjami (SJ) and the red cultivar Superhongmi (SH), in comparison with non-pigmented Normal Brown (NB) rice. The powdered rice grain samples were extracted with 80% methanol and their antioxidant activities were determined. The results showed that the pre-germinated pigmented rice cultivars have higher Fe2+ chelating ability, reducing power (RP), 2,2'-azinobis[3-ethylbenzthiazoline]-6-sulfonic acid (ABTS) radical scavenging and superoxide dismutase (SOD) activity than the control NB rice. Moreover, the germination process induced a significant increase in the antioxidant activities of all the rice samples regardless of their strains. The purple rice SJ showed greater Fe2+ chelating (88.82 ± 0.53%), RP (0.82 ± 0.01), ABTS (143.63 ± 2.38 mg VCEAC/100 g) and SOD (59.31 ± 0.48%) activities than the red grain SH, with the control NB having the lowest antioxidant potential among the three rice samples examined. The effective concentration at 50% (EC50) of 1,1-diphenyl-2-picrylhydrazyl (DPPH) and hydroxyl radical (-OH) scavenging activity for the rice samples was also obtained.
SJ showed lower EC50 values for its DPPH (3.81 ± 0.15 mg/mL) and •OH (5.19 ± 0.08 mg/mL) radical scavenging activities than the red grain SH and the control NB rice, indicating that even at lower concentrations it readily exhibits antioxidant effects against reactive oxygen species (ROS). These results clearly suggest the higher antioxidant potential of pigmented rice varieties compared with the widely consumed NB rice, and show that pigmented rice varieties can exhibit their antioxidant activities even at low concentrations. Germination further enhanced the antioxidant capacities of the rice samples regardless of type. With these results at hand, these new rice varieties can be further developed as a good source of biofunctional elements to help alleviate the growing number of metabolic disorder cases.
Keywords: antioxidant capacity, germinated rice, pigmented rice, super hongmi, superjami
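The EC50 figures quoted above can be recovered from a dose-response curve by interpolation. A minimal sketch, with hypothetical DPPH scavenging data (the function name and all values are illustrative, not from the study):

```python
def ec50_from_curve(concs, inhibitions):
    """Estimate EC50 by linear interpolation between the two measured
    concentrations that bracket 50% scavenging activity."""
    pairs = sorted(zip(concs, inhibitions))
    for (c0, y0), (c1, y1) in zip(pairs, pairs[1:]):
        if y0 <= 50.0 <= y1:
            # linear interpolation on the bracketing segment
            return c0 + (50.0 - y0) * (c1 - c0) / (y1 - y0)
    raise ValueError("50% activity not bracketed by the measurements")

# hypothetical DPPH dose-response data (mg/mL vs % scavenging)
concs = [1.0, 2.0, 4.0, 8.0]
inhib = [18.0, 33.0, 52.0, 79.0]
print(round(ec50_from_curve(concs, inhib), 2))
```

A lower EC50 from such a curve corresponds to the stronger antioxidant activity reported for SJ.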
934 Finite Element Molecular Modeling: A Structural Method for Large Deformations
Authors: A. Rezaei, M. Huisman, W. Van Paepegem
Abstract:
Atomic interactions in molecular systems are mainly studied by particle mechanics. Nevertheless, researchers have also put considerable effort into simulating them with continuum methods. In the early 2000s, simple equivalent finite element models were developed to study the mechanical properties of carbon nanotubes and graphene in composite materials. Since then, many researchers have employed similar structural simulation approaches to obtain mechanical properties of nanostructured materials, to simplify the interface behavior of fiber-reinforced composites, and to simulate defects in carbon nanotubes or graphene sheets. These structural approaches, however, are limited to small deformations because of their complicated local rotational coordinates. This article proposes a method for the finite element simulation of molecular mechanics, here called Structural Finite Element Molecular Modeling (SFEMM). SFEMM improves on the available structural approaches for large deformations without using any rotational degrees of freedom. Moreover, the method simulates molecular conformation, which is a significant advantage over previous approaches. Technically, the method uses nonlinear multipoint constraints to simulate the kinematics of atomic multibody interactions. Only truss elements are employed, and the bond potentials are implemented through constitutive material models. Because the equilibrium bond length, bond angles, and bond-torsion potential energies are intrinsic material parameters, the model is independent of initial strains or stresses. In this paper, the SFEMM method has been implemented in the ABAQUS finite element software. The constraints and material behaviors are modeled through two Fortran subroutines. The method is verified for the bond stretch, bond angle, and bond torsion of carbon atoms.
Furthermore, the capability of the method in the conformation simulation of molecular structures is demonstrated via a case study of a graphene sheet. In brief, SFEMM builds a framework offering more flexible features than conventional molecular finite element models, supporting structural relaxation modeling and large deformations without incorporating local rotational degrees of freedom. Potentially, the method is a significant step towards comprehensive molecular modeling with the finite element technique, and thereby towards concurrently coupling an atomistic domain to a solid continuum domain within a single finite element platform.
Keywords: finite element, large deformation, molecular mechanics, structural method
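The claim that the equilibrium bond length is an intrinsic material parameter, making the relaxed state independent of initial strains, can be illustrated with a toy one-dimensional relaxation. This is a hypothetical sketch, not the SFEMM implementation; the spring constant, equilibrium length, and step sizes are illustrative:

```python
# Minimal 1-D illustration of a bond-stretch potential treated as a
# constitutive model: a harmonic bond E(r) = 0.5*k*(r - r0)^2 relaxed
# by gradient descent. r0 (equilibrium bond length) is an intrinsic
# parameter, so the relaxed state is independent of the initial strain.
def relax_bond(r_init, r0=0.142, k=650.0, lr=1e-4, steps=20000):
    r = r_init
    for _ in range(steps):
        grad = k * (r - r0)     # dE/dr, the residual bond force
        r -= lr * grad          # descend toward equilibrium
    return r

# two different initial strains relax to the same equilibrium length
print(round(relax_bond(0.10), 4), round(relax_bond(0.20), 4))
```

In the real method the analogous role is played by the truss elements' constitutive models and the nonlinear multipoint constraints, with the solver performing the relaxation.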
933 Subjectivity in Miracle Aesthetic Clinic Ambient Media Advertisement
Authors: Wegig Muwonugroho
Abstract:
Subjectivity in advertising is a 'power' possessed by advertisements to construct trends, concepts, truths, and ideologies through the subconscious mind. Advertisements, in performing their function as message conveyors, use visual representation to suggest ideals to the public. Ambient media is an advertising medium that makes the best use of the environment in which the advertisement is placed. Miracle Aesthetic Clinic (Miracle) popularizes the visual representation of its ambient media advertisement through the omission of the faces of the two female mannequins that serve as its ambient media models. Usually, the face of a model in an advertisement is an image commodity with selling value; however, the faces of the ambient media models in the Miracle advertising campaign are pressed down onto a table and against a wall. This concealment of the face creates not only a paradox of subjectivity but also a plurality of meaning. This research applies the critical discourse analysis method to analyze subjectivity and gain insight into the ambient media's meaning. First, in the textual analysis stage, the attributes placed on the female mannequins imply that the models represent modern women, identified with the identities of their social milieus. The communication signs constructed are of women who lose their subjectivity and 'feel embarrassed' to show their faces in public because of pimples. Second, the analysis of discourse practice shows that ambient media as a communication medium has been comprehensively responded to by the targeted audiences. Ambient media plays the role of an actor because of its eye-catching setting, occupying space in areas where the public wanders. Indeed, when the public realizes that the ambient media models are motionless, unlike humans, a stronger relation appears, marked by several responses from the targeted audiences.
Third, in the analysis of social practice stage, soap operas and celebrity gossip shows on television constitute a dominant discourse influencing the advertisement's meaning. The subjectivity of the Miracle advertisement corners women through the absence of women's participation in public space, the representation of women in isolation, and the portrayal of women as anxious about their social rank when their faces suffer from pimples. The ambient media of the Miracle advertising campaign is quite successful in constructing a new discourse of facial beauty that is not limited to benchmarks of conventional beauty virtues: the idea of beauty can also be presented by visualizing 'when a woman does not look good'.
Keywords: ambient media, advertisement, subjectivity, power
932 The Relationship between the Skill Mix Model and Patient Mortality: A Systematic Review
Authors: Yi-Fung Lin, Shiow-Ching Shun, Wen-Yu Hu
Abstract:
Background: A skill mix model is regarded as one of the most effective methods of reducing nursing shortages, as well as easing nursing staff workloads and labor costs. Although this model shows several benefits for the health workforce, the relationship between the optimal skill mix model and the patient mortality rate remains to be discovered. Objectives: This review aimed to explore the relationship between the skill mix model and the patient mortality rate in acute care hospitals. Data Sources: A systematic search of the PubMed, Web of Science, Embase, and Cochrane Library databases was conducted for studies published between January 1986 and March 2022. Review methods: Two independent reviewers screened the titles and abstracts based on selection criteria, extracted the data, and performed critical appraisals of each included study using the STROBE checklist. Studies that focused on adult patients in acute care hospitals and reported both the skill mix model and the patient mortality rate were included in the analysis. Results: The six included studies were conducted in the USA, Canada, Italy, Taiwan, and European countries (Belgium, England, Finland, Ireland, Spain, and Switzerland), covering patients in medical, surgical, and intensive care units. Their skill mix teams comprised both nurses and nursing assistants. The main finding is that three studies (324,592 participants) show evidence of lower mortality rates in hospitals with a higher percentage of registered nurse staff (range 36.1%-100%), but three articles (1,122,270 participants) did not find the same result (range 46%-96%). However, based on the appraisal findings, all of the studies showing a significant association meet good quality standards, compared with only one-third of their counterparts. Conclusions: In light of the limited amount and quality of published research in this review, it is prudent to treat the findings with caution.
Although the evidence is of insufficient certainty to draw conclusions about the relationship between nurse staffing levels and patient mortality, this review points the direction for future studies. The limitation of this review is the variation in skill mix models among countries and institutions, which made a meta-analysis for further comparison impossible.
Keywords: nurse staffing level, nursing assistants, mortality, skill mix
931 Imaging of Underground Targets with an Improved Back-Projection Algorithm
Authors: Alireza Akbari, Gelareh Babaee Khou
Abstract:
Ground-penetrating radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted considerable attention for the detection of shallow subsurface targets, such as landmines and unexploded ordnance, and for through-wall imaging in security applications. For a monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of the subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of the buried objects is essential in most GPR applications. Therefore, the hyperbolic signature in the space-time GPR image is typically transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal in a typical GPR image is to display the spatial location and the reflectivity of an underground object. The main challenge of GPR imaging is therefore to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is first used for image reconstruction. The standard BP algorithm is limited in its robustness against strong noise and numerous artifacts, which adversely affect subsequent tasks such as target detection. Thus, an improved BP based on cross-correlation between the received signals is proposed to decrease noise and suppress artifacts.
To improve the quality of the results of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaging region. Compared with the standard BP scheme, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to simulated and real GPR data, and the results showed that it offers superior artifact suppression and produces images of high quality and resolution. To quantitatively describe the effect of artifact suppression on the imaging results, a focusing parameter was evaluated.
Keywords: algorithm, back-projection, GPR, remote sensing
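The delay-and-sum idea behind standard back-projection can be sketched as follows. The geometry, wave speed, and spike-shaped A-scans are hypothetical simplifications; the paper's improved algorithm additionally weights each pixel using cross-correlation between received signals, which is omitted here:

```python
import math

c = 3e8                       # assumed wave speed (vacuum, for simplicity)
dt = 1e-10                    # time-sample interval (s)
xs = [i * 0.05 for i in range(21)]   # antenna positions along the aperture (m)
xt, zt = 0.5, 0.4                    # true point-target location (m)

# synthetic monostatic A-scans: a unit spike at the two-way travel time,
# which traces out the hyperbola described in the abstract
n_samples = 400
traces = []
for x in xs:
    t = 2.0 * math.hypot(x - xt, zt) / c
    trace = [0.0] * n_samples
    trace[int(round(t / dt))] = 1.0
    traces.append(trace)

def back_project(xpix, zpix):
    """Standard BP: each pixel accumulates the trace samples at its own
    two-way delay, collapsing the hyperbola into a focused point."""
    total = 0.0
    for x, trace in zip(xs, traces):
        idx = int(round(2.0 * math.hypot(x - xpix, zpix) / c / dt))
        if 0 <= idx < n_samples:
            total += trace[idx]
    return total

# the true target location accumulates the most energy
print(back_project(0.5, 0.4) > back_project(0.3, 0.3))
```

Every antenna contributes coherently at the true pixel, while off-target pixels pick up at most a few stray samples, which is exactly the hyperbola-to-point focusing the abstract describes.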
930 Factors Influencing Capital Structure: Evidence from the Oil and Gas Industry of Pakistan
Authors: Muhammad Tahir, Mushtaq Muhammad
Abstract:
Capital structure is one of the key decisions taken by financial managers. This study investigates the factors influencing capital structure decisions in the oil and gas industry of Pakistan, using secondary data from the published annual reports of listed oil and gas companies of Pakistan covering the period from 2008 to 2014. Capital structure can be affected by profitability, firm size, growth opportunities, dividend payout, liquidity, business risk, and ownership structure. A panel data technique with an ordinary least squares (OLS) regression model was used in Stata to find the impact of a set of explanatory variables on capital structure. The OLS regression results suggest that dividend payout, firm size, and government ownership have the most significant impact on financial leverage. Dividend payout and government ownership are found to have a significant negative association with financial leverage, whereas firm size shows a positive relationship with financial leverage. Other variables significantly linked with financial leverage include growth opportunities, liquidity, and business risk. The results reveal a significant positive association between growth opportunities and financial leverage, whereas liquidity and business risk are negatively correlated with financial leverage. Profitability and managerial ownership exhibit an insignificant relationship with financial leverage. This study contributes to the existing managerial finance literature with certain managerial implications. Academically, it describes the factors affecting the capital structure decisions of oil and gas companies in Pakistan and adds recent empirical evidence to the financial literature in Pakistan. Researchers have studied capital structure in Pakistan in general and in specific industries; nevertheless, the literature on this issue is still limited, and this study attempts to fill that gap.
This study has practical implications at both the firm level and the individual investor/lender level. The results can be useful for investors and lenders in making investment and lending decisions. Further, they can help financial managers frame an optimal capital structure that takes into consideration the factors shown by this study to affect capital structure decisions. These results will help financial managers decide whether to issue stock or debt for future investment projects.
Keywords: capital structure, multicollinearity, ordinary least square (OLS), panel data
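The estimation mechanics behind such an OLS model can be sketched with a single regressor. The data below are hypothetical, and the study's actual model is a multi-variable panel OLS estimated in Stata:

```python
# Hypothetical pooled-OLS sketch of the leverage model: one regressor
# (log firm size) illustrates the closed-form least-squares estimate.
def ols_slope_intercept(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    beta = sxy / sxx                  # slope = cov(x, y) / var(x)
    return beta, my - beta * mx       # intercept from the means

# illustrative (invented) data: log firm size vs. financial leverage
size = [4.1, 4.8, 5.2, 5.9, 6.4, 7.0]
lev  = [0.31, 0.35, 0.38, 0.44, 0.47, 0.52]
beta, alpha = ols_slope_intercept(size, lev)
print(round(beta, 3))   # positive slope, consistent with the size result
```

A positive estimated slope mirrors the reported positive association between firm size and leverage; the panel setting adds firm and time dimensions but the least-squares core is the same.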
929 Long Term Survival after a First Transient Ischemic Attack in England: A Case-Control Study
Authors: Padma Chutoo, Elena Kulinskaya, Ilyas Bakbergenuly, Nicholas Steel, Dmitri Pchejetski
Abstract:
Transient ischaemic attacks (TIAs) are warning signs of future strokes, and TIA patients are at increased risk of stroke and cardiovascular events after a first episode. The majority of studies on TIA have focused on the occurrence of these ancillary events after a TIA; long-term mortality after TIA has received only limited attention. We undertook this study to determine the long-term hazards of all-cause mortality following a first episode of TIA, using anonymised electronic health records (EHRs). We conducted a retrospective case-control study using electronic primary health care records from The Health Improvement Network (THIN) database. Patients born in or before 1960, resident in England, with a first diagnosis of TIA between January 1986 and January 2017 were matched to three controls each on age, sex, and general medical practice. The primary outcome was all-cause mortality. The hazards of all-cause mortality were estimated using a time-varying Weibull-Cox survival model that included both scale and shape effects and a random frailty effect of GP practice. 20,633 cases and 58,634 controls were included. Cases aged 39 to 60 years at the first TIA event had the highest hazard ratio (HR) of mortality compared with matched controls (HR = 3.04, 95% CI 2.91-3.18). The HRs for cases aged 61-70 years, 71-76 years, and 77+ years were 1.98 (1.55-2.30), 1.79 (1.20-2.07), and 1.52 (1.15-1.97) compared with matched controls. Aspirin provided long-term survival benefits to cases: cases aged 39-60 years on aspirin had HRs of 0.93 (0.84-1.00), 0.90 (0.82-0.98), and 0.88 (0.80-0.96) at 5, 10, and 15 years, respectively, compared with cases in the same age group who were not on antiplatelets. Similar beneficial effects of aspirin were observed in the other age groups. There were no significant survival benefits with other antiplatelet options, and no survival benefits of antiplatelet drugs were observed in controls.
Our study highlights the excess long-term risk of death among TIA patients and cautions that TIA should not be treated as a benign condition. The study further suggests aspirin as the better option for secondary prevention in TIA patients, compared with the clopidogrel recommended by NICE guidelines. Management of risk factors and treatment strategies remain important challenges in reducing the burden of disease.
Keywords: dual antiplatelet therapy (DAPT), general practice, multiple imputation, The Health Improvement Network (THIN), hazard ratio (HR), Weibull-Cox model
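The hazard-ratio machinery behind such results can be illustrated with a bare-bones Cox partial-likelihood fit for one binary covariate (case vs. control). This is a sketch on synthetic data, not the paper's time-varying Weibull-Cox frailty model, and it uses a crude grid-refinement search rather than a proper Newton solver:

```python
import math

def cox_loglik(beta, times, events, x):
    """Breslow partial log-likelihood for a single covariate x."""
    ll = 0.0
    for i, (t, d) in enumerate(zip(times, events)):
        if d:  # only event times contribute
            risk = sum(math.exp(beta * xj)
                       for tj, xj in zip(times, x) if tj >= t)
            ll += beta * x[i] - math.log(risk)
    return ll

def fit_hr(times, events, x):
    """Maximise the partial likelihood by grid refinement; return HR."""
    lo, hi = -5.0, 5.0
    for _ in range(60):
        grid = [lo + k * (hi - lo) / 20 for k in range(21)]
        best = max(grid, key=lambda b: cox_loglik(b, times, events, x))
        span = (hi - lo) / 20
        lo, hi = best - span, best + span
    return math.exp(best)

# hypothetical survival data: cases (x=1) tend to die earlier
times  = [2, 3, 4, 5, 8, 9, 10, 12]
events = [1, 1, 1, 1, 1, 1, 0, 1]     # 0 = censored
x      = [1, 1, 1, 0, 1, 0, 0, 0]     # 1 = case, 0 = control
print(fit_hr(times, events, x) > 1.0)  # cases show an elevated hazard
```

An estimated HR above 1 for the case indicator corresponds to the excess mortality the study reports; the paper's model additionally allows Weibull scale/shape effects and a practice-level frailty.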
928 Effect of Malnutrition at Admission on Length of Hospital Stay among Adult Surgical Patients in Wolaita Sodo University Comprehensive Specialized Hospital, South Ethiopia: Prospective Cohort Study, 2022
Authors: Yoseph Halala Handiso, Zewdi Gebregziabher
Abstract:
Background: Malnutrition in hospitalized patients remains a major public health problem in both developed and developing countries. Although malnourished patients are more prone to longer hospital stays, there are limited data on the magnitude of malnutrition and its effect on length of stay among surgical patients in Ethiopia, and nutritional assessment is often a neglected component of health service practice. Objective: This study aimed to assess the prevalence of malnutrition at admission and its effect on the length of hospital stay among adult surgical patients in Wolaita Sodo University Comprehensive Specialized Hospital, South Ethiopia, 2022. Methods: A facility-based prospective cohort study was conducted among 398 adult surgical patients admitted to the hospital. Participants were chosen using a convenience sampling technique. The Subjective Global Assessment (SGA) was used to determine, within 48 hours of admission, the nutritional status of patients with a minimum stay of 24 hours. Data were collected using the Open Data Kit (ODK) version 2022.3.3 software, while Stata version 14.1 was employed for statistical analysis. A Cox regression model was used to determine the effect of malnutrition on the length of hospital stay (LOS) after adjusting for several potential confounders recorded at admission. The adjusted hazard ratio (AHR) with a 95% confidence interval was used to quantify the effect of malnutrition. Results: The prevalence of hospital malnutrition at admission was 64.32% (95% CI: 59%-69%) according to the SGA classification. Adult surgical patients who were malnourished at admission had a longer median LOS (12 days; 95% CI: 11-13) than well-nourished patients (8 days; 95% CI: 8-9); that is, malnourished patients had a lower chance of discharge with improvement, i.e. a prolonged LOS (AHR: 0.37, 95% CI: 0.29-0.47), compared with well-nourished patients.
The presence of comorbidity (AHR: 0.68, 95% CI: 0.50-0.90), polymedication (AHR: 0.69, 95% CI: 0.55-0.86), and a history of admission within the previous five years (AHR: 0.70, 95% CI: 0.55-0.87) were found to be significant covariates of the length of hospital stay (LOS). Conclusion: The magnitude of hospital malnutrition at admission was found to be high. Patients malnourished at admission had a higher risk of prolonged hospital stay than well-nourished patients. The presence of comorbidity, polymedication, and a history of admission were significant covariates of LOS. All stakeholders should give attention to reducing the magnitude of malnutrition and its covariates in order to reduce the burden of prolonged LOS.
Keywords: effect of malnutrition, length of hospital stay, surgical patients, Ethiopia
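A median length-of-stay comparison of the kind reported above can be read off Kaplan-Meier curves with discharge treated as the event. The data below are hypothetical, chosen only to reproduce medians of 12 and 8 days:

```python
def km_median(times, events):
    """Median time-to-event from a Kaplan-Meier curve (first time at
    which the survival estimate drops to 0.5 or below)."""
    s = 1.0
    at_risk = len(times)
    for t, d in sorted(zip(times, events)):
        if d:
            s *= (at_risk - 1) / at_risk   # KM step at an event time
        at_risk -= 1                       # censored subjects also leave
        if s <= 0.5:
            return t
    return None  # median not reached

# invented days-to-discharge for the two nutrition groups
malnourished = [9, 10, 11, 12, 12, 13, 15, 16]
well_fed     = [6, 7, 8, 8, 9, 9, 10, 11]
ev = [1] * 8                               # all discharged (no censoring)
print(km_median(malnourished, ev), km_median(well_fed, ev))
```

With discharge as the event, a longer median in the malnourished group is the same phenomenon the Cox model expresses as an AHR below 1 for discharge.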
927 Festival Gamification: Conceptualization and Scale Development
Authors: Liu Chyong-Ru, Wang Yao-Chin, Huang Wen-Shiung, Tang Wan-Ching
Abstract:
Although gamification has attracted attention and been applied in the tourism industry, limited literature can be found in tourism academia. Therefore, to contribute knowledge on festival gamification, it is essential to start by establishing a Festival Gamification Scale (FGS). This study defines festival gamification as the extent to which a festival involves game elements and game mechanisms. Based on self-determination theory, this study developed an FGS using a multi-study method. In study one, five FGS dimensions were identified through a literature review, followed by twelve in-depth interviews. A total of 296 statements were extracted from the interviews and later narrowed down to 33 items under six dimensions. In study two, 226 survey responses were collected from a cycling festival for exploratory factor analysis, resulting in twenty items under five dimensions. In study three, 253 survey responses were obtained from a marathon festival for confirmatory factor analysis, resulting in the final sixteen items under five dimensions. Results of criterion-related validity then confirmed the positive effects of these five dimensions on flow experience. In study four, to examine the model extension of the developed five-dimensional, 16-item FGS, which comprises the dimensions of relatedness, mastery, competence, fun, and narratives, a cross-validation analysis was performed using 219 survey responses from a religious festival. For the tourism academy, the FGS could further be applied in other sub-fields such as destinations, theme parks, cruise trips, or resorts. The FGS serves as a starting point for examining the mechanism by which festival gamification changes tourists' attitudes and behaviors. Future studies could follow up on the FGS by testing the outcomes of festival gamification or examining moderating effects that enhance those outcomes.
On the other hand, although the FGS has been tested at cycling, marathon, and religious festivals, the research settings are all in Taiwan; examining cultural differences in the FGS is a further direction for contributing knowledge on festival gamification. This study also offers several valuable practical implications. First, the FGS could be used in tourist surveys to evaluate the extent of gamification of a festival. Based on the results of a performance assessment with the FGS, festival management organizations and festival planners could learn the relative scores among the FGS dimensions and plan future improvements in gamifying the festival. Second, the FGS could be applied in positioning a gamified festival: festival management organizations and planners could first consider the features and type of their festival, and then gamify it by investing resources in the key FGS dimensions.
Keywords: festival gamification, festival tourism, scale development, self-determination theory
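Scale-development work of this kind typically checks the internal consistency of each dimension's items. A minimal sketch of Cronbach's alpha on hypothetical Likert responses for a four-item dimension (the items and scores are invented, not FGS data):

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(items)
    item_vars = sum(variance(it) for it in items)
    totals = [sum(resp) for resp in zip(*items)]   # per-respondent sums
    return k / (k - 1) * (1 - item_vars / variance(totals))

items = [  # rows = items, columns = six hypothetical respondents (1-5)
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
    [3, 4, 3, 4, 2, 4],
]
print(round(cronbach_alpha(items), 2))
```

An alpha around 0.7 or above is the conventional threshold for retaining a dimension's item set in scale development.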
926 Doped TiO2 Thin Films Microstructural and Electrical Properties
Authors: Mantas Sriubas, Kristina Bockute, Darius Virbukas, Giedrius Laukaitis
Abstract:
In this work, doped TiO2 (dopants: Ca, Mg) was investigated. A comparison between physical vapour deposition methods, namely electron beam vapour deposition and magnetron sputtering, was performed, and the structural and electrical properties of the formed thin films were investigated. Thin films were deposited on different types of substrates: SiO2, Alloy 600 (Fe-Ni-Cr), and Al2O3. The structural properties were investigated using an Ambios XP-200 profilometer, a Hitachi S-3400N scanning electron microscope (SEM), a Quad 5040 X-ray energy-dispersive spectroscope (EDS; Bruker AXS Microanalysis GmbH), and a D8 Discover X-ray diffractometer (XRD; Bruker AXS GmbH) with glancing-angle focusing geometry in a 20-70° range using Cu Kα1 (λ = 0.1540562 nm) radiation. Impedance spectroscopy measurements were performed using a Probostat® (NorECs AS) measurement cell in the frequency range from 10⁻¹ to 10⁶ Hz under reducing and oxidizing conditions in the temperature range of 200 °C to 1200 °C. Investigation of the e-beam deposited Ca- and Mg-doped TiO2 thin films shows that the films are dense, without any visible pores or cavities, and grow in zone T according to the Barna-Adamik structure zone model (SZM). The substrate temperature was kept at 600 °C during deposition, with Ts/Tm ≈ 0.32 (substrate temperature Ts, coating material melting temperature Tm). Surface diffusion is high; however, grain boundary migration is strongly limited at this temperature. This means that the structure is inhomogeneous, and the columnar structure is mostly visible in the upper part of the films. According to XRD, increasing the Ca dopant concentration increases the crystallinity of the formed thin films, the crystallite size increases linearly, and Ca dopants act as promoters of crystallization. The thin films comprise the anatase TiO2 phase, with the exception of 2% Ca-doped TiO2, where a small Ca peak arises.
In the case of Mg-doped TiO2, the intensities of the XRD peaks decrease with increasing Mg molar concentration, meaning there are fewer diffraction planes of a particular orientation in thin films with higher impurity concentrations. Thus, crystallinity decreases with increasing Mg concentration, and Mg dopants act as inhibitors. The impedance measurements show that the dopants changed the conductivity of the formed thin films; the conductivity varies from 10⁻³ S/cm to 10⁻⁴ S/cm at 800 °C under wet reducing conditions. The microstructure of the magnetron-sputtered thin TiO2 films differs from that of the thin films deposited by e-beam deposition, thereby influencing the other structural and electrical properties.
Keywords: electrical properties, electron beam deposition, magnetron sputtering, microstructure, titanium dioxide
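Conductivity values such as those above are commonly reduced to an activation energy via an Arrhenius fit, ln(σ) = ln(A) − Ea/(kB·T). A sketch with hypothetical conductivities in the reported 10⁻⁴ to 10⁻³ S/cm range (the temperatures and values are illustrative, not the study's data):

```python
import math

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def activation_energy(temps_k, sigmas):
    """Least-squares slope of ln(sigma) vs 1/T, returned as Ea in eV."""
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * K_B

temps = [873.0, 973.0, 1073.0]       # 600-800 degC in kelvin (hypothetical)
sig   = [1.2e-4, 3.5e-4, 8.9e-4]     # S/cm (hypothetical)
print(round(activation_energy(temps, sig), 2))
```

The extracted activation energy is what would typically be compared between the Ca- and Mg-doped series to quantify how the dopants change the conduction mechanism.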
925 Mechanical Characterization and CNC Rotary Ultrasonic Grinding of Crystal Glass
Authors: Ricardo Torcato, Helder Morais
Abstract:
The manufacture of crystal glass parts is based on obtaining the rough geometry by blowing and/or injection, generally followed by a set of manual finishing operations using cutting and grinding tools. The forming techniques used do not allow parts with complex shapes to be obtained repeatably, and the finishing operations require intensive specialized labor, resulting in high cycle times and production costs. This work aims to explore the digital manufacture of crystal glass parts by investigating new subtractive techniques for the automated, flexible finishing of these parts. Finishing operations are essential to meet customer demands in terms of crystal feel and shine. We investigate the applicability to crystal processing of different computerized finishing technologies, namely milling and grinding in a CNC machining center with and without ultrasonic assistance. Research in the field of grinding hard and brittle materials, while not extensive, has increased in recent years, and scientific knowledge about the machinability of crystal glass is still very limited. However, the unique properties of glass, such as high hardness and very low toughness, make any glass machining technology a very challenging process. This work measures the performance improvement brought about by the use of ultrasound compared with conventional crystal grinding. This presentation focuses on the mechanical characterization and analysis of the cutting forces in CNC machining of superior crystal glass (Pb ≥ 30%). For the mechanical characterization, the Vickers hardness test provides an estimate of the material hardness (Hv) and of the fracture toughness, based on the cracks that appear around the indentation. The mechanical impulse excitation test estimates the Young's modulus, shear modulus, and Poisson ratio of the material. For the cutting forces, a dynamometer was used to measure the forces in the face grinding process.
The tests were designed based on the Taguchi method to correlate the input parameters (feed rate, tool rotation speed, and depth of cut) with the output parameters (surface roughness and cutting forces), and ANOVA was used to optimize the process, seeking the best roughness at cutting forces that do not compromise the material structure or the tool life. This study was conducted for conventional grinding and for the ultrasonic grinding process with the same cutting tools. It was possible to determine the optimum cutting parameters for minimum cutting forces and for minimum surface roughness in both grinding processes. Ultrasonic-assisted grinding provides a better surface roughness than conventional grinding.
Keywords: CNC machining, crystal glass, cutting forces, hardness
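Taguchi analysis ranks parameter settings by signal-to-noise ratio; for surface roughness the "smaller-is-better" form applies. A sketch with hypothetical roughness replicates (the values are illustrative, not the study's measurements):

```python
import math

def sn_smaller_is_better(ys):
    """Taguchi smaller-is-better S/N ratio: -10*log10(mean(y^2))."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

# hypothetical surface-roughness replicates, Ra in micrometres
setting_a = [0.82, 0.79, 0.85]   # e.g. a conventional-grinding setting
setting_b = [0.41, 0.44, 0.39]   # e.g. an ultrasonic-assisted setting
sn_a = sn_smaller_is_better(setting_a)
sn_b = sn_smaller_is_better(setting_b)
print(sn_b > sn_a)   # the lower-roughness setting has the higher S/N
```

In a full Taguchi study the mean S/N per factor level, followed by ANOVA on the S/N ratios, identifies which of feed rate, spindle speed, and depth of cut dominates each response.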
924 Single-Parent Families and Its Impact on the Psycho Child Development in Schools
Authors: Sylvie Sossou, Grégoire Gansou, Ildevert Egue
Abstract:
Introduction: The mission of the family and the school is to educate and train citizens of the city. But family values, parental roles, and respect for life are collapsing in their traditional African form. Indeed, laxity with regard to divorce and liberal ideas about child rearing influence the emotional life of the child, and several causes may contribute to declining academic performance. In order to seek a psychological response to the issue, a study was conducted in 6 schools in the 9th district of Cotonou, a cosmopolitan city of Benin. Objective: To evaluate the impact of single parenthood on the child's psychological development. Materials and Methods: Questionnaires and interviews were used to gather verbal information. The questionnaires were administered to single parents and to their children aged 7 to 12 years (schoolchildren in the fourth, fifth, and sixth forms). The interviews were conducted with teachers and school leaders. We identified 209 children living with a single parent and 68 single parents. Results: Of the 209 children surveyed, 116 were cut off from the parental relational triangle in early childhood (before age 3). Psychologically, the separation caused sadness for 52 children, anger for 22, shame for 17, crying for 31, fear for 14, and silence for 58. Compared with children from complete families, these children experience feelings of aggression in 11.48% of cases, sadness in 30.64%, shame in 5.26%, tears in 6.69%, jealousy in 2.39%, and indifference in 2.87%. The intention to marry, expressed by 44.15% of the children, reflects a resolve to give their own offspring a happy childhood; 22.01% feel rejected, 11.48% are uncertain, and 25.36% gave no answer. 49.76% of the children want to see their family together, while 7.65% are against it, to avoid disputes and, in many cases, to save the mother from the father's physical abuse. 27.75% of the ex-partners decline responsibility for the care of the child.
Furthermore, family difficulties affect children's intellectual capacities: 37.32% of the children show school difficulties related to family problems, despite all the pressure from the single parent to see the child succeed. Single parenthood also affects intra-family relations: pressure (33.97%), nervousness (24.88%), overprotection (29.18%), and backbiting (11.96%) mark the lives of these families. Conclusion: At the end of the investigation, the results showed a causal relationship between psychological disorders, children's academic difficulties, and the quality of parental relationships. Other cases may exist, but the lack of resources limited the study to 6 schools. Early psychological care for these children is needed.
Keywords: single-parent, psycho child, school, Cotonou
Procedia PDF Downloads 387
923 Association between Organophosphate Pesticides Exposure and Cognitive Behavior in Taipei Children
Authors: Meng-Ying Chiu, Yu-Fang Huang, Pei-Wei Wang, Yi-Ru Wang, Yi-Shuan Shao, Mei-Lien Chen
Abstract:
Background: Organophosphate pesticides (OPs) are the most heavily used pesticides in agriculture in Taiwan. They are therefore commonly detected in the general public, including pregnant women and children. These compounds are proven endocrine disrupters that may affect neural development in humans. The aim of this study is to assess the OPs exposure of children at 2 years of age and to examine the association between exposure concentrations and neurodevelopmental effects. Methods: In a prospective cohort of 280 mother-child pairs, prenatal and postnatal urine samples were collected from each participant and analyzed for OP metabolites by gas chromatography-mass spectrometry. Six analytes were measured: dimethylphosphate (DMP), dimethylthiophosphate (DMTP), dimethyldithiophosphate (DMDTP), diethylphosphate (DEP), diethylthiophosphate (DETP), and diethyldithiophosphate (DEDTP). This study created combined concentration measures for dimethyl compounds (DMs), consisting of the three dimethyl metabolites (DMP, DMTP, and DMDTP); for diethyl compounds (DEs), consisting of the three diethyl metabolites (DEP, DETP, and DEDTP); and for all six dialkyl phosphates (DAPs). The Bayley Scales of Infant and Toddler Development (Bayley-III) was used to assess children's cognitive behavior at 2 years old. The association between OPs exposure and Bayley-III scale scores was determined using the Mann-Whitney U test. Results: The measurement of urine samples is still on-going. These preliminary data report on 56 children aged 2 from the cohort. The detection rates for DMP, DMTP, DMDTP, DEP, DETP, and DEDTP are 80.4%, 69.6%, 64.3%, 64.3%, 62.5%, and 75%, respectively. After adjusting for the creatinine concentrations of urine, the median levels (nmol/g creatinine) of urinary DMP, DMTP, DMDTP, DEP, DETP, DEDTP, DMs, DEs, and DAPs are 153.14, 53.32, 52.13, 19.24, 141.65, 192.17, 308.8, 311.6, and 702.11, respectively.
These urinary concentrations are considerably higher than those reported in other countries. Children's cognitive behavior was assessed using the three Bayley-III scales: cognitive, language, and motor. In the Mann-Whitney U test, children with higher levels of DEs had significantly lower motor scores (p=0.037), but no significant association was found between OPs exposure levels and either the cognitive or the language score. Conclusion: Despite the limited sample size, the results suggest that Taipei children are commonly exposed to OPs and that OPs exposure might affect the cognitive behavior of young children. This report will present more data to verify the results. Predictors of OPs concentrations, such as dietary pattern, will also be included.
Keywords: biomonitoring, children, neurodevelopment, organophosphate pesticides exposure
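The group comparison underlying the abstract's p-value can be illustrated with a small, self-contained sketch of the Mann-Whitney U statistic; the metabolite values below are invented for illustration and are not the study's data:

```python
from itertools import chain

def mann_whitney_u(group_a, group_b):
    """Compute the Mann-Whitney U statistic, averaging ranks over ties."""
    combined = sorted(chain(group_a, group_b))
    # Assign each distinct value the average of the 1-based ranks it occupies.
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    r1 = sum(ranks[v] for v in group_a)      # rank sum of group A
    n1, n2 = len(group_a), len(group_b)
    u1 = r1 - n1 * (n1 + 1) / 2
    u2 = n1 * n2 - u1
    return min(u1, u2)                       # conventionally reported U

# Hypothetical urinary DE levels (nmol/g creatinine) in two motor-score groups.
low_motor = [310.2, 455.1, 388.0, 512.6]
high_motor = [120.4, 98.7, 210.3, 150.9]
u = mann_whitney_u(low_motor, high_motor)
```

With the two groups fully separated as above, U is 0, its minimum possible value; in practice the statistic would then be referred to its null distribution (or a normal approximation) to obtain the p-value.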
Procedia PDF Downloads 141
922 Occupational Heat Stress Related Adverse Pregnancy Outcome: A Pilot Study in South India Workplaces
Authors: Rekha S., S. J. Nalini, S. Bhuvana, S. Kanmani, Vidhya Venugopal
Abstract:
Introduction: Pregnant women's occupational heat exposure has been linked to foetal abnormalities and pregnancy complications. Workplace heat is expected to lead to Adverse Pregnancy Outcomes (APO), especially in tropical countries where temperatures are rising and workplace cooling interventions are minimal. Effective interventions require in-depth understanding of, and evidence on, occupational heat stress and APO. Methodology: Approximately 800 pregnant women in and around Chennai who were employed in jobs requiring moderate to hard labour participated in this cohort study. During the study period (2014-2019), environmental heat exposures were measured using a Questemp WBGT monitor, and heat strain markers, namely Core Body Temperature (CBT) and Urine Specific Gravity (USG), were evaluated using an infrared thermometer and a refractometer, respectively. Self-reported health symptoms were collected using a validated HOTHAPS questionnaire. In addition, a postpartum follow-up with the mothers was conducted to collect APO-related data. Major findings of the study: Approximately 47.3% of pregnant workers had workplace WBGTs above the safe threshold value for moderate/heavy manual work (average WBGT of 26.6°C±1.0°C). About 12.5% of the workers had CBT levels above the normal range, and 24.8% had USG levels above 1.020, both of which indicate mild dehydration. Miscarriages (3%), stillbirths/preterm births (3.5%), and low birth weights (8.8%) were the most common adverse outcomes among the pregnant workers. In addition, WBGT exposures above TLVs during all trimesters were associated with a 2.3-fold increased risk of adverse fetal/maternal outcomes (95% CI: 1.4-3.8), after adjusting for potential confounders including age, education, socioeconomic status, history of abortion, stillbirth, preterm birth, LBW, and BMI.
The study determined that workplace WBGTs had direct short- and long-term effects on the health of both the mother and the foetus. Despite the study's limited scope, the findings provided valuable insights and highlighted the need for future comprehensive cohort studies and extensive data in order to establish effective policies that protect vulnerable pregnant women from the dangers of heat stress and promote reproductive health.
Keywords: adverse outcome, heat stress, interventions, physiological strain, pregnant women
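The WBGT index used above is conventionally computed as a weighted sum of natural wet-bulb, globe, and dry-bulb temperatures (ISO 7243). A minimal sketch, with invented temperature readings rather than the study's measurements:

```python
def wbgt_outdoor(t_nwb, t_g, t_db):
    """Outdoor WBGT with solar load: 0.7*Tnwb + 0.2*Tg + 0.1*Tdb (ISO 7243)."""
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_db

def wbgt_indoor(t_nwb, t_g):
    """Indoor / no-solar-load WBGT: 0.7*Tnwb + 0.3*Tg."""
    return 0.7 * t_nwb + 0.3 * t_g

# Hypothetical worksite readings in degrees Celsius.
wbgt = wbgt_outdoor(t_nwb=25.0, t_g=40.0, t_db=30.0)  # -> 28.5 °C
exceeds_limit = wbgt > 26.6  # average threshold WBGT cited in the abstract
```

Such a reading would place the worksite above the abstract's average threshold for moderate/heavy manual work, flagging it for intervention.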
Procedia PDF Downloads 72
921 Effects of Bleaching Procedures on Dentine Sensitivity
Authors: Suhayla Reda Al-Banai
Abstract:
Problem Statement: Tooth whitening has been used for over one hundred and fifty years. The question of tooth whiteness is a complex one, since whiteness varies from individual to individual, depending on age, culture, etc. The whiteness achieved after treatment may depend on the type of whitening system used. The process has a few side-effects, including tooth sensitivity and gingival irritation, although some individuals experience no pain or sensitivity following the procedure. Purpose: To systematically review the literature published up to 31st December 2021, to identify all relevant studies for inclusion, and to determine whether there is any evidence that whitening procedures result in tooth sensitivity. Aim: To systematically review the available published literature, identify all relevant studies for inclusion, and determine whether there is any evidence that the application of 10% and 15% carbamide peroxide in tooth whitening procedures results in tooth sensitivity. Material and Methods: A search of electronic databases (OVID MEDLINE and PubMed) and hand searching of relevant journals yielded 70 relevant papers; 49 studies were identified, 42 papers were subsequently excluded, and 7 studies were finally accepted for inclusion. Data extraction was conducted by two reviewers. The main outcome measures were the methodology and assessment used by investigators to evaluate tooth sensitivity in tooth whitening studies. Results: The reported evaluation of tooth sensitivity during tooth whitening procedures was based on the subjective responses of subjects rather than on a recognized evaluation methodology. One problem in evaluation was the lack of homogeneity in study design. Seven studies were included.
The included studies shared essential features, namely randomized groups, placebo controls, and double-blind or single-blind designs. Drop-out rates were reported in two of the included studies. Three of the included studies reported sensitivity at the baseline visit. Two of the included studies stated their exclusion criteria. Conclusions: The results were inconclusive due to the limited number of included studies, the study methodologies, and the evaluation of dentine sensitivity reported. Tooth whitening procedures adversely affect both hard and soft tissues in the oral cavity; side-effects are mild and transient in nature. Whitening solutions with greater than 10% carbamide peroxide cause more tooth sensitivity. Studies using nightguard vital bleaching with 10% carbamide peroxide reported two side effects, tooth sensitivity and gingival irritation, although tooth sensitivity was more prevalent.
Keywords: dentine, sensitivity, bleaching, carbamide peroxide
Procedia PDF Downloads 69
920 Combined Impact of Physical Activity and Dietary Quality on Depression Symptoms in U.S. Adults: An Analysis of NHANES 2007-2020 Data
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Depression has emerged as a growing public health issue, with the limited effectiveness of current treatment methods driving the search for modifiable lifestyle factors. Physical inactivity and poor dietary habits are consistently identified as factors associated with increased depression symptoms. While the independent effects of physical activity (PA) and dietary quality (DQ) on mental health are well established, the combined influence of both factors on depression has not been thoroughly examined in a representative sample of U.S. adults. This study aims to explore the individual and joint associations of PA and DQ with depression symptoms, highlighting their combined impact on adults across the U.S. Using data from the National Health and Nutrition Examination Survey (NHANES) from 2007 to 2020, we evaluated the relationships between PA (measured through metabolic equivalent (MET) minutes per week) and DQ (assessed using the Healthy Eating Index [HEI]-2015) and depression symptoms (defined by a score of ≥10 on the 9-item Patient Health Questionnaire [PHQ-9]). Participants were classified into four lifestyle categories: (1) healthy diet and active, (2) unhealthy diet but active, (3) healthy diet but inactive, and (4) unhealthy diet and inactive. Logistic regression models adjusted for relevant covariates were used to examine associations, with age-adjusted prevalence rates for depression calculated according to NHANES guidelines. Data from 21,530 participants, representing approximately 954 million U.S. adults aged 20-80 years, were analyzed. The overall age-adjusted prevalence of depression symptoms was 7.15%. A total of 83.1% of participants met PA recommendations, and 27.3% scored above the 60th percentile in the HEI-2015 index. Higher PA levels were inversely related to depression symptoms (adjusted odds ratio [AOR]: 0.805; 95% CI: 0.724-0.920), as was better dietary quality (AOR: 0.788; 95% CI: 0.690-0.910). 
A combination of healthy diet and adequate PA was associated with the lowest risk of depression symptoms (AOR: 0.635; 95% CI: 0.520-0.775) compared to inactive participants with unhealthy diets. Notably, participants with either a healthy diet or adequate PA, but not both, did not experience the same reduction in depression risk. This study highlights that the combination of a healthy diet and regular physical activity offers a synergistic protective effect against depression symptoms in U.S. adults. Public health initiatives targeting both dietary improvements and increased physical activity may significantly reduce the burden of depression across populations. Further research should focus on understanding the mechanisms underlying these interactions.
Keywords: dietary quality, physical activity, depression, healthy eating
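The adjusted odds ratios above come from multivariable logistic regression on survey-weighted NHANES data. As a much-simplified illustration of how an unadjusted odds ratio and its Wald 95% CI are derived from a 2x2 table (the counts below are invented, and survey weights and covariate adjustment are deliberately omitted):

```python
import math

def odds_ratio_ci(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Unadjusted odds ratio with a Wald 95% confidence interval from a 2x2 table."""
    a, b, c, d = exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    return odds_ratio, (lo, hi)

# Hypothetical counts: depression (PHQ-9 >= 10) among active vs. inactive adults.
or_, (lo, hi) = odds_ratio_ci(exposed_cases=10, exposed_noncases=90,
                              unexposed_cases=20, unexposed_noncases=80)
```

An OR below 1 with an upper CI bound above 1, as in this toy table, would not be statistically significant; the study's adjusted estimates, by contrast, have CIs entirely below 1.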
Procedia PDF Downloads 6
919 Assessing the Competence of Oral Surgery Trainees: A Systematic Review
Authors: Chana Pavneet
Abstract:
Background: In recent years, dentistry has placed greater emphasis on competency-based education (CBE) programmes. Undergraduate and postgraduate curricula have been reformed to reflect these changes, and adopting a CBE approach has been shown to benefit trainees and to place an emphasis on continuous lifelong learning. The literature is vast; however, very little work addresses the assessment of competence in dentistry, and even less in oral surgery, with the majority of the literature consisting of opinion pieces. Some small-scale studies have investigated tools for assessing competence in oral surgery, but there is no general consensus on the preferred assessment methods. The aim of this review is to identify the available assessment methods and their usefulness. Methods: Electronic databases (Medline, Embase, and the Cochrane Database of Systematic Reviews) were searched, following PRISMA guidelines to identify relevant papers. Abstracts were reviewed, and studies meeting the inclusion criteria were included in the review. Papers were appraised against the Critical Appraisal Skills Programme (CASP) checklist and the Medical Education Research Study Quality Instrument (MERSQI) to assess their quality and to identify any bias in a systematic manner. The validity and reliability of each assessment method or tool were assessed. Results: A number of assessment methods were identified, including self-assessment, peer assessment, and direct observation of skills by a senior assessor. Senior assessment tended to be the preferred method, followed by self-assessment and, finally, peer assessment. The level of training affected the preferred assessment method, with one study finding peer assessment more useful for postgraduate than for undergraduate trainees.
Numerous assessment tools were identified, including checklist scales and global rating scales. Both had strengths and weaknesses, but the evidence was more favourable for global rating scales in terms of reliability, applicability to a wider range of clinical situations, and ease of use for examiners. Studies also examined trainees' opinions on assessment tools. Logbooks were not found to be significant in measuring trainee competence. Conclusion: There is limited literature exploring the methods and tools used to assess the competence of oral surgery trainees. Current evidence shows that the most favourable assessment method and tool may differ depending on the stage of training. More research is required in this area to streamline assessment methods and tools.
Keywords: competence, oral surgery, assessment, trainees, education
Procedia PDF Downloads 133
918 The Ideal Memory Substitute for Computer Memory Hierarchy
Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye
Abstract:
Computer system components such as the CPU, the controllers, and the operating system work together as a team, and storage, or memory, is an essential part of this team alongside the processor. The memory and storage system, comprising processor caches, main memory, and secondary storage, forms the basic storage component of a computer system. The characteristics of the different types of storage are inherent in the design and the technology employed in manufacturing. These characteristics define the speed, compatibility, cost, volatility, and density of the various storage types. Most computers rely on a hierarchy of storage devices for performance, so the effective and efficient use of the memory hierarchy is one of the most important aspects of computer system design and use. The memory hierarchy is becoming a fundamental performance and energy bottleneck, due to the widening gap between the increasing demands of modern computer applications and the limited performance and energy efficiency provided by traditional memory technologies. Despite dramatic developments in computer systems, storage has had a difficult time keeping up with processor speed. Computer architects therefore face constant challenges in developing high-speed, high-performance storage that is energy-efficient, cost-effective, and reliable enough to intercept processor requests. It is clear that substantial advances in redesigning existing physical and logical memory structures are needed to keep pace with processor potential. This research work investigates the importance of the computer memory (storage) hierarchy in the design of computer systems. The constituent storage types of today's hierarchy were investigated, looking at the design technologies and how they affect the memory characteristics: speed, density, stability, and cost.
The investigation considered how these characteristics could best be harnessed for the overall efficiency of the computer system. The research concluded that the best single type of storage, which we refer to as the ideal memory, is a logical single physical memory combining the best attributes of each type in the hierarchy: access speed as high as that of CPU registers, the highest storage capacity, excellent stability with or without power (as found in magnetic and optical disks, as against volatile DRAM), and yet a cost far below that of expensive SRAM. The research suggests that overcoming these barriers may require memory manufacturing to deviate entirely from present technologies and adopt one that overcomes the challenges associated with traditional memory technologies.
Keywords: cache, memory-hierarchy, memory, registers, storage
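The performance cost of traversing the hierarchy discussed above is commonly summarized by the average memory access time (AMAT). A minimal sketch with invented latencies and miss rates (not measurements from the paper):

```python
def amat(levels, memory_latency):
    """Average memory access time for a cache hierarchy.

    levels: list of (hit_time_cycles, miss_rate) ordered fastest level first.
    memory_latency: cycles to reach main memory on a last-level miss.
    """
    # Work from main memory back up: each level contributes its hit time
    # plus, on a miss, the average cost of the next level down.
    cost = memory_latency
    for hit_time, miss_rate in reversed(levels):
        cost = hit_time + miss_rate * cost
    return cost

# Hypothetical three-level hierarchy (L1, L2, L3 caches) over DRAM:
# L3: 40 + 0.1*200 = 60; L2: 10 + 0.2*60 = 22; L1: 1 + 0.05*22 = 2.1 cycles.
t = amat([(1, 0.05), (10, 0.2), (40, 0.1)], memory_latency=200)
```

The calculation makes the paper's point concrete: with realistic miss rates, a fast upper level hides most of the latency of the slow, dense levels beneath it, which is exactly the trade-off an "ideal memory" would eliminate.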
Procedia PDF Downloads 161
917 Performance Assessment of the Gold Coast Desalination Plant Offshore Multiport Brine Diffuser during ‘Hot Standby’ Operation
Authors: M. J. Baum, B. Gibbes, A. Grinham, S. Albert, D. Gale, P. Fisher
Abstract:
Alongside the rapid expansion of Seawater Reverse Osmosis technologies there is a concurrent increase in the production of hypersaline brine by-products. To minimize environmental impact, these by-products are commonly disposed of into open-coastal environments via submerged diffuser systems as inclined dense jet outfalls. Despite the widespread implementation of this approach, diffuser designs are typically based on small-scale laboratory experiments under idealized quiescent conditions; studies of diffuser performance in the field are limited. A set of experiments was conducted to assess the near-field characteristics of brine disposal at the Gold Coast Desalination Plant offshore multiport diffuser. The aim of the field experiments was to determine the trajectory and dilution characteristics of the plume under various discharge configurations, with production ranging from 66% to 100% of plant operating capacity. The field monitoring system employed an unprecedented static array of temperature and electrical conductivity sensors in a three-dimensional grid surrounding a single diffuser port. Complementing these measurements, Acoustic Doppler Current Profilers were deployed to record current variability over the depth of the water column and wave characteristics. The recorded data suggested the open-coastal environment was highly active over the experimental duration, with ambient velocities ranging from 0.0 to 0.5 m∙s-1 and considerable variability over the depth of the water column. Variations in background electrical conductivity corresponding to salinity fluctuations of ± 1.7 g∙kg-1 were also observed. Increases in salinity were detected during plant operation and appeared most pronounced 10 – 30 m from the diffuser, consistent with trajectory predictions in the existing literature. Plume trajectories and the respective dilutions extrapolated from the salinity data are compared with empirical scaling arguments.
Discharge properties were found to correlate adequately with modelling projections. Temporal and spatial variations in background processes, and their influence on discharge outcomes, are discussed with a view to incorporating the influence of waves and ambient currents in future brine outfall designs.
Keywords: brine disposal, desalination, field study, negatively buoyant discharge
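Empirical scaling arguments for inclined dense jets typically non-dimensionalize dilution by the port densimetric Froude number. A minimal sketch, where the port diameter, jet velocity, densities, and the dilution coefficient are invented placeholders rather than the plant's values:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def densimetric_froude(u_jet, d_port, rho_brine, rho_ambient):
    """Port densimetric Froude number Fr = u / sqrt(g' d),
    with reduced gravity g' = g * (rho_brine - rho_ambient) / rho_ambient."""
    g_prime = G * (rho_brine - rho_ambient) / rho_ambient
    return u_jet / math.sqrt(g_prime * d_port)

# Hypothetical port conditions for an inclined dense jet.
fr = densimetric_froude(u_jet=4.0, d_port=0.25,
                        rho_brine=1052.0, rho_ambient=1024.0)

# Near-field dilution for inclined dense jets is often taken to scale
# linearly with Fr: S ~ k * Fr, with k an empirically fitted coefficient
# that depends on the jet angle and the measurement location.
k = 2.6  # illustrative value only
dilution_estimate = k * fr
```

Comparisons like the one the abstract describes amount to checking whether the field-measured dilutions fall near the k*Fr line fitted from laboratory data, under the quiescent-ambient assumption the authors question.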
Procedia PDF Downloads 237