Search results for: fifth grade science
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3739


469 Exploring the Synergistic Effects of Aerobic Exercise and Cinnamon Extract on Metabolic Markers in Insulin-Resistant Rats through Advanced Machine Learning and Deep Learning Techniques

Authors: Masoomeh Alsadat Mirshafaei

Abstract:

The present study aims to explore the effect of an 8-week aerobic training regimen combined with cinnamon extract on serum irisin and leptin levels in insulin-resistant rats. Additionally, this research leverages various machine learning (ML) and deep learning (DL) algorithms to model the complex interdependencies between exercise, nutrition, and metabolic markers, offering a groundbreaking approach to obesity and diabetes research. Forty-eight Wistar rats were selected and randomly divided into four groups: control, training, cinnamon, and training + cinnamon. The training protocol was conducted over 8 weeks, with sessions 5 days a week at 75-80% VO2 max. The cinnamon and training + cinnamon groups were injected with 200 ml/kg/day of cinnamon extract. Data analysis included serum data, dietary intake, exercise intensity, and metabolic response variables, with blood samples collected 72 hours after the final training session. The dataset was analyzed using one-way ANOVA (P<0.05) and fed into various ML and DL models, including Support Vector Machines (SVM), Random Forest (RF), and Convolutional Neural Networks (CNN). Traditional statistical methods indicated that aerobic training, with and without cinnamon extract, significantly increased serum irisin and decreased leptin levels. Among the algorithms, the CNN model provided superior performance in identifying specific interactions between cinnamon extract concentration and exercise intensity, optimizing the increase in irisin and the decrease in leptin. The CNN model achieved an accuracy of 92%, outperforming the SVM (85%) and RF (88%) models in predicting the optimal conditions for metabolic marker improvements. The study demonstrated that advanced ML and DL techniques can uncover nuanced relationships and potential cellular responses to exercise and dietary supplements that are not evident through traditional methods.
These findings advocate for the integration of advanced analytical techniques in nutritional science and exercise physiology, paving the way for personalized health interventions in managing obesity and diabetes.
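A minimal sketch of the model-comparison step above, assuming only that each model's predictions are scored against true labels by plain accuracy; the labels, predictions, and the `best_model` helper are illustrative stand-ins, not the study's CNN/SVM/RF pipeline or its serum data:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def best_model(results):
    """Name of the model with the highest accuracy."""
    return max(results, key=results.get)

# Illustrative labels (1 = improved metabolic markers) and predictions,
# chosen so the ranking mirrors the reported ordering (CNN > RF > SVM).
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
preds = {
    "SVM": [1, 0, 0, 1, 0, 1, 1, 1, 0, 0],   # 7/10 correct
    "RF":  [1, 0, 1, 1, 0, 0, 1, 1, 1, 0],   # 8/10 correct
    "CNN": [1, 0, 1, 1, 0, 1, 0, 1, 0, 0],   # 9/10 correct
}
results = {name: accuracy(y_true, p) for name, p in preds.items()}
```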

Keywords: aerobic training, cinnamon extract, insulin resistance, irisin, leptin, convolutional neural networks, exercise physiology, support vector machines, random forest

Procedia PDF Downloads 39
468 Empirical Studies of Indigenous Career Choice in Taiwan

Authors: Zichun Chu

Abstract:

The issue of tribal poverty has always attracted attention. Due to social and economic difficulties, indigenous people's personal development and tribal development have been greatly restricted. Past studies have pointed out that poverty may come from a lack of education. The United Nations Sustainable Development Goals (SDGs) also state that providing education widely is an important key to solving the poverty problem. According to the theory of intellectual capital adaptation, "being capable" and "willing to do" are the keys to development. Therefore, the "ability" and "will" of tribal residents are the core concerns of tribal development. This research was designed to investigate the career choice development model of indigenous tribe people by investigating the current status of human capital, social capital, and cultural capital of tribal residents. This study collected 327 questionnaires (70% of total households) from the Truku tribe to answer the research question: did education help them make job decisions, viewed from the aspects of human capital, social capital, and cultural capital in the tribal context? This project adopted a 'single tribal research approach' to gain an in-depth understanding of the human capital formed under the unique culture of the tribe (Truku tribe). The results show that the education level of most research participants was high school, and very few high school graduates chose to further their education to the college level; due to the lack of education of their parents, their social capital offered limited support for job choice, and most of them work in the labor and service industries; however, their cultural capital was comparatively rich for work, as the sharing culture of Taiwanese indigenous people made their work status stable. The results suggest that more emphasis should be placed on the development of vocational education based on the tribe's location and resources.
The self-advocacy of indigenous people should be developed so that they gain more power in making career decisions. This research project is part of a pilot project called "INDIGENOUS PEOPLES, POVERTY, AND DEVELOPMENT," sponsored by the National Science and Technology Council of Taiwan. If this paper is accepted for presentation at the 2023 ICIP, a panel formed with the co-researchers (Chuanju Cheng, Chih-Yuan Weng, and YiXuan Chen) would give the audience a full picture of this pilot project.

Keywords: career choices, career model, indigenous career development, indigenous education, tribe

Procedia PDF Downloads 82
467 Transdermal Delivery of Sodium Diclofenac from Palm Kernel Oil Esters Nanoemulsions

Authors: Malahat Rezaee, Mahiran Basri, Abu Bakar Salleh, Raja Noor Zaliha Raja Abdul Rahman

Abstract:

Sodium diclofenac is one of the most commonly used nonsteroidal anti-inflammatory drugs (NSAIDs). It is especially effective in controlling severe conditions of inflammation and pain, musculoskeletal disorders, arthritis, and dysmenorrhea. Formulation as nanoemulsions is one of the nanoscience approaches that has been progressively considered in pharmaceutical science for transdermal delivery of drugs. Nanoemulsions are a type of emulsion with particle sizes ranging from 20 nm to 200 nm. An emulsion is formed by the dispersion of one liquid (usually the oil phase) in another immiscible liquid (the water phase), stabilized using a surfactant. Palm kernel oil esters (PKOEs), in comparison to other oils, contain higher amounts of shorter-chain esters, which are suitable for application in micro- and nanoemulsion systems as a carrier for actives, with excellent wetting behavior and without an oily feeling. This research aimed to study the effect of terpene type and concentration on sodium diclofenac permeation from palm kernel oil ester nanoemulsions and on the physicochemical properties of the nanoemulsion systems. The effects of the terpenes geraniol, menthone, menthol, cineol, and nerolidol at concentrations of 0.5, 1.0, 2.0, and 4.0% on the permeation of sodium diclofenac were evaluated using Franz diffusion cells with rat skin as the permeation membrane. The results demonstrated that all terpenes promoted sodium diclofenac penetration. However, menthol and menthone at all concentrations showed significant effects (P<0.05) on drug permeation. The most outstanding terpene was menthol, with the most significant effect on skin permeability of sodium diclofenac. The effect of terpenes on the physicochemical properties of the nanoemulsion systems was investigated through the parameters of particle size, zeta potential, pH, viscosity, and electrical conductivity.
The results showed that all terpenes had a significant effect on particle size and non-significant effects on the zeta potential of the nanoemulsion systems. The effect of terpenes on pH was significant, excluding menthone at concentrations of 0.5 and 1.0%, and cineol and nerolidol at a concentration of 2.0%. Terpenes also had a significant effect on the viscosity of the nanoemulsions, with the exception of menthone and cineol at a concentration of 0.5%. Conductivity measurements showed that all terpenes at all concentrations, except cineol at a concentration of 0.5%, had a significant effect on electrical conductivity.
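Penetration enhancers such as terpenes are conventionally compared through an enhancement ratio (drug flux with enhancer divided by control flux). The abstract does not report flux values, so the numbers below are hypothetical and only illustrate the calculation:

```python
def enhancement_ratio(flux_with_terpene, flux_control):
    """Ratio of drug flux with enhancer to flux without it."""
    return flux_with_terpene / flux_control

# Hypothetical steady-state fluxes (ug/cm^2/h) across rat skin.
control_flux = 14.0
fluxes = {"menthol": 42.0, "menthone": 35.0, "cineol": 21.0}
er = {t: enhancement_ratio(f, control_flux) for t, f in fluxes.items()}
```

The invented numbers simply echo the reported ordering, with menthol as the strongest enhancer.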

Keywords: nanoemulsions, palm kernel oil esters, sodium diclofenac, terpenes, skin permeation

Procedia PDF Downloads 421
466 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that is not masked by an allele of a potential contributor and is considered an artefact presumed to arise from miscopying or slippage during the PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, which is calculated relative to the corresponding parent allele height. Analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and significantly enhance the use of continuous peak height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology to distinguish between stutters and real alleles is essential for the accuracy of the interpretation. Sensibly, any such method has to be able to focus on modelling stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in clustering and classification of data and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is reflected by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult, even though the calculations are relatively simple.
Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, which is an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of choosing the number of components. The Chinese restaurant process (CRP), the stick-breaking process, and the Pólya urn scheme are frequently used representations of the Dirichlet process prior in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
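The Chinese restaurant process mentioned above is easy to sketch: item i joins an existing table with probability proportional to its occupancy, or opens a new table with probability proportional to the concentration parameter alpha. A generic CRP sampler (not the authors' modified version) might look like:

```python
import random

def chinese_restaurant_process(n, alpha, seed=0):
    """Sample a partition of n items via the CRP with concentration alpha.

    Item i joins existing table k with probability tables[k] / (i + alpha)
    and opens a new table with probability alpha / (i + alpha).
    Returns the list of table assignments (0-indexed)."""
    rng = random.Random(seed)
    tables = []          # tables[k] = current occupancy of table k
    assignments = []
    for i in range(n):
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        k = len(tables)  # default: open a new table
        for j, size in enumerate(tables):
            acc += size
            if r < acc:
                k = j
                break
        if k == len(tables):
            tables.append(1)
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments

parts = chinese_restaurant_process(50, alpha=1.0)
```

Larger alpha tends to produce more clusters; in an infinite mixture of regressions, each table would index one regression component.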

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 330
465 Demographic Determinants of Spatial Patterns of Urban Crime

Authors: Natalia Sypion-Dutkowska

Abstract:

The main research objective of the paper is to discover the relationship between the age groups of residents and crime in particular districts of a large city. The basic analytical tool is specific crime rates, calculated not in relation to the total population but for age groups in different social situations - property, housing, work - and representing different generations with different behavior patterns. These are the communities from which criminals and victims of crimes come. The analysis of literature and national police reports gives rise to hypotheses about the ability of a given age group to generate crime, as a source of offenders and as a group of victims. These specific indicators are spatially differentiated, which makes it possible to detect socio-demographic determinants of spatial patterns of urban crime. A multi-feature classification of districts was also carried out, in which specific crime rates are the diagnostic features. In this way, areas with a similar structure of socio-demographic determinants of spatial patterns of urban crime were designated. The case study is the city of Szczecin in Poland. It has about 400,000 inhabitants, and its area is about 300 sq km. Szczecin is located in the immediate vicinity of Germany and is the economic, academic, and cultural capital of the region. It also has a seaport and an airport. Moreover, according to ESPON 2007, Szczecin is a Transnational and National Functional Urban Area. Szczecin is divided into 37 districts - auxiliary administrative units of the municipal government. The population of each of them in 2015-17 was divided into 8 age groups: babies (0-2 yrs.), children (3-11 yrs.), teens (12-17 yrs.), younger adults (18-30 yrs.), middle-age adults (31-45 yrs.), older adults (46-65 yrs.), early older (66-80 yrs.), and late older (from 81 yrs.).
The crimes reported in 2015-17 in each of the districts were divided into 10 groups: fights and beatings, other theft, car theft, robbery offenses, burglary into an apartment, break-in into a commercial facility, car break-in, break-in into other facilities, drug offenses, and property damage. In total, 80 specific crime rates were calculated for each of the districts. The analysis was carried out on an intra-city scale; this is a novel approach, as this type of analysis is usually carried out at the national or regional level. Another innovative research approach is the use of specific crime rates in relation to age groups instead of standard crime rates. Acknowledgments: This research was funded by the National Science Centre, Poland, registration number 2019/35/D/HS4/02942.
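A specific crime rate of the kind described above relates crimes in a district to the population of one age group rather than to the total population. A minimal sketch with hypothetical figures (not the Szczecin data):

```python
def specific_rate(crimes, group_population, per=1000):
    """Crimes per `per` residents of one age group (not the whole district)."""
    return crimes / group_population * per

# Hypothetical district: the same burglary count related to two age groups.
group_population = {"younger_adults": 8000,   # 18-30 yrs.
                    "older_adults": 12000}    # 46-65 yrs.
burglaries = 24
rates = {g: specific_rate(burglaries, n) for g, n in group_population.items()}
```

With 8 age groups and 10 crime groups, one such rate per combination yields the 80 indicators per district mentioned above.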

Keywords: age groups, determinants of crime, spatial crime pattern, urban crime

Procedia PDF Downloads 171
464 Intersubjectivity of Forensic Handwriting Analysis

Authors: Marta Nawrocka

Abstract:

In each legal proceeding in which expert evidence is presented, a major concern is the assessment of the evidential value of expert reports. Judicial institutions, while making decisions, rely heavily on expert reports, because they usually do not possess the 'special knowledge' from certain fields of science that would allow them to verify the results presented. In handwriting studies, standards of analysis have been developed. They unify the procedures used by experts in comparing signs and in constructing expert reports. However, the methods used by experts are usually of a qualitative nature. They rely on the knowledge and experience of the expert and, in effect, leave a significant margin of discretion in the assessment. Moreover, the standards used by experts are still not very precise, and the process of reaching conclusions is poorly understood. The above-mentioned circumstances indicate that expert opinions in the field of handwriting analysis may, for many reasons, not be sufficiently reliable. It is assumed that this state of affairs has its source in a very low level of intersubjectivity of the measuring scales and analysis procedures that constitute elements of this kind of analysis. Intersubjectivity is a feature of cognition which (in relation to methods) indicates the degree of consistency of results that different people obtain using the same method. The higher the level of intersubjectivity, the more reliable and credible the method can be considered. The aim of the conducted research was to determine the degree of intersubjectivity of the methods used by experts in handwriting analysis. 30 experts took part in the study, and each of them received two signatures, with varying degrees of readability, for analysis.
Their task was to distinguish graphic characteristics in the signature, estimate the evidential value of the found characteristics, and estimate the evidential value of the signature. The obtained results were compared with each other using Krippendorff's alpha statistic, which numerically determines the degree of agreement between the results (assessments) that different people obtain under the same conditions using the same method. Estimating the degree of agreement between the experts' results for each of these tasks made it possible to determine the degree of intersubjectivity of the studied method. The study showed that, during the analysis, the experts identified different signature characteristics and attributed different evidential value to them. In this scope, intersubjectivity turned out to be low. In addition, it turned out that experts named and described the same characteristics in various ways, and the language used was often inconsistent and imprecise. Thus, significant differences were noted in the language and nomenclature applied. On the other hand, experts attributed a similar evidential value to the entire signature (the set of characteristics), which indicates that in this range they were relatively consistent.
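For nominal-scale judgments, Krippendorff's alpha can be computed from a coincidence matrix as 1 - Do/De (observed over expected disagreement). A compact, generic implementation (not the authors' exact protocol; the example ratings are invented) might look like:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    `units` is a list of rating lists, one per rated unit; only units
    with at least two ratings contribute to the coincidence matrix."""
    o = Counter()                        # coincidence matrix, keyed (c, k)
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue
        for i, j in permutations(range(m), 2):
            o[(ratings[i], ratings[j])] += 1.0 / (m - 1)
    n_c = Counter()                      # marginal totals per category
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    d_obs = sum(w for (c, k), w in o.items() if c != k) / n
    d_exp = sum(n_c[c] * n_c[k]
                for c, k in permutations(n_c, 2)) / (n * (n - 1))
    return 1.0 - d_obs / d_exp

# Example: three signatures, each assessed by several raters.
alpha = krippendorff_alpha_nominal(
    [["high", "high", "mid"], ["low", "low"], ["mid", "mid", "mid"]])
```

Alpha is 1 under perfect agreement and drops toward (and below) 0 as raters diverge, which is how a low-intersubjectivity finding would appear numerically.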

Keywords: forensic sciences experts, handwriting analysis, inter-rater reliability, reliability of methods

Procedia PDF Downloads 149
463 Hierarchical Zeolites as Catalysts for Cyclohexene Epoxidation Reactions

Authors: Agnieszka Feliczak-Guzik, Paulina Szczyglewska, Izabela Nowak

Abstract:

Catalyst-assisted oxidation reactions are among the key reactions exploited by various industries. Carrying them out yields essential compounds and intermediates, such as alcohols, epoxides, aldehydes, ketones, and organic acids. Researchers are devoting more and more attention to developing active and selective materials that find application in many catalytic reactions, such as cyclohexene epoxidation. This reaction yields 1,2-epoxycyclohexane and 1,2-diols as the main products. These compounds are widely used as intermediates in the perfume industry and in synthesizing drugs and lubricants. Hence, our research aimed to use hierarchical zeolites modified with transition metal ions, e.g., Nb, V, and Ta, in the epoxidation reaction of cyclohexene using microwave heating. Hierarchical zeolites are materials with secondary porosity, mainly in the mesoporous range, compared to microporous zeolites. In the course of the research, materials based on two commercial zeolites, with Faujasite (FAU) and Zeolite Socony Mobil-5 (ZSM-5) structures, were synthesized and characterized by various techniques, such as X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM), and low-temperature nitrogen adsorption/desorption isotherms. The materials obtained were then used in a cyclohexene epoxidation reaction, which was carried out as follows: catalyst (0.02 g), cyclohexene (0.1 cm3), acetonitrile (5 cm3), and dihydrogen peroxide (0.085 cm3) were placed in a suitable glass reaction vessel with a magnetic stirrer inside a microwave reactor. Reactions were carried out at 45 °C for 6 h (samples were taken every 1 h). The reaction mixtures were filtered to separate the liquid products from the solid catalyst and then transferred to 1.5 cm3 vials for chromatographic analysis.
The characterization techniques confirmed the acquisition of additional secondary porosity while preserving the structure of the commercial zeolite (XRD and low-temperature nitrogen adsorption/desorption isotherms). The results for the activity of the niobium-modified hierarchical catalyst in the cyclohexene epoxidation reaction indicate that the conversion of cyclohexene, after 6 h of running the process, is about 70%. The main product of the reaction was 1,2-cyclohexanediol (selectivity > 80%). In addition to this product, adipic acid, cyclohexanol, cyclohex-2-en-1-one, and 1,2-epoxycyclohexane were also obtained. Furthermore, in a blank test, no cyclohexene conversion was observed after 6 h of reaction. Acknowledgments: The work was carried out within the project "Advanced biocomposites for tomorrow's economy BIOG-NET," funded by the Foundation for Polish Science from the European Regional Development Fund (POIR.04.04.00-00-1792/18-00).
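Conversion and selectivity figures such as those reported above are routinely derived from chromatographic mole balances; a minimal sketch with made-up amounts, chosen only to echo the ~70% conversion and >80% diol selectivity:

```python
def conversion(n_in, n_out):
    """Percentage of substrate consumed."""
    return (n_in - n_out) / n_in * 100.0

def selectivity(n_product, n_all_products):
    """Percentage share of one product among all products formed."""
    return n_product / n_all_products * 100.0

# Invented mole balance (mmol), not the study's measurements.
cyclohexene_in, cyclohexene_out = 1.00, 0.30
products = {"diol": 0.58, "epoxide": 0.05, "other": 0.07}
conv = conversion(cyclohexene_in, cyclohexene_out)
sel_diol = selectivity(products["diol"], sum(products.values()))
```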

Keywords: epoxidation, oxidation reactions, hierarchical zeolites, synthesis

Procedia PDF Downloads 78
462 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates

Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe

Abstract:

Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with the proper treatment. Drug-resistant TB occurs when bacteria become resistant to the drugs that are used to treat TB. Current strategies to identify drug-resistant TB bacteria are laboratory-based, and it takes a long time to identify the drug-resistant bacteria and treat the patient accordingly. Machine learning (ML) and data science can offer new approaches to the problem. In this study, we propose to develop an ML-based model that predicts the antibiotic resistance phenotypes of TB isolates in minutes, so that the right treatment can be given to the patient immediately. The study used whole genome sequences (WGS) of TB isolates, extracted from the NCBI repository and containing samples from different countries, as training data to build the ML models. Samples from different countries were included in order to generalize across the large group of TB isolates from different regions of the world. This exposes the model to different behaviors of the TB bacteria and makes the model robust. Model training considered three pieces of information extracted from the WGS data: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and resistance-associated gene information for the particular drug. Two major datasets were constructed from this information: F1 and F2 were treated as two independent datasets, and the third piece of information was used as the class label for both. Five machine learning algorithms were considered to train the model: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost.
The models were trained on the datasets F1, F2, and F1F2, the latter being F1 and F2 merged. Additionally, an ensemble approach was used: the F1 and F2 datasets were each run through the gradient boosting algorithm, their outputs were combined into a single dataset, called the F1F2 ensemble dataset, and models were then trained on this dataset with the five algorithms. As the experiments show, the ensemble-approach model trained with the Gradient Boosting algorithm outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, that is, the RF + Gradient Boosting model, for predicting the antibiotic resistance phenotypes of TB isolates, as it outperformed the rest of the models.
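The stacking idea described above, feeding base-model outputs into a second-level learner, can be sketched with toy stand-ins; the threshold rules below are placeholders for the gradient-boosting base models and the meta-learner, and the per-isolate feature scores are invented:

```python
def threshold_model(features, threshold):
    """Toy base learner: predict resistant (1) when a score exceeds a cut-off."""
    return [1 if x > threshold else 0 for x in features]

def meta_model(pred_f1, pred_f2):
    """Toy second-level learner: resistant only when both base models agree."""
    return [1 if a == b == 1 else 0 for a, b in zip(pred_f1, pred_f2)]

# Invented per-isolate scores standing in for the F1 and F2 feature sets.
f1_scores = [0.2, 0.7, 0.9, 0.4]
f2_scores = [0.1, 0.8, 0.6, 0.9]
stacked = meta_model(threshold_model(f1_scores, 0.5),
                     threshold_model(f2_scores, 0.5))
```

In the study's pipeline, the base predictions would come from gradient boosting trained separately on F1 and F2, and the meta-learner would itself be one of the five trained algorithms.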

Keywords: machine learning, MTB, WGS, drug-resistant TB

Procedia PDF Downloads 52
461 Synthesis and Catalytic Activity of N-Heterocyclic Carbene Copper Catalysts Supported on Magnetic Nanoparticles

Authors: Iwona Misztalewska-Turkowicz, Agnieszka Z. Wilczewska, Karolina H. Markiewicz

Abstract:

Carbenes - species that possess a neutral carbon atom with two shared and two unshared valence electrons - are known for their high reactivity and instability. Nevertheless, it is also known that some carbenes, i.e., N-heterocyclic carbenes (NHCs), can form stable crystals. The usability of NHCs in organic synthesis has been studied. Due to their exceptional properties (high nucleophilicity), NHCs are commonly used as organocatalysts and also as ligands in transition metal complexes. NHC ligands possess better electron-donating properties than phosphines. Moreover, they exhibit lower toxicity. Due to these features, phosphines are frequently replaced by NHC ligands. This research discusses the synthesis of five-membered NHCs, which are mainly obtained by deprotonation of azolium salts, e.g., imidazolium or imidazolinium salts. Some of them are immobilized on a solid support, which leads to the formation of heterogeneous, recyclable catalysts. Magnetic nanoparticles (MNPs) are often used as a solid support for catalysts. MNPs can be easily separated from the reaction mixture using an external magnetic field. Due to their small size and high surface-to-volume ratio, they are a good choice for the immobilization of catalysts. Herein, the synthesis of N-heterocyclic carbene copper complexes directly on the surface of magnetic nanoparticles is presented. The formation of four different catalysts is discussed. They vary in copper oxidation state (Cu(I) and Cu(II)) and in the structure of the NHC ligand. The catalysts were tested in the Huisgen reaction, a type of copper-catalyzed azide-alkyne cycloaddition (CuAAC). The Huisgen reaction represents one of the few universal and highly efficient reactions in which 1,2,3-triazoles can be obtained. The catalytic activity of all synthesized catalysts was compared with the activity of commercially available ones. Different reaction conditions (solvent, temperature, addition of reductant) and the reusability of the obtained catalysts were investigated and are discussed.
The project was financially supported by National Science Centre, Poland, grant no. 2016/21/N/ST5/01316. Analyses were performed in Centre of Synthesis and Analyses BioNanoTechno of University of Bialystok. The equipment in the Centre of Synthesis and Analysis BioNanoTechno of University of Bialystok was funded by EU, as a part of the Operational Program Development of Eastern Poland 2007-2013, project: POPW.01.03.00-20-034/09-00 and POPW.01.03.00-20-004/11.

Keywords: N-heterocyclic carbenes, click reaction, magnetic nanoparticles, copper catalysts

Procedia PDF Downloads 157
460 The Usefulness of Premature Chromosome Condensation Scoring Module in Cell Response to Ionizing Radiation

Authors: K. Rawojć, J. Miszczyk, A. Możdżeń, A. Panek, J. Swakoń, M. Rydygier

Abstract:

Due to mitotic delay, a poor mitotic index, and the disappearance of lymphocytes from peripheral blood circulation, assessing DNA damage after high-dose exposure is less effective. Conventional chromosome aberration analysis and the cytokinesis-blocked micronucleus assay do not provide accurate dose estimation or radiosensitivity prediction at doses higher than 6.0 Gy. For this reason, there is a need to establish reliable methods allowing the analysis of biological effects after exposure in the high-dose range, i.e., during particle radiotherapy. Lately, Premature Chromosome Condensation (PCC) has become an important method in high-dose biodosimetry and a promising tool in the context of treatment of cancer patients. The aim of the study was to evaluate the usefulness of the drug-induced PCC scoring procedure in an experimental mode in which 100 G2/M cells were analyzed in different dose ranges. To test the consistency of the obtained results, scoring was performed by 3 independent persons in the same mode and following identical scoring criteria. Whole-body exposure was simulated in an in vitro experiment by irradiating whole blood collected from healthy donors with 60 MeV protons and 250 keV X-rays in the range of 4.0 - 20.0 Gy. The drug-induced PCC assay was performed on human peripheral blood lymphocytes (HPBL) isolated after in vitro exposure. Cells were cultured for 48 hours with PHA. Then, to achieve premature condensation, calyculin A was added. After Giemsa staining, chromosome spreads were photographed and manually analyzed by the scorers. The dose-effect curves were derived by counting the excess chromosome fragments. The results indicated adequate dose estimates for the whole-body exposure scenario in the high-dose range for both studied types of radiation. Moreover, the compared results revealed no significant differences between scorers, which has important implications for reducing the analysis time.
These investigations were conducted as part of an extended examination of 60 MeV protons from the AIC-144 isochronous cyclotron at the Institute of Nuclear Physics (IFJ PAN) in Kraków, Poland, by cytogenetic and molecular methods, and were partially supported by grant DEC-2013/09/D/NZ7/00324 from the National Science Centre, Poland.
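A dose-effect calibration of the kind described (excess fragments per cell versus dose) is often fitted by least squares and then inverted for dose estimation; the fragment counts below are invented for illustration, not the study's measurements, and a simple linear form is assumed:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

doses = [4.0, 8.0, 12.0, 16.0, 20.0]       # Gy
fragments = [2.1, 4.0, 6.2, 7.9, 10.1]     # invented mean excess fragments/cell
a, b = linear_fit(doses, fragments)

def estimate_dose(mean_fragments):
    """Invert the calibration curve to estimate an absorbed dose (Gy)."""
    return (mean_fragments - a) / b
```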

Keywords: cell response to radiation exposure, drug induced premature chromosome condensation, premature chromosome condensation procedure, proton therapy

Procedia PDF Downloads 352
459 Reaching New Levels: Using Systems Thinking to Analyse a Major Incident Investigation

Authors: Matthew J. I. Woolley, Gemma J. M. Read, Paul M. Salmon, Natassia Goode

Abstract:

The significance of high-consequence workplace failures within construction continues to resonate, with a combined average of 12 fatal incidents occurring daily throughout Australia, the United Kingdom, and the United States. Within the Australian construction domain, more than 35 serious, compensable injury incidents are reported daily. These alarming figures, in conjunction with the continued occurrence of fatal and serious occupational injury incidents globally, suggest existing approaches to incident analysis may not be achieving the required injury prevention outcomes. One reason may be that incident analysis methods used in construction have not kept pace with advances in the field of safety science and are not uncovering the full range of system-wide contributory factors required to achieve optimal levels of construction safety performance. Another reason underpinning this global issue may be the absence of information surrounding the construction operating and project delivery system. For example, it is not clear who shares the responsibility for construction safety in different contexts. To respond to this issue, a first-of-its-kind (to the authors' best knowledge) control structure model of the construction industry is presented and then used to analyse a fatal construction incident. The model was developed by applying and extending the Systems-Theoretic Accident Model and Processes (STAMP) method to hierarchically represent the actors, constraints, feedback mechanisms, and relationships involved in managing construction safety performance. The Causal Analysis based on Systems Theory (CAST) method was then used to identify the control and feedback failures involved in the fatal incident. The conclusions from the coronial investigation into the event are compared with the findings stemming from the CAST analysis.
The CAST analysis highlighted additional issues across the construction system that were not identified in the coroner's recommendations, suggesting there is a potential benefit in applying a systems theory approach to incident analysis in construction. The findings demonstrate the utility of applying systems theory-based methods to the analysis of construction incidents. Specifically, this study shows the utility of the construction control structure and the potential benefits for project leaders, construction entities, regulators, and construction clients in controlling construction safety performance.

Keywords: construction project management, construction performance, incident analysis, systems thinking

Procedia PDF Downloads 131
458 Effect of Bonded and Removable Retainers on Occlusal Settling after Orthodontic Treatment: A Systematic Review and Meta-Analysis

Authors: Umair Shoukat Ali, Kamil Zafar, Rashna Hoshang Sukhia, Mubassar Fida, Aqeel Ahmed

Abstract:

Objective: This systematic review and meta-analysis aimed to summarize the effectiveness of bonded and removable retainers (Hawley and Essix retainers) in terms of improvement in occlusal settling (occlusal contact points/areas) after orthodontic treatment. Search method: We searched the Cochrane Library, CINAHL Plus, PubMed, Web of Science, orthodontic journals, and Google Scholar for eligible studies. We included randomized controlled trials (RCTs) along with cohort studies. Studies that reported occlusal contacts/areas during retention with fixed bonded and removable retainers were included. To assess the quality of the RCTs, the Cochrane risk-of-bias tool was utilized, whereas the Newcastle-Ottawa Scale was used for assessing the quality of cohort studies. Data analysis: The data analysis was limited to reporting mean values of occlusal contact points/areas with different retention methods. Using RevMan software V.5.3, a meta-analysis was performed for all studies with quantitative data. For the computation of the summary effect, a random-effects model was utilized in cases of high heterogeneity. The I² statistic was used to assess heterogeneity among the selected studies. Results: We included 6 articles in our systematic review after scrutinizing 219 articles and excluding the rest based on duplication, titles, and objectives. We found significant differences between fixed and removable retainers in terms of occlusal settling within the included studies. The bonded retainer (BR) allowed faster and better posterior tooth settling compared to the Hawley retainer (HR). However, the HR showed good occlusal settling in the anterior dental arch. The Essix retainer showed a decrease in occlusal contact during the retention phase. The meta-analysis showed no statistically significant difference between the BR and removable retainers. Conclusions: The HR allowed better overall occlusal settling than the other retainers in comparison.
However, the BR allowed faster settling in the posterior teeth region. Overall, there are insufficient high-quality RCTs to provide additional evidence, and further high-quality RCTs are needed.
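For readers unfamiliar with the random-effects computation described above, the following is a minimal Python sketch of DerSimonian-Laird pooling together with the I² heterogeneity statistic. The function name and inputs are illustrative; tools such as RevMan implement this logic internally, and this sketch is not the review's actual analysis code.

```python
def random_effects_pool(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects model.

    effects: per-study mean differences; variances: their squared standard errors.
    Returns (pooled_effect, I_squared_percent).
    """
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the I^2 heterogeneity statistic
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    # DerSimonian-Laird between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    # random-effects weights and pooled estimate
    w_re = [1.0 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re), i2
```

When heterogeneity is high (large I²), the random-effects weights flatten toward equality, which is why the review reserves the random-effects model for that case.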

Keywords: orthodontic retainers, occlusal contact, Hawley, fixed, vacuum-formed

Procedia PDF Downloads 125
457 Potentiality of Litchi-Fodder Based Agroforestry System in Bangladesh

Authors: M. R. Zaman, M. S. Bari, M. Kajal

Abstract:

A field experiment was conducted at the Agroforestry and Environment Research Field, Hajee Mohammad Danesh Science and Technology University, Dinajpur, during 2013 to investigate the potentiality of three napier fodder varieties under a Litchi orchard. The experiment was laid out in a two-factor RCBD with 3 replications. Factor A comprised two production systems: S1 = Litchi + fodder and S2 = fodder (sole crop); factor B comprised three napier varieties: V1 = BARI Napier-1 (Bazra), V2 = BARI Napier-2 (Arusha) and V3 = BARI Napier-3 (Hybrid). The experimental results revealed significant variation among the varieties in terms of leaf growth and yield. The maximum number of leaves plant-1 was recorded in the variety Bazra (V1), whereas the minimum number was recorded in the hybrid variety (V3). The significantly highest yield (13.75, 14.53 and 14.84 t ha-1 at the 1st, 2nd and 3rd harvests, respectively) was also recorded in the variety Bazra, whereas the lowest (5.89, 6.36 and 9.11 t ha-1 at the 1st, 2nd and 3rd harvests, respectively) was in the hybrid variety. There were also significant differences between the two production systems. The maximum number of leaves plant-1 was recorded under the Litchi-based AGF system (S1), whereas the minimum was recorded under open conditions (S2). Similarly, the significantly highest yield of napier (12.00, 12.35 and 13.31 t ha-1 at the 1st, 2nd and 3rd harvests, respectively) was recorded under the Litchi-based AGF system, whereas the lowest (9.73, 10.47 and 11.66 t ha-1 at the 1st, 2nd and 3rd harvests, respectively) was recorded under open conditions, i.e., napier in sole cropping. Furthermore, the interaction of napier variety and production system also had a significant effect on growth and yield. The maximum number of leaves plant-1 was recorded under the Litchi-based AGF system with the Bazra variety, whereas the minimum was recorded under open conditions with the hybrid variety.
The highest yield (14.42, 16.14 and 16.15 t ha-1 at the 1st, 2nd and 3rd harvests, respectively) of napier was found under the Litchi-based AGF system with the Bazra variety. The significantly lowest yield (5.33, 5.79 and 8.48 t ha-1 at the 1st, 2nd and 3rd harvests, respectively) was found under open conditions, i.e., sole cropping with the hybrid variety. In terms of quality, the highest nutritive value (DM, ash, CP, CF, EE and NFE) was found in Bazra (V1) and the lowest in the hybrid variety (V3). Therefore, the suitability of napier production under the Litchi-based AGF system may be ranked as Bazra > Arusha > Hybrid. Finally, the economic analysis showed that the maximum BCR (5.20) was found in the Litchi-based AGF system, compared with sole cropping (BCR = 4.38). From these findings, it may be concluded that cultivating the Bazra napier variety on the floor of a Litchi orchard ensures higher revenue to farmers compared with sole cropping.

Keywords: potentiality, Litchi, fodder, agroforestry

Procedia PDF Downloads 323
456 Leadership Lessons from Female Executives in the South African Oil Industry

Authors: Anthea Carol Nefdt

Abstract:

In this article, observations are drawn from a number of interviews conducted with female executives in the South African oil industry in 2017. Globally, the oil industry represents one of the most male-dominated organisational structures and cultures in the business world. Some of the remarkable women who hold upper management positions have not only emerged from the science and finance spheres (equally gendered organisations) but have also navigated their way through an aggressive, patriarchal atmosphere of rivalry and competition. We examine various mythologies associated with the industry, such as the cowboy myth, the frontier ideology and the queen bee syndrome directed at female executives. One of the themes to emerge from the interviews was the almost unanimous rejection of the ‘glass ceiling’ metaphor favoured by some feminists. The women of the oil industry rather affirmed a picture of their rise to leadership positions through a strategic labyrinth of challenges and obstacles in terms of both gender and race. This article aims to share the insights of women leaders in a complex industry through both their reflections and a theoretical feminist lens. The study is located within the South African context and, given the country's historical legacy, an intersectional approach was optimal, allowing issues of race, gender, ethnicity and language to emerge. A qualitative research methodology was employed, along with a thematic interpretative analysis to analyse and interpret the data. This methodology was used precisely because it encourages and acknowledges the experiences women have and places these experiences at the centre of the research. Multiple methods of recruiting the research participants were utilised: the initial method was snowball sampling, and the second was purposive sampling.
In addition, semi-structured interviews gave the participants an opportunity to ask questions, add information and discuss issues or aspects of the research area that were of interest to them. One of the key objectives of the study was to investigate whether there is a difference in the leadership styles of men and women. Findings show that, contrary to much of the literature on the topic, some women do not perceive a significant difference between men's and women's leadership styles. However, other respondents felt that there were some important differences in the experiences of men and women superiors, although they hesitated to generalise from these experiences. Further findings suggest that although the oil industry provides unique challenges to women as a gendered organization, it also incorporates various progressive initiatives for their advancement.

Keywords: petroleum industry, gender, feminism, leadership

Procedia PDF Downloads 162
455 National Branding through Education: South Korean Image in Romania through the Language Textbooks for Foreigners

Authors: Raluca-Ioana Antonescu

Abstract:

This paper examines Korean public diplomacy and national branding strategies, and how Korean language textbooks have been used to construct the Korean national image. The research field of the paper stands at the intersection of Linguistics and Political Science, while the problem of the research is the role of language and culture in the national branding process. The research goal is to contribute to the literature situated at the intersection of International Relations and Applied Linguistics, while the objective is to conceptualize the idea of national branding by emphasizing a dimension that is not much discussed: education as an instrument of national branding and public diplomacy strategies. In order to examine the importance of language for national branding strategies, the paper answers one main question, How is the Korean language used in the construction of national branding?, and two secondary questions, How does the literature explore the relations between language and the construction of national branding? and What image of South Korea do the language textbooks for foreigners transmit? To answer the research questions, the paper starts from one main hypothesis, that language is an essential component of culture, used in the construction of national branding and influenced by traditional elements (like Confucianism) but also by modern elements (like Western influence), and from two secondary hypotheses: first, that the International Relations literature has little explored the connections between language and national branding; and second, that the South Korean image is constructed through the promotion of a traditional society, but also a modern one.
In terms of methodology, the paper analyzes the textbooks used in Romania at the universities that offer Korean language classes during the three-year B.A. program, following the dialogues, the descriptive texts and the additional texts about Korean culture. The analysis focuses on rank-status difference, the individual in relation to the collectivity, respect for harmony, and the image of the foreigner. The results of the research show that the South Korean image projected in the textbooks conveys Confucian values and does not emphasize the changes the society has undergone due to modernity and globalization. The Westernized aspect of Korean society is conveyed more in an informative way, about Korean international companies and Korean internal development (such as transport and other services), but the textbooks do not show the cultural changes the society underwent. Even though the paper uses the textbooks taught in Romania as its material, the findings could be applied at least to other European countries, since the textbooks are those issued by South Korean language schools, which other European countries also use.

Keywords: confucianism, modernism, national branding, public diplomacy, traditionalism

Procedia PDF Downloads 242
454 Evaluation of the Influence of Graphene Oxide on Spheroid and Monolayer Culture under Flow Conditions

Authors: A. Zuchowska, A. Buta, M. Mazurkiewicz-Pawlicka, A. Malolepszy, L. Stobinski, Z. Brzozka

Abstract:

In recent years, graphene-based materials have found more and more applications in biological science. As thin, tough, transparent and chemically resistant materials, they appear to be very good candidates for the production of implants and biosensors. Interest in graphene derivatives has also prompted initial research into the possibility of their application in cancer therapy. Currently, most analyses concern their potential use in photothermal therapy and as drug carriers. Moreover, the direct anticancer properties of graphene-based materials are also being tested. Nowadays, cytotoxicity studies are conducted on in vitro cell cultures in standard culture vessels (macroscale). However, in this type of cell culture, the cells grow on a synthetic surface under static conditions. For this reason, cell culture in the macroscale does not reflect the in vivo environment. Microfluidic systems, called Lab-on-a-chip, have been proposed as a solution for improving cytotoxicity analysis of new compounds. Here, we present the evaluation of the cytotoxic properties of graphene oxide (GO) on breast, liver and colon cancer cell lines in a microfluidic system in two spatial models (2D and 3D). Before cell introduction, the microchamber surfaces were modified by coating with fibronectin (2D, monolayer) and poly(vinyl alcohol) (3D, spheroids). After spheroid creation (3D) and cell attachment (2D, monolayer), the selected concentrations of GO were introduced into the microsystems. Monolayer and spheroid viability/proliferation were then checked for three days using the alamarBlue® assay and a standard microplate reader. Moreover, on every day of the culture, morphological changes of the cells were determined using microscopic analysis. Additionally, on the last day of the culture, differential staining using Calcein AM and propidium iodide was performed. We noted that GO influenced the viability of all tested cell lines in both the monolayer and spheroid arrangements.
We showed that GO caused a greater decrease in viability/proliferation for spheroids than for monolayers (observed for all tested cell lines). The higher cytotoxicity of GO in spheroid culture may be caused by the different geometry of the microchambers for the 2D and 3D cell cultures; GO was probably washed out of the flat microchambers used for 2D culture. Those results were also confirmed by differential staining. Comparing our results with studies conducted in the macroscale, we also showed that the cytotoxic properties of GO change depending on the cell culture conditions (static/flow).

Keywords: cytotoxicity, graphene oxide, monolayer, spheroid

Procedia PDF Downloads 125
453 Assessing the Influence of Station Density on Geostatistical Prediction of Groundwater Levels in a Semi-arid Watershed of Karnataka

Authors: Sakshi Dhumale, Madhushree C., Amba Shetty

Abstract:

The effect of station density on the geostatistical prediction of groundwater levels is of critical importance to ensure accurate and reliable predictions. Monitoring station density directly impacts the accuracy and reliability of geostatistical predictions by influencing the model's ability to capture localized variations and small-scale features in groundwater levels. This is particularly crucial in regions with complex hydrogeological conditions and significant spatial heterogeneity. Insufficient station density can result in larger prediction uncertainties, as the model may struggle to adequately represent the spatial variability and correlation patterns of the data. On the other hand, an optimal distribution of monitoring stations enables effective coverage of the study area and captures the spatial variability of groundwater levels more comprehensively. In this study, we investigate the effect of station density on the predictive performance of groundwater levels using the geostatistical technique of Ordinary Kriging. The research utilizes groundwater level data collected from 121 observation wells within the semi-arid Berambadi watershed, gathered over a six-year period (2010-2015) from the Indian Institute of Science (IISc), Bengaluru. The dataset is partitioned into seven subsets representing varying sampling densities, ranging from 15% (12 wells) to 100% (121 wells) of the total well network. The results obtained from different monitoring networks are compared against the existing groundwater monitoring network established by the Central Ground Water Board (CGWB). The findings of this study demonstrate that higher station densities significantly enhance the accuracy of geostatistical predictions for groundwater levels. The increased number of monitoring stations enables improved interpolation accuracy and captures finer-scale variations in groundwater levels. 
These results shed light on the relationship between station density and the geostatistical prediction of groundwater levels, emphasizing the importance of appropriate station densities to ensure accurate and reliable predictions. The insights gained from this study have practical implications for designing and optimizing monitoring networks, facilitating effective groundwater level assessments, and enabling sustainable management of groundwater resources.
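Ordinary Kriging, the interpolation technique used in the study, can be sketched in a few lines of NumPy. The exponential variogram and its parameters below are illustrative placeholders, not values fitted to the Berambadi data.

```python
import numpy as np

def ordinary_kriging(xy, z, x0, sill=1.0, rng=5000.0, nugget=0.0):
    """Predict a groundwater level at point x0 from observed wells.

    xy: (n, 2) well coordinates; z: (n,) observed levels; x0: (2,) target.
    Uses an assumed exponential semivariogram (sill/range/nugget are
    illustrative, not fitted values).
    """
    def gamma(h):  # exponential semivariogram
        return nugget + sill * (1.0 - np.exp(-3.0 * h / rng))

    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    # bordered kriging system: [Gamma 1; 1 0] [w; mu] = [gamma_0; 1]
    a = np.ones((n + 1, n + 1))
    a[:n, :n] = gamma(d)
    a[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - x0, axis=1))
    w = np.linalg.solve(a, b)[:n]   # weights sum to 1 by construction
    return float(w @ z)
```

Dropping wells from `xy`/`z` (as in the study's 15%-100% density subsets) changes the weights and, in general, degrades the prediction at unsampled locations.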

Keywords: station density, geostatistical prediction, groundwater levels, monitoring networks, interpolation accuracy, spatial variability

Procedia PDF Downloads 60
452 The Decline of Islamic Influence in the Global Geopolitics

Authors: M. S. Riyazulla

Abstract:

Since the dawn of the 21st century, there has been a perceptible decline in Islamic supremacy in world affairs, apart from the gradual waning of the amiable relations and relevance of Islamic countries in the international political arena. For a long time, Islamic countries have been marginalised by the superpowers in global conflicts. This was evident in the context of their recent invasions of, and interference in, Afghanistan, Syria, Iraq, and Libya. The leading international Islamic organizations, such as the Arab League, the Organisation of Islamic Cooperation, the Gulf Cooperation Council, and the Muslim World League, did not play any prominent role there in resolving the crises that ensued from exogenous and endogenous causes. Hence, there is a need for Islamic countries to create a credible international Islamic organization that could dictate its terms and shape a new Islamic world order. The prominent Islamic countries are divided along ideological and religious fault lines. Their concord is indispensable to enhance their image and improve their relations with other countries and communities. The massive boon of oil and gas could be synergistically utilised to exhibit their omnipotence and eminence through constructive means. The prevailing menace of Islamophobia could be abated through syncretic messages, discussions, and deliberations by sagacious Islamic scholars with other community leaders. Presently, as Muslims are at a crossroads, dynamic leadership could guide the agitated Muslim community onto a constructive path and herald political stability around the world. The present political disorder, chaos, and economic challenges necessitate a paradigm shift in the approach to worldly affairs. This could also be accomplished through advancement in science and technology, particularly space exploration, for peaceful purposes.
The Islamic world, in order to regain its lost preeminence, should rise to the occasion in promoting peace and tranquility in the world and should evolve rational and human-centric solutions to global disputes and concerns. As a splendid contribution to humanity and to amicable international relations, Islamic countries should devote their resources and scientific intellect to space exploration and should safely transport man from the Earth to the nearest and most accessible cosmic body, the Moon, within one hundred years, as mankind faces an existential threat on the planet.

Keywords: carboniferous period, Earth, extinction, fossil fuels, global leaders, Islamic glory, international order, life, marginalization, Moon, natural catastrophes

Procedia PDF Downloads 68
451 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigation are not keeping up with criminal developments. Therefore, criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. It has been observed that AI-based algorithms are highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. Researchers and other authorities have used the available data as evidence in court to convict a person. This research paper aims to develop a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK framework is implemented using the Java Agent Development Framework with Eclipse, a Postgres repository, and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute.
As a result of loading the agents, 5 percent of the time was lost, as the File Path Agent prescribed deleting 1,510, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
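The core check performed by a hash-set agent of this kind can be sketched as follows. The names and data structures are illustrative (the actual MADIK agents are implemented in Java); the sketch only shows the underlying idea of flagging files whose digest appears in a known hash set.

```python
import hashlib

def hash_set_agent(file_bytes_by_path, known_sha256):
    """Flag files whose SHA-256 digest appears in a known hash set.

    file_bytes_by_path: mapping of file path -> file contents (bytes).
    known_sha256: set of hex digests of known-relevant files.
    """
    flagged = []
    for path, data in file_bytes_by_path.items():
        digest = hashlib.sha256(data).hexdigest()
        if digest in known_sha256:
            flagged.append(path)
    return flagged
```

In a real deployment the agent would stream files from the forensic image rather than hold them in memory, but the set-membership test is the same.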

Keywords: artificial intelligence, computer science, criminal investigation, digital forensics

Procedia PDF Downloads 212
450 Automatic and High Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and propagate the behavior of a system, mathematical models are formulated. Parameter identification is used to adapt the coefficients to the underlying laws of science. For complex systems, this approach can be incomplete, and hence imprecise, and moreover too slow to compute efficiently. Therefore, such models might not be applicable to the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, and hence the system must be adapted manually. Therefore, an approach is described that generates models that overcome the aforementioned limitations by focusing not on physical laws, but on measured (sensor) data of real systems. The approach is more general since it generates models for any system, detached from the scientific background. Additionally, this approach can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables, and not only of single variables. This enables a far more precise and better representation of causal correlations. The basis and explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. In this way, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g.
time, energy consumption, operational costs or a mixture of them, subject to additional constraints. The proposed method has been successfully tested in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with an error of less than one percent. Moreover, the automatic identification of correlations was able to discover previously unknown relationships. To summarize, the approach described above is able to efficiently compute highly precise and real-time-adaptive data-based models for different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model and optimized according to the user's wishes. The proposed methods are illustrated with different examples.
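The idea of a regression that also identifies correlations of products of variables, rather than only of single variables, can be sketched as an ordinary least-squares fit over a feature set augmented with pairwise products. This is an illustrative simplification under stated assumptions, not the authors' actual algorithm.

```python
import numpy as np

def product_design(X):
    """Build the design matrix [1, x_1..x_m, x_i * x_j for i <= j]."""
    n, m = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(m)]
    cols += [X[:, i] * X[:, j] for i in range(m) for j in range(i, m)]
    return np.column_stack(cols)

def fit_with_products(X, y):
    """Least-squares fit including pairwise product terms.

    Unlike a purely linear fit, this can recover interactions such as
    y depending on x_1 * x_2.
    """
    coef, *_ = np.linalg.lstsq(product_design(X), y, rcond=None)
    return coef

def predict_with_products(X, coef):
    return product_design(X) @ coef
```

A truncated multivariate series expansion has exactly this form: constant, linear, and cross terms; higher orders would add further product columns.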

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 409
449 Comics as an Intermediary for Media Literacy Education

Authors: Ryan C. Zlomek

Abstract:

The value of using comics in the literacy classroom has been explored since the 1930s. At that point in time, researchers had begun to implement comics into daily lesson plans and, in some instances, had started the development process for comics-supported curricula. In the mid-1950s, this type of research was cut short due to the work of psychiatrist Fredric Wertham, whose research seemingly discovered a correlation between comic readership and juvenile delinquency. Since Wertham’s allegations, the comics medium has had a hard time finding its way back to education. Now, over fifty years later, the definition of literacy is in mid-transition as the world has become more visually oriented and students require the ability to interpret images as often as words. Through this transition, comics have found a place in the field of literacy education research as the shift moves from traditional print to multimodal and media literacies. Comics are now believed to be an effective resource in bridging the gap between these different types of literacies. This paper seeks to better understand what students learn from the process of reading comics and how those skills line up with the core principles of media literacy education in the United States. In the first section, comics are defined to determine the exact medium being examined, and the different conventions the medium utilizes are discussed. In the second section, the comics reading process is explored through a dissection of the ways a reader interacts with the page, panel, gutter, and different comic conventions found within a traditional graphic narrative. The concepts of intersubjective acts and visualization are attributed to the comics reading process as readers draw on real-world knowledge to decode meaning. In the next section, the learning processes that comics encourage are explored parallel to the core principles of media literacy education.
Each principle is explained and the extent to which comics can act as an intermediary for this type of education is theorized. In the final section, the author examines comics use in his computer science and technology classroom. He lays out different theories he utilizes from Scott McCloud’s text Understanding Comics and how he uses them to break down media literacy strategies with his students. The article concludes with examples of how comics has positively impacted classrooms around the United States. It is stated that integrating comics into the classroom will not solve all issues related to literacy education but, rather, that comics can be a powerful multimodal resource for educators looking for new mediums to explore with their students.

Keywords: comics, graphics novels, mass communication, media literacy, metacognition

Procedia PDF Downloads 299
448 The Economic Burden of Mental Disorders: A Systematic Review

Authors: Maria Klitgaard Christensen, Carmen Lim, Sukanta Saha, Danielle Cannon, Finley Prentis, Oleguer Plana-Ripoll, Natalie Momen, Kim Moesgaard Iburg, John J. McGrath

Abstract:

Introduction: About a third of the world’s population will develop a mental disorder over their lifetime. Having a mental disorder imposes a huge burden of health loss and cost on the individual, but also on society, because of treatment costs, production losses and caregivers’ costs. The objective of this study is to synthesize the international published literature on the economic burden of mental disorders. Methods: Systematic literature searches were conducted in the databases PubMed, Embase, Web of Science, EconLit, the NHS York Database and PsycINFO using key terms for cost and mental disorders. Searches were restricted to the period from 1980 until May 2019. The inclusion criteria were: (1) cost-of-illness studies or cost analyses, (2) diagnosis of at least one mental disorder, (3) samples based on the general population, and (4) outcomes in monetary units. 13,640 publications were screened by title/abstract, and 439 articles were full-text screened by at least two independent reviewers. 112 articles were included from the systematic searches and 31 articles from snowball searching, giving a total of 143 included articles. Results: Information about diagnosis, diagnostic criteria, sample size, age, sex, data sources, study perspective, study period, costing approach, cost categories, discount rate, production loss method and cost unit was extracted. The vast majority of the included studies were from Western countries, with only a few from Africa and South America. The disorder group most often investigated was mood disorders, followed by schizophrenia and neurotic disorders. The disorder group least examined was intellectual disabilities, followed by eating disorders. The preliminary results show substantial variety in the perspectives, methodologies, cost components and outcomes used in the included studies.
An online tool is under development, enabling the reader to explore the published information on costs by type of mental disorder, subgroup, country, methodology, and study quality. Discussion: This is the first systematic review synthesizing the economic costs of mental disorders worldwide. The paper will provide an important and comprehensive overview of the economic burden of mental disorders, and the output from this review will inform policymaking.

Keywords: cost-of-illness, health economics, mental disorders, systematic review

Procedia PDF Downloads 131
447 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer

Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi

Abstract:

Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL) of nominal 100 m depth, which can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and the characteristic length of the physical structure are of the same order of magnitude. Turbulence length scales are a measure of the average sizes of the energy-containing eddies and are widely estimated using two-point cross-correlation analysis, converting the temporal lag to a separation distance via Taylor’s hypothesis that the convection velocity is equal to the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally-stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, are largely dependent on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales show significant regional variation, whereas length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between the turbulence length scales and aerodynamic roughness length with those calculated using the autocorrelations and cross-correlations of field measurement velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, in contrast to the relationships derived from the similarity theory correlations in the ESDU models.
However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner-scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.
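The single-point estimation of an integral length scale discussed above (the autocorrelation of the fluctuating velocity integrated to its first zero crossing, then converted to a length via Taylor's frozen-turbulence hypothesis, L = U·T) can be sketched as follows. This is a simplified illustration, not the study's processing chain.

```python
import numpy as np

def integral_length_scale(u, dt, mean_velocity):
    """Estimate the integral length scale from a velocity time series.

    Integrates the autocorrelation of the fluctuating velocity up to its
    first zero crossing (integral time scale T), then applies Taylor's
    hypothesis: L = U * T.
    """
    up = u - np.mean(u)                     # fluctuating component
    var = np.mean(up * up)
    t_int, r_prev, k = 0.0, 1.0, 1
    while k < len(u):
        r = np.mean(up[:-k] * up[k:]) / var  # autocorrelation at lag k*dt
        if r <= 0.0:                         # stop at first zero crossing
            break
        t_int += 0.5 * (r_prev + r) * dt     # trapezoidal integration
        r_prev, k = r, k + 1
    return mean_velocity * t_int
```

For a record dominated by a single wavelength, the estimate reduces to a known closed form, which makes the sketch easy to sanity-check against synthetic data.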

Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales

Procedia PDF Downloads 124
446 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data

Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton

Abstract:

The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, its processes must first be digitized: the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and the type of contribution to the research area. Upon analyzing the 54 papers identified in this area, it was noted that 23 originated in Germany. This is unsurprising, as Industry 4.0 began as a German strategy and is supported by strong policy instruments in Germany. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing institution, with three papers published. The literature suggested future research directions and highlighted one specific gap in the area: an unresolved gap between data science experts and manufacturing process experts in industry. Data analytics expertise is of little use unless the manufacturing process information is also utilized. 
A legitimate understanding of the data is crucial to perform accurate analytics and gain true, valuable insights into the manufacturing process. A gap lies between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test whether this gap exists, the researcher initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 contributed a framework, another 12 were based on a case study, and 11 focused on theory. However, only three papers contributed a methodology. This provides further evidence of the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.

Keywords: analytics, digitization, industry 4.0, manufacturing

Procedia PDF Downloads 111
445 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures

Authors: Rui Teixeira, Alan O’Connor, Maria Nogal

Abstract:

The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology. The POT methodology allows for a more refined fit of the statistical distribution by truncating the data at a predefined minimum threshold u. For mathematically approximating the tail of the empirical statistical distribution, the Generalised Pareto distribution is widely used. However, in the case of exceedances of significant wave data (H_s), the two-parameter Weibull distribution and the Exponential distribution, the latter a special case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto distribution, despite the existence of practical cases where it is applied, is not completely recognized as the adequate solution to model exceedances over a certain threshold u, and references that treat it as a secondary solution in the case of significant wave data can be identified in the literature. In this framework, the current study tackles the discussion of the application of statistical models to characterize exceedances of wave data. Comparisons of the Generalised Pareto, the two-parameter Weibull and the Exponential distribution are presented for different values of the threshold u. 
Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case, one of the statistical models mentioned fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other variables of the fit, such as the number of points and the estimation of the model parameters, were analyzed and the respective conclusions drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proved to be, in the present case, a highly non-linear task and, due to its growing importance, should be addressed carefully for an efficient estimation of very-low-occurrence events.
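The model comparison described above can be sketched in a few lines. The sketch below uses synthetic data (the Irish buoy records are not reproduced here, and the Weibull shape/scale values are assumptions for illustration) and compares the three candidate tail models by log-likelihood, a simple stand-in for the goodness-of-fit checks discussed in the abstract.

```python
import numpy as np
from scipy import stats

# Synthetic significant wave height record (metres); shape and scale are
# illustrative assumptions, not values fitted to buoy data.
rng = np.random.default_rng(42)
hs = stats.weibull_min.rvs(c=1.6, scale=2.0, size=50_000, random_state=rng)

u = np.quantile(hs, 0.95)      # threshold u: here the 95th percentile
exc = hs[hs > u] - u           # peaks-over-threshold exceedances

# Candidate tail models, fitted with the location parameter fixed at zero
candidates = {
    "Generalised Pareto": stats.genpareto,
    "Exponential":        stats.expon,
    "2-param Weibull":    stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(exc, floc=0)
    loglik = float(np.sum(dist.logpdf(exc, *params)))
    print(f"{name:18s} log-likelihood = {loglik:8.1f}")
```

Repeating this fit over a range of thresholds u and tracking the stability of the estimated parameters is the usual diagnostic for threshold selection, and reflects the threshold sensitivity the authors report.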

Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data

Procedia PDF Downloads 273
444 A Case Study on Experiences of Clinical Preceptors in the Undergraduate Nursing Program

Authors: Jacqueline M. Dias, Amina A Khowaja

Abstract:

Clinical education is one of the most important components of a nursing curriculum, as it develops students’ cognitive, psychomotor and affective skills. Clinical teaching ensures the integration of knowledge into practice. As student numbers in nursing increase, coupled with the faculty shortage, clinical preceptors are the best choice to ensure student learning in clinical settings. The clinical preceptor role has been introduced in the undergraduate nursing programme; in Pakistan, this role emerged due to a faculty shortage, and initially two clinical preceptors were hired. This study explores clinical preceptors’ views and experiences of precepting Bachelor of Science in Nursing (BScN) students in an undergraduate program. A case study design was used. As case studies explore a single unit of study, such as a person or a very small number of subjects, the two clinical preceptors were fundamental to the study and served as a single case. Qualitative data were obtained through an iterative process using in-depth interviews and written accounts from reflective journals kept by the clinical preceptors. The findings revealed that the clinical preceptors were dedicated to their roles and responsibilities. Another key finding was that the clinical preceptors’ prior knowledge and clinical experience were valuable assets in performing their role effectively. The clinical preceptors found their new role innovative and challenging, yet stressful at the same time. Findings also revealed that in the clinical agencies there were unclear expectations and role ambiguity. Furthermore, the clinical preceptors had difficulty integrating theory into practice in the clinical area and in giving feedback to the students. Although this study is localized to one university, generalizations can be drawn from the results. The key findings indicate that the role of a clinical preceptor is demanding and stressful. 
Clinical preceptors need preparation prior to precepting students on clinicals, and institutional support is fundamental for their acceptance. This paper focuses on the views and experiences of clinical preceptors undertaking a newly established role and resonates with the literature. The following recommendations are drawn to strengthen the role of the clinical preceptors: a structured program for clinical preceptors is needed, along with mentorship; clinical preceptors should be provided with formal training in teaching and learning, with emphasis on clinical teaching and giving feedback to students; and, to improve the integration of theory into practice, clinical modules should be provided ahead of the clinical placement. In spite of all the challenges, ten more clinical preceptors have been hired as the faculty shortage continues to persist.

Keywords: baccalaureate nursing education, clinical education, clinical preceptors, nursing curriculum

Procedia PDF Downloads 174
443 The Risk of Prioritizing Management over Education at Japanese Universities

Authors: Masanori Kimura

Abstract:

Due to the decline of the 18-year-old population, Japanese universities tend to convert the form of employment for newly hired teachers from tenured positions to fixed-term positions. The advantage of this is that universities can be more flexible in their employment plans if they fail to fill their enrollment quotas of prospective students, or if they need to add teachers in other academic fields or research areas where new demand is expected. The most serious disadvantage, however, is that if secure positions cannot be provided to faculty members, coherence of education and continuity of university-supported research may not be achieved. Therefore, the question of this presentation is as follows: are universities aiming to give first priority to management, or are they trying to prioritize education and research over management? To answer this question, the author examined the number of job offerings for college foreign language teachers posted on the JREC-IN (Japan Research Career Information Network, run by the Japan Science and Technology Agency) website from April 2012 to October 2015. The results show that there were 1,002 and 1,056 job offerings for tenured positions and fixed-term contracts respectively, suggesting that, overall, today’s Japanese universities tend to give first priority to management. More detailed examination of the data, however, shows that this tendency varies slightly depending on the type of university. 
National universities, which are supported by the central government, and state universities, which are supported by local governments, posted more job offerings for tenured positions than for fixed-term contracts: national universities posted 285 and 257 job offerings for tenured positions and fixed-term contracts respectively, and state universities posted 106 and 86 respectively. Yet the difference in number between the two types of employment status at national and state universities is marginal. As for private universities, they posted 713 job offerings for fixed-term contracts and 616 for tenured positions. Moreover, 73% of the fixed-term contracts were offered for lower-rank positions, including associate professors, lecturers, and so forth. Generally speaking, those positions are offered to younger teachers; this result therefore indicates that private universities attempt to cut their budgets while expecting the same educational effect by hiring younger teachers. Although the results show some differences in personnel strategies among the three types of universities, the author argues that all three may lose important human resources that would play a pivotal role at their universities in the future unless they urgently review their employment strategies.

Keywords: higher education, management, employment status, foreign language education

Procedia PDF Downloads 134
442 Delving into the Concept of Social Capital in the Smart City Research

Authors: Atefe Malekkhani, Lee Beattie, Mohsen Mohammadzadeh

Abstract:

The unprecedented growth of megacities and urban areas around the world has resulted in numerous risks, concerns, and problems across various aspects of urban life, including the environmental, social, and economic domains, such as climate change and spatial and social inequalities. In this situation, the ever-increasing progress of technology has created hope among urban authorities that the negative effects of various socio-economic and environmental crises can potentially be mitigated with the use of information and communication technologies (ICTs). The concept of the 'smart city' represents an emerging solution to the urban challenges arising from increased urbanization using ICTs. However, smart cities are often perceived primarily as technological initiatives and are implemented without considering the social and cultural contexts of cities and the needs of their residents. The implementation of smart city projects and initiatives has the potential to (un)intentionally exacerbate pre-existing social, spatial, and cultural segregation. The impact of the smart city on the social capital both of the people who use smart city systems and of those involved in governance as policymakers is therefore worth exploring. The importance of inhabitants to the existence and development of smart cities cannot be overlooked, and this concept has gained different perspectives in smart city studies. Reviewing the literature on social capital and the smart city shows that social capital plays three different roles in smart city development. Some research indicates that social capital is a component of a smart city, embedded in its dimensions, definitions, or strategies, while other work sees it as a social outcome of smart city development and points out that the move to smart cities improves social capital; in most cases, however, this remains an unproven hypothesis. 
Other studies show that social capital can enhance the functions of smart cities and that the consideration of social capital in planning smart cities should be promoted. Despite the existing theoretical and practical knowledge, there is a significant research gap in reviewing the knowledge domain of smart city studies through the lens of social capital. To shed light on this issue, this study aims to explore the domain of existing research in the field of the smart city through the lens of social capital. This research will use the 'Preferred Reporting Items for Systematic Reviews and Meta-Analyses' (PRISMA) method to review relevant literature, focusing on the key concepts of 'Smart City' and 'Social Capital'. The studies will be selected from the Web of Science Core Collection, using a selection process that involves identifying literature sources and screening and filtering studies based on titles, abstracts, and full-text reading.

Keywords: smart city, urban digitalisation, ICT, social capital

Procedia PDF Downloads 14
441 Teaching Ethnic Relations in Social Work Education: A Study of Teachers' Strategies and Experiences in Sweden

Authors: Helene Jacobson Pettersson, Linda Lill

Abstract:

Demographic changes and globalization provide new opportunities for social work and social work education in Sweden, and there has been an ambition to include these aspects in Swedish social work education. However, the Swedish welfare state standard has remained an implicit, largely invisible starting point in discussions about people’s ways of life and social problems. The aim of this study is to explore the content given to ethnic relations in social work within social work education in Sweden. Our standpoint is that the subject can be understood at both individual and structural levels, that it changes over time, and that it varies across steering documents and differs between the perspectives of teachers and students. Our question is what content is given to ethnic relations in social work by the teachers, in their strategies and teaching material. The study brings together research at the interface of education science, social work, and research on international migration and ethnic relations. The presented narratives are from in-depth interviews with a total of 17 university teachers who teach in social work programs at four different universities in Sweden. The universities have, in different ways, a curriculum that involves the theme of ethnic relations in social work, and the interviewed teachers teach and grade social workers in specific courses related to ethnic relations at undergraduate and graduate levels. Together, these 17 teachers assess a large number of students each semester. The questions concerned how the teachers handle ethnic relations in social work education, with a particular focus during the interviews on the teachers' understanding of the documented learning objectives and the content of the literature, and how these have implications for their teaching. 
What emerges are the teachers' own stories about their educational work, how they relate to the content of teaching, and the teaching strategies they use to promote the theme of ethnic relations in social work education. Our analysis of this pedagogy is that the teaching ends up at an individual level, with a particular focus on the professional encounter with individuals, and we see a shortage of critical analysis of the construction of social problems. The conclusion is that individual circumstances take precedence over theoretical perspectives on social problems related to migration, transnational relations, globalization and social. This result has problematic implications from the perspective of sustainability in terms of ethnic diversity and integration in society. These aspects are most relevant to social workers’ professional practice in social support and empowerment-related activities, and in supporting the social status, human rights and equality of immigrants.

Keywords: ethnic relations in Swedish social work education, teaching content, teaching strategies, educating for change, human rights and equality

Procedia PDF Downloads 248
440 Dogs Chest Homogeneous Phantom for Image Optimization

Authors: Maris Eugênia Dela Rosa, Ana Luiza Menegatti Pavan, Marcela De Oliveira, Diana Rodrigues De Pina, Luis Carlos Vulcano

Abstract:

In veterinary as well as in human medicine, radiological study is essential for safe diagnosis in clinical practice; thus, the quality of the radiographic image is crucial. In recent years, screen-film image acquisition systems have increasingly been replaced by computed radiography (CR) equipment without adequate adjustment of the technical charts. Furthermore, carrying out a radiographic examination on a veterinary patient requires human assistance for restraint, which can compromise image quality and generates increased dose to the animal and to occupationally exposed staff, as well as increased cost to the institution. Image optimization and the construction of radiographic techniques are performed with the use of homogeneous phantoms. In this study, we sought to develop a homogeneous phantom of the canine chest to be applied to the optimization of these images for the CR system. To construct the simulator, a database was created with retrospective chest computed tomography (CT) images from the Veterinary Hospital of the Faculty of Veterinary Medicine and Animal Science - UNESP (FMVZ / Botucatu). Images were divided into four groups according to animal weight, employing the size classification proposed by Hoskins & Goldston. The thicknesses of biological tissues were quantified in 80 animals, separated into groups of 20 animals according to their weights: (S) Small – equal to or less than 9.0 kg, (M) Medium – between 9.0 and 23.0 kg, (L) Large – between 23.1 and 40.0 kg, and (G) Giant – over 40.1 kg. The mean weight for group (S) was 6.5±2.0 kg, (M) 15.0±5.0 kg, (L) 32.0±5.5 kg and (G) 50.0±12.0 kg. An algorithm was developed in Matlab to classify and quantify the biological tissues present in the CT images and convert them into simulator materials. To classify the tissues present, membership functions were created from the retrospective CT scans according to the type of tissue (adipose, muscle, trabecular or cortical bone, and lung tissue). 
After conversion of the biological tissue thicknesses into equivalent material thicknesses (acrylic simulating soft tissues, aluminum simulating bone, and air simulating lung), four different homogeneous phantoms were obtained: (S) 5 cm of acrylic, 0.14 cm of aluminum and 1.8 cm of air; (M) 8.7 cm of acrylic, 0.2 cm of aluminum and 2.4 cm of air; (L) 10.6 cm of acrylic, 0.27 cm of aluminum and 3.1 cm of air; and (G) 14.8 cm of acrylic, 0.33 cm of aluminum and 3.8 cm of air. The developed canine homogeneous phantom is a practical tool that will be employed in future work to optimize veterinary X-ray procedures.
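The classification-and-tally step of the method above can be illustrated with a short sketch (in Python rather than the authors' Matlab). The Hounsfield-unit windows below are generic illustrative values, not the membership functions the authors fitted to their CT data, and the sketch simply sums path length per simulator material; the authors' full conversion to equivalent acrylic/aluminum/air thicknesses additionally accounts for attenuation equivalence.

```python
import numpy as np

# Illustrative HU windows for the tissue classes named in the study
# (assumed values; the authors used membership functions fitted to CT data)
TISSUE_HU = {
    "lung":    (-1000, -500),
    "adipose": (-500,  -50),
    "muscle":  (-50,   300),
    "bone":    (300,  3000),
}
# Simulator material assigned to each tissue, as in the abstract:
# acrylic for soft tissues, aluminum for bone, air for lung
MATERIAL = {"adipose": "acrylic", "muscle": "acrylic",
            "bone": "aluminum", "lung": "air"}

def material_thickness_cm(hu_profile, voxel_mm):
    """Tally the thickness (cm) of each simulator material along a 1-D
    HU profile, i.e. one ray through the chest."""
    hu_profile = np.asarray(hu_profile)
    out = {"acrylic": 0.0, "aluminum": 0.0, "air": 0.0}
    for tissue, (lo, hi) in TISSUE_HU.items():
        n = int(np.sum((hu_profile >= lo) & (hu_profile < hi)))
        out[MATERIAL[tissue]] += n * voxel_mm / 10.0   # mm -> cm
    return out

# Toy profile with 1 mm voxels: 3 cm soft tissue, 0.5 cm bone, 2 cm lung
profile = [0] * 30 + [800] * 5 + [-800] * 20
print(material_thickness_cm(profile, voxel_mm=1.0))
# {'acrylic': 3.0, 'aluminum': 0.5, 'air': 2.0}
```

Averaging such per-ray tallies over all rays and all animals in a weight group would yield group-representative thicknesses analogous to those reported for the (S) to (G) phantoms.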

Keywords: radiation protection, phantom, veterinary radiology, computed radiography

Procedia PDF Downloads 418