Search results for: applied science
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10600


1420 Barriers and Challenges to a Healthy Lifestyle for Postpartum Women and the Possibilities in an Information Technology-Based Intervention: A Qualitative Study

Authors: Pernille K. Christiansen, Mette Maria Skjøth, Line Lorenzen, Eva Draborg, Christina Anne Vinter, Trine Kjær, Mette Juel Rothmann

Abstract:

Background and aims: Overweight and obesity are an increasing challenge on a global level. In Denmark, more than one-third of all pregnant women are overweight or obese, and many women exceed the gestational weight gain recommendations from the Institute of Medicine. Being overweight or obese is associated with a higher risk of adverse maternal and fetal outcomes, including gestational diabetes and childhood obesity. Thus, it is important to focus on the women’s lifestyles between their pregnancies to lower the risk of gestational weight retention in the long run. The objective of this study was to explore what barriers and challenges postpartum women experience with respect to healthy lifestyles during the postpartum period and to assess whether an Information Technology-based intervention might be a supportive tool to assist and motivate postpartum women to a healthy lifestyle. Materials and methods: The method is inspired by participatory design. A systematic text condensation was applied to semi-structured focus groups. Five focus group interviews were carried out with a total of 17 postpartum women, and two interviews with a total of six health professionals. Participants were recruited through the municipality in Svendborg, Denmark, and at Odense University Hospital in Odense, Denmark, during a four-month period in early 2018. Results: From the women’s perspective, better assistance is needed from the health professionals to obtain or maintain a healthy lifestyle. The women need tools that inform and help them understand and prioritise their own health-related risks, and that motivate them to plan and take care of their own health. As the women use Information Technology on a daily basis, the solution could be delivered through Information Technology. Finally, there is room for engaging the partner more in the communication related to the baby and the family’s lifestyle. Conclusion: Postpartum women need tools that inform and motivate a healthy lifestyle postpartum. The tools should allow access to high-quality information from health care professionals when the information is needed, and also allow engagement from the partner. Finally, Information Technology is a potential medium for delivering such tools.

Keywords: information technology, lifestyle, overweight, postpartum

Procedia PDF Downloads 143
1419 Growth and Differentiation of Mesenchymal Stem Cells on Titanium Alloy Ti6Al4V and Novel Beta Titanium Alloy Ti36Nb6Ta

Authors: Eva Filová, Jana Daňková, Věra Sovková, Matej Daniel

Abstract:

Titanium alloys are biocompatible metals that are widely used in clinical practice as load-bearing implants. Chemical modification may influence cell adhesion, proliferation, and differentiation as well as the stiffness of the material. The aim of the study was to evaluate the adhesion, growth and differentiation of pig mesenchymal stem cells on the novel beta titanium alloy Ti36Nb6Ta compared to the standard medical titanium alloy Ti6Al4V. Discs of Ti36Nb6Ta and Ti6Al4V alloy were sterilized in ethanol, put in 48-well plates, seeded with pig mesenchymal stem cells at a density of 60×10³/cm², and cultured in Minimum Essential Medium (Sigma) supplemented with 10% fetal bovine serum and penicillin/streptomycin. Cell viability was evaluated using the MTS assay (CellTiter 96® AQueous One Solution Cell Proliferation Assay; Promega) and cell proliferation using the Quant-iT™ dsDNA Assay Kit (Life Technologies). Cells were stained immunohistochemically using a monoclonal antibody against beta-actin and a secondary antibody conjugated with AlexaFluor®488, and subsequently the spread area of the cells was measured. Cell differentiation was evaluated by an alkaline phosphatase assay using p-nitrophenyl phosphate (pNPP) as a substrate; the reaction was stopped by NaOH, and the absorbance was measured at 405 nm. Osteocalcin, a specific bone marker, was stained immunohistochemically and subsequently visualized using confocal microscopy; the fluorescence intensity was analyzed and quantified. Moreover, the gene expression of the osteogenic markers osteocalcin and type I collagen was evaluated by real-time reverse transcription PCR (qRT-PCR). For statistical evaluation, one-way ANOVA followed by the Student-Newman-Keuls method was used; for qRT-PCR, the nonparametric Kruskal-Wallis test and Dunn's multiple comparison test were used. The absorbance in the MTS assay was significantly higher on titanium alloy Ti6Al4V compared to beta titanium alloy Ti36Nb6Ta on days 7 and 14. Mesenchymal stem cells were well spread on both alloys, but no difference in spread area was found. No differences in the alkaline phosphatase assay, the fluorescence intensity of osteocalcin, or the expression of the type I collagen and osteocalcin genes were observed. Higher expression of type I collagen compared to osteocalcin was observed for cells on both alloys. Both the beta titanium alloy Ti36Nb6Ta and the titanium alloy Ti6Al4V supported mesenchymal stem cells' adhesion, proliferation and osteogenic differentiation. The novel beta titanium alloy Ti36Nb6Ta is a promising material for bone implants. The project was supported by the Czech Science Foundation: grant No. 16-14758S, the Grant Agency of the Charles University, grant No. 1246314, and by the Ministry of Education, Youth and Sports NPU I: LO1309.

Keywords: beta titanium, cell growth, mesenchymal stem cells, titanium alloy, implant

Procedia PDF Downloads 314
1418 The Contribution of Boards to Company Performance via Strategic Management

Authors: Peter Crow

Abstract:

Boards and directors have been subjects of much scholarly research and public interest over several decades, more so since the succession of high profile company failures of the early 2000s. An array of research outputs including information, correlations, descriptions, models, hypotheses and theories have been reported. While some of this research has shed light on aspects of the board–performance relationship and on board tasks and behaviours, the nature and characteristics of the supposed board–performance relationship remain undetermined. That satisfactory explanations of how boards influence company performance have yet to emerge is a significant blind spot. Yet the board is ultimately responsible for company performance, in accordance with the wishes of shareholders. The aim of this paper is to explore corporate governance and board practice through the lens of strategic management, and to take tentative steps towards a new conception of corporate governance. The findings of a recent longitudinal multiple-case study designed to explore the board’s involvement in strategic management are reported. Qualitative and quantitative data was collected from two quasi-public large companies in New Zealand including from first-hand observations of boards in session, semi-structured interviews with chief executives and chairmen and the inspection of company and board documentation. A synthetic timeline framework was used to collate the financial, board structure, board activity and decision-making data, in order to provide a holistic perspective. Decision sequences were identified, and realist techniques of abduction and retroduction were iteratively applied to analyse the multi-year data set. Using several models previously proposed in the literature as a guide, conjectures were formed, tested and refined—the culmination of which was a provisional model of how boards can influence performance via strategic management. The model builds on both existing theoretical perspectives and theoretical models proposed in the corporate governance and strategic management literature. This paper seeks to add to the understanding of how boards can make meaningful contributions to value creation via strategic management, and to comment on the qualities of directors, social interactions in boardrooms and other circumstances within which influence might be possible given the highly contingent relationship between board activity and business performance outcomes.

Keywords: board practice, case study, corporate governance, strategic management

Procedia PDF Downloads 221
1417 Modelling Fluidization by Data-Based Recurrence Computational Fluid Dynamics

Authors: Varun Dongre, Stefan Pirker, Stefan Heinrich

Abstract:

Over the last decades, the numerical modelling of fluidized bed processes has become feasible even for industrial processes. Commonly, continuous two-fluid models are applied to describe large-scale fluidization. In order to allow for coarse grids, novel two-fluid models account for unresolved sub-grid heterogeneities. However, computational efforts remain high – in the order of several hours of compute-time for a few seconds of real-time – thus preventing the representation of long-term phenomena such as heating or particle conversion processes. In order to overcome this limitation, data-based recurrence computational fluid dynamics (rCFD) has been put forward in recent years. rCFD can be regarded as a data-based method that relies on the numerical predictions of a conventional short-term simulation. This data is stored in a database and then used by rCFD to efficiently time-extrapolate the flow behavior in high spatial resolution. This study compares the numerical predictions of rCFD simulations with those of corresponding full CFD reference simulations for lab-scale and pilot-scale fluidized beds. In assessing the predictive capabilities of rCFD simulations, we focus on solid mixing and secondary gas holdup. We observed that predictions made by rCFD simulations are highly sensitive to numerical parameters such as the diffusivity associated with face swaps. We achieved a computational speed-up of four orders of magnitude (10,000 times faster than a classical TFM simulation), eventually allowing for real-time simulations of fluidized beds. In the next step, we apply the checkerboarding technique by introducing gas tracers subjected to convection and diffusion. We then analyze the concentration profiles to assess mixing and transport of the gas tracers, gain insight into their convective and diffusive patterns, and extend the approach towards heat and mass transfer. Finally, we run rCFD simulations and calibrate their numerical and physical parameters against conventional two-fluid model (full CFD) simulations. As a result, this study gives a clear indication of the applicability, predictive capabilities, and existing limitations of rCFD in the realm of fluidization modelling.
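The recurrence step that makes this time-extrapolation cheap can be sketched in a few lines. The snippet below is a hedged illustration only, not the authors' rCFD implementation: the snapshot database, field size, distance norm and jump rule are assumptions chosen to show the principle of storing a short-term sequence of flow states, building a recurrence (similarity) matrix, and replaying and recombining stored states far beyond the simulated time window.

```python
# Hedged illustration of the recurrence idea behind rCFD (not the authors' implementation).
# Snapshot database, field size, jump probability and distance norm are assumptions.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
n_snapshots, n_cells = 200, 2000                  # short-term CFD database (assumed size)
database = rng.random((n_snapshots, n_cells))     # e.g. stored solid volume fraction fields

# Recurrence matrix: pairwise distance between stored flow states
recurrence = cdist(database, database)

def next_state(current_idx, jump_prob=0.1):
    """Advance the recurrence path: usually follow the stored sequence,
    occasionally jump to the most similar non-neighbouring state."""
    if rng.random() < jump_prob or current_idx == n_snapshots - 1:
        candidates = recurrence[current_idx].copy()
        lo, hi = max(0, current_idx - 2), min(n_snapshots, current_idx + 3)
        candidates[lo:hi] = np.inf                # exclude trivial neighbours
        return int(np.argmin(candidates))
    return current_idx + 1

# Time-extrapolate a long sequence of flow states at negligible cost
path = [0]
for _ in range(2000):
    path.append(next_state(path[-1]))
print("first 15 recurrence indices:", path[:15])
```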

Keywords: multiphase flow, recurrence CFD, two-fluid model, industrial processes

Procedia PDF Downloads 68
1416 Cooperation of Unmanned Vehicles for Accomplishing Missions

Authors: Ahmet Ozcan, Onder Alparslan, Anil Sezgin, Omer Cetin

Abstract:

The use of unmanned systems for different purposes has become very popular over the past decade. Expectations from these systems have also increased considerably in parallel. However, meeting the demands of a task is often not possible with a single unmanned vehicle, so it is necessary to use multiple autonomous vehicles with different abilities together in coordination. Using vehicles of the same type together as a swarm helps, in particular, to satisfy the time constraints of missions effectively; in other words, it allows the workload to be shared by a number of homogeneous platforms. Moreover, many kinds of problems require the different capabilities of heterogeneous platforms to be used together cooperatively to achieve successful results. In this case, cooperative operation brings additional problems beyond those of homogeneous clusters. In the scenario presented as an example problem, an autonomous ground vehicle, which lacks position information, is expected to perform point-to-point navigation without losing its way in a previously unknown labyrinth. Furthermore, the ground vehicle is equipped with very limited sensors, such as ultrasonic sensors that can detect obstacles. It is very hard for the ground vehicle to plan or complete the mission by itself without losing its way in the unknown labyrinth. Thus, in order to assist the ground vehicle, an autonomous air drone is also used to solve the problem cooperatively. The autonomous drone also has limited sensors, such as a downward-looking camera and an IMU, and it likewise cannot compute its global position. In this context, the aim is to solve the problem effectively without additional support or input from the outside, relying only on the capabilities of the two autonomous vehicles. To manage the point-to-point navigation in a previously unknown labyrinth, the platforms have to work together in a coordinated way. In this paper, the cooperative work of heterogeneous unmanned systems is handled in an applied sample scenario, and it is shown how an autonomous ground vehicle and an autonomous flying platform can work together in harmony to take advantage of their platform-specific capabilities. The difficulties of using multiple heterogeneous autonomous platforms in a mission are put forward, and successful solutions are defined and implemented for problems such as spatially distributed task planning, simultaneous coordinated motion, effective communication, and sensor fusion.

Keywords: unmanned systems, heterogeneous autonomous vehicles, coordination, task planning

Procedia PDF Downloads 125
1415 Children Asthma; The Role of Molecular Pathways and Novel Saliva Biomarkers Assay

Authors: Seyedahmad Hosseini, Mohammadjavad Sotoudeheian

Abstract:

Introduction: Allergic asthma is a heterogeneous immuno-inflammatory disease based on Th2-mediated inflammation. Histopathologic abnormalities of the airways characteristic of asthma include epithelial damage and subepithelial collagen deposition. Objectives: In human bronchial epithelial cells, the expression of TNF‑α, IL‑6, ICAM‑1, VCAM‑1, and nuclear factor (NF)‑κB signaling pathway genes is up-regulated during inflammatory cascades. Moreover, immunofluorescence assays confirmed the nuclear translocation of NF‑κB p65 during inflammatory responses. Absolute LDH leakage assays suggested that LPS-induced cell injury and the associated mechanisms are co-incident events. LPS-induced phosphorylation of ERK and JNK causes inflammation in epithelial cells, which can be attenuated through inhibition of ERK and JNK activation and of the NF-κB signaling pathway. Furthermore, the inhibition of NF-κB mRNA expression and of the nuclear translocation of NF-κB leads to anti-inflammatory events. Likewise, activation of SUMF2 inhibits IL-13 and reduces Th2-cytokine, NF-κB, and IgE levels to ameliorate asthma. On the other hand, TNFα-induced mucus production is reduced by diminished NF-κB activation through inhibition of the activation status of Rac1 and of IκBα phosphorylation. In addition, the bradykinin B2 receptor (B2R), which mediates airway remodeling, is regulated through NF-κB; bronchial B2R expression is constitutively elevated in allergic asthma. In addition, certain NF-κB-dependent chemokines function to recruit eosinophils into the airway. Besides, bromodomain containing 4 (BRD4) plays a significant role in mediating the innate immune response in human small airway epithelial cells, as does transglutaminase 2 (TG2), which is detectable in saliva. The guanine nucleotide-binding regulatory protein α-subunit, Gα16, activates a κB-driven luciferase reporter; this response is accompanied by phosphorylation of IκBα. Furthermore, expression of Gα16 in saliva markedly enhanced TNF-α-induced κB reporter activity. Methods: The method applied to assess NF-κB activation is the electrophoretic mobility shift assay (EMSA). Detection of the B2R-BRD4-TG2 complex in saliva by immunoassay, together with EMSA of NF-κB activation, may provide a novel biomarker for asthma diagnosis and follow-up. Conclusion: This concept introduces the NF-κB signaling pathway as a source of potential asthma biomarkers and promising targets for the development of new therapeutic strategies against asthma.

Keywords: NF-κB, asthma, saliva, T-helper

Procedia PDF Downloads 92
1414 Sustainable Organization for Sustainable Strategy: An Empirical Evidence

Authors: Lucia Varra, Marzia Timolo

Abstract:

The interest of scholars towards corporate sustainability has strengthened in recent years, in parallel with the growing need to undertake paths of cultural and organizational change as a way to greater competitiveness and stakeholders’ satisfaction. In fact, studies on business sustainability, while on the one hand having integrated the three dimensions of sustainability that have existed for some time in economic approaches (economic, environmental and social dimensions), on the other hand have not given rise to an organic construct that puts together the aspects of strategic management with corporate social responsibility, and even less with organizational issues. Therefore, some important questions remain open: Which organizational structure and which operational mechanisms are coherent with, or conducive to, a sustainability strategy? Existing studies appear to be fragmented, although some aspects have shared importance: knowledge management, human resource management, leadership, innovation, etc. The construction of a model of sustainable organization that supports the sustainability strategy no longer seems postponable, nor does its connection with the main practices for measuring corporate social responsibility performance. The paper aims to identify the organizational characteristics of a sustainable corporation. To this end, from a theoretical point of view the work examines the main existing literary contributions and, from a practical point of view, it presents a business case referring to a service organization that for years has pursued a sustainability strategy. This paper is divided into two parts: the first part concerns a review of the main articles on the strategic management topic and the main organizational issues raised by the literature, such as knowledge management, leadership, innovation, etc.; subsequently, a modeling of the main variables examined by scholars and an integration of these with the international measurement standards of CSR is proposed. In the second part, using the case study methodology, the hypotheses and the structure of the proposed model, which aims to integrate the strategic issues with the organizational aspects and the measurement of sustainability performance, are applied to an Italian company, where organizational and human resource management interventions are in place to align strategic decisions with the structure and operating mechanisms of the organization. The case presented supports the hypotheses of the model.

Keywords: CSR, strategic management, sustainable leadership, sustainable human resource management, sustainable organization

Procedia PDF Downloads 98
1413 Young People’s Perceptions of Disability: The New Generation’s View of a Public Seen as Vulnerable and Marginalized

Authors: Ulysse Lecomte, Maryline Thenot

Abstract:

For a long time, disabled people lived in isolation within the family environment, with little interaction with the outside world and a high risk of social exclusion. However, in a number of countries, progress has been made thanks to changes in legislation on the social integration of disabled people, a significant change in attitudes, and the development of CSR. But the problem of their social, economic, and professional exclusion persists and has been further exacerbated by the COVID-19 pandemic. This societal phenomenon is sufficiently important to be the subject of management science research. We have therefore focused our work on society's current perception of people with disabilities and their possible integration. Our aim is to find out what levers could be put in place to bring about positive change in the situation. We have chosen to focus on the perception of young people in France, who are the new generation responsible for the future of our society and from whom tomorrow's decision-makers, future employers, and stakeholders who can influence the living conditions of disabled people will be drawn. Our study sample corresponds to the 18-30 age group, which is the population of young adults likely to have sufficient experience and maturity. The aim of this study is not only to find out how this population currently perceives disability but also to identify the factors influencing this perception and the most effective levers for action to act positively on this phenomenon and thus promote better social integration of people with disabilities in the future. The methodology is based on theoretical and empirical research. The literature review includes a historical and etymological approach to disability, a definition of the different concepts of disability, an approach to disability as a vector of social exclusion, and the role of perception and representations in defining the social image of disability. This literature review is followed by an empirical part carried out by means of a questionnaire administered to 110 young people aged 18 to 30. Analysis of our results suggests that, despite a recent improvement, disabled people are still perceived as vulnerable and socially marginalised. The following factors stand out as having a significant influence (positive or negative) on the perception of disability: the individual's familiarity with the 'world of disability', cultural factors, the degree of 'visibility' of the disability and the empathy level of the disabled person him/herself. Others, on the other hand, such as socio-political and economic factors, have little impact on this perception. In addition, it is possible to classify the various levers of action likely to improve the social perception of disability according to their degree of effectiveness. Our study population prioritised training initiatives for the various players and stakeholders (teachers, students, disabled people themselves, companies, sports clubs, etc.). This was followed by communication, e-communication and media campaigns in favour of disability. Lastly, the sample judged as less effective positive discrimination actions such as setting a minimum percentage for the representation of disabled people in various fields (studies, employment, politics, etc.).

Keywords: disability, perception, social image, young people, influencing factors, levers for action

Procedia PDF Downloads 30
1412 The Role of Professional Teacher Development in Introducing Trilingual Education into the Secondary School Curriculum: Lessons from Kazakhstan, Central Asia

Authors: Kairat Kurakbayev, Dina Gungor, Adil Ashirbekov, Assel Kambatyrova

Abstract:

Kazakhstan, a post-Soviet economy located in Central Asia, is making great efforts to internationalize its national system of education. The country is very ambitious in making the national economy internationally competitive, and education has become one of the main pillars of the nation’s strategic development plan for 2030. This paper discusses the role of professional teacher development in upgrading the secondary education curriculum with the introduction of English as a medium of instruction (EMI) in grades 10-11. With Kazakh as the state language and Russian as the official language, English has the status of a foreign language in the country. The development of trilingual education is very high on the agenda of the Ministry of Education and Science. It is planned that by 2019 STEM-related subjects – Biology, Chemistry, Computing and Physics – will be taught through EMI. Introducing English-medium education appears to be a very drastic reform, and the teaching cadre is the key driver here. At the same time, after the collapse of the Soviet Union, the teaching profession is still struggling to become attractive in the eyes of the local youth. Moreover, the quality of Kazakhstan’s secondary education is put in question by OECD national review reports. The paper presents a case study of the nation-wide professional development programme arranged for 5,010 school teachers so that they can teach their content subjects in English from 2019 onwards. The study is based on mixed methods research involving data derived from surveys and semi-structured interviews held with the programme participants, i.e. school teachers. The findings of the study point to the significance of the school teachers’ attitudes towards the top-down reform of trilingual education. The qualitative research data reveal the teachers’ beliefs about the advantages and disadvantages of having their content subjects (e.g. Biology or Chemistry) taught through EMI. The study highlights teachers’ concerns about their professional readiness to implement the top-down reform of English-medium education and discusses possible risks of academic underperformance on the part of students whose English language proficiency is not advanced. This paper argues that, for the effective implementation of English-medium education in secondary schools, the state should adopt a comprehensive approach to upgrading the national academic system, in which teachers’ attitudes and beliefs play the key role in making the trilingual education policy effective. The study presents lessons for other national academic systems considering transferring their secondary education to English as a medium of instruction.

Keywords: teacher education, teachers' beliefs, trilingual education, case study

Procedia PDF Downloads 173
1411 Electroactivity of Clostridium saccharoperbutylacetonicum 1-4N during Carbon Dioxide Reduction in a Bioelectrosynthesis System

Authors: Carlos A. Garcia-Mogollon, Juan C. Quintero-Diaz, Claudio Avignone-Rossa

Abstract:

Clostridium saccharoperbutylacetonicum 1-4N (Csb 1-4N) is an industrial reference strain for Acetone-Butanol-Ethanol (ABE) fermentation. Csb 1-4N is a solventogenic clostridium and H₂ producer with a metabolic profile that makes it a good candidate for a Bioelectrosynthesis System (BES). The aim of this study was to evaluate the electroactivity of Csb 1-4N by the cyclic voltammetry (CV) technique. The bioelectrosynthesis fermentation started in a Tryptone-Yeast extract (TY) medium with trace elements and vitamins, a Complex Nitrogen Source (CNS), and bicarbonate (NaHCO₃, 4 g/L) as a carbon source, run at -600 mV vs. Ag/AgCl and adding 200 µM NADH. Six BES batches were performed with different media compositions, with and without NADH, CNS, HCO₃⁻, and applied potential. The CV was performed in a three-electrode system: a platinum slice working electrode (WE), a nickel counter electrode (CE) and an Ag/AgCl reference electrode (RE). CVs were run in a potential range of -0.7 V to 0.7 V vs. Ag/AgCl at a scan rate of 10 mV/s. CVs recorded using different NaHCO₃ concentrations (0.25, 0.5, 1.0, and 4 g/L) were obtained. BES fermentation samples were centrifuged (3000 rpm, 5 min, 4 °C), and the supernatant (7 mL) was used. CVs were obtained for Csb 1-4N BES culture cell-free supernatant at 0 h, 24 h, and 48 h. The electrochemical analysis was carried out with a PalmSens 4.0 potentiostat/galvanostat controlled with the PStrace 5.7 software, and the CV curves were characterized by reduction and oxidation currents and reduction and oxidation peaks. The CVs obtained for the NaHCO₃ solutions showed that the reduction and oxidation currents decreased as the NaHCO₃ concentration was decreased. All reduction and oxidation currents decreased until exponential growth stopped (24 h), independently of the initial cathodic current, except in the medium with trace elements, vitamins, and NaHCO₃, in which the reduction current was reduced to around half at 24 h and continued decreasing at 48 h. In this medium, Csb 1-4N did not grow, but the pH increased, indicating that NaHCO₃ was reduced as the reduction current decreased. In general, at 48 h the reduction currents did not present important changes between the different media in the BES cultures. Peak intensities (Ip) did not present important variations, except for Ipa and Ipc in the BES culture with NaHCO₃ and NADH added, which were higher than the peaks in the other cultures. Based on these results, changes in cathodic and anodic currents were induced by NaHCO₃ reduction reactions during Csb 1-4N metabolic activity in the different BES experiments.
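As an aside on the data handling, the peak characterization described above (Ipa, Ipc) can be automated. The sketch below is a hedged illustration using a synthetic voltammogram; the real PStrace export format, the toy current shape and the peak-prominence threshold are assumptions.

```python
# Hedged sketch: extracting anodic (Ipa) and cathodic (Ipc) peak currents from a cyclic
# voltammogram. The potential/current trace is a synthetic stand-in for the PStrace export.
import numpy as np
from scipy.signal import find_peaks

# Synthetic CV sweep: -0.7 V -> 0.7 V -> -0.7 V, as in the reported potential window
E_fwd = np.linspace(-0.7, 0.7, 700)
E = np.concatenate([E_fwd, E_fwd[::-1]])
direction = np.concatenate([np.ones(700), -np.ones(700)])    # forward / reverse scan
rng = np.random.default_rng(1)
i = np.where(direction > 0,
             np.exp(-((E - 0.25) / 0.08) ** 2),              # anodic feature (forward scan)
             -np.exp(-((E + 0.15) / 0.08) ** 2))             # cathodic feature (reverse scan)
i = i + 0.02 * rng.normal(size=E.size)

ipa_idx, _ = find_peaks(i, prominence=0.3)    # oxidation peaks
ipc_idx, _ = find_peaks(-i, prominence=0.3)   # reduction peaks

print("Ipa:", i[ipa_idx], "at E =", E[ipa_idx], "V")
print("Ipc:", i[ipc_idx], "at E =", E[ipc_idx], "V")
```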

Keywords: clostridium saccharoperbutylacetonicum 1-4N, bioelectrosynthesis, carbon dioxide fixation, cyclic voltammetry

Procedia PDF Downloads 130
1410 Calcein Release from Liposomes Mediated by Phospholipase A₂ Activity: Effect of Cholesterol and Amphipathic Di and Tri Blocks Copolymers

Authors: Marco Soto-Arriaza, Eduardo Cena-Ahumada, Jaime Melendez-Rojel

Abstract:

Background: Liposomes have been widely used as a lipid bilayer model to study the physicochemical properties of biological membranes and the encapsulation, transport and release of different molecules. Furthermore, extensive research has focused on improving the efficiency of drug transport, developing tools that improve the release of the encapsulated drug from liposomes. In this context, the enzymatic activity of PLA₂, despite having been shown to be an effective tool to promote the release of drugs from liposomes, is still an open field of research. Aim: The aim of the present study is to explore the effect of cholesterol (Cho) and of amphipathic di- and tri-block copolymers on calcein release mediated by the enzymatic activity of PLA₂ in dipalmitoylphosphatidylcholine (DPPC) liposomes under physiological conditions. Methods: Different dispersions of DPPC, cholesterol, di-block POE₄₅-PCL₅₂ or tri-block PCL₁₂-POE₄₅-PCL₁₂ were prepared by the extrusion method after five freezing/thawing cycles, in 10 mM phosphate buffer, pH 7.4, in the presence of calcein. DPPC liposomes/calcein were centrifuged at 15,000 rpm for 10 min to separate free calcein. Enzymatic activity assays of PLA₂ were performed at 37 °C using TBS buffer, pH 7.4. The size distribution, polydispersity, Z-potential and calcein encapsulation of the DPPC liposomes were monitored. Results: PLA₂ activity showed slower calcein release kinetics up to 20 mol% cholesterol, evidencing a minimum at 10 mol% and then a maximum at 18 mol%. Regardless of the percentage of cholesterol, up to 18 mol% a one hundred percent release of calcein was observed. At higher cholesterol concentrations, PLA₂ was shown to be inefficient or not to be involved in calcein release. In assays where copolymers were added at a concentration lower than their cmc, a behavior similar to that observed in the presence of Cho was seen, that is, slower calcein release kinetics. In both experimental approaches, one hundred percent calcein release was observed. PLA₂ was shown to be sensitive to the inhibitor 4-(4-octadecylphenyl)-4-oxobutenoic acid and to calcium, which reduced the release of calcein to 0%. Cell viability of HeLa cells decreased by 7% in the presence of DPPC liposomes after 3 hours of incubation and by 17% and 23% at 5 and 15 hours, respectively. Conclusion: Calcein release from DPPC liposomes mediated by PLA₂ activity depends on the percentage of cholesterol and on the presence of copolymers. Both cholesterol up to 20 mol% and copolymers below their cmc could be applied to regulate the release kinetics of antitumoral drugs without inducing cell toxicity per se.
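Where release kinetics are compared between formulations, a first-order release model is a common way to condense a fluorescence-versus-time curve into a single rate constant. The sketch below is purely illustrative: the time points and percentage-release values are placeholders, not the data of this study.

```python
# Hedged sketch: fitting a first-order kinetic model to calcein-release data to compare
# release rates (k) between liposome formulations. Values below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def first_order_release(t, f_max, k):
    """Fractional calcein release F(t) = F_max * (1 - exp(-k t))."""
    return f_max * (1.0 - np.exp(-k * t))

t_min = np.array([0, 5, 10, 20, 30, 45, 60, 90])          # time (min), assumed sampling
release_pct = np.array([0, 18, 35, 58, 72, 85, 93, 99])    # % release, placeholder data

popt, pcov = curve_fit(first_order_release, t_min, release_pct, p0=(100.0, 0.05))
f_max, k = popt
print(f"F_max = {f_max:.1f} %, k = {k:.3f} 1/min, t_1/2 = {np.log(2)/k:.1f} min")
```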

Keywords: amphipathic copolymers, calcein release, cholesterol, DPPC liposome, phospholipase A₂

Procedia PDF Downloads 157
1409 Modelling Distress Sale in Agriculture: Evidence from Maharashtra, India

Authors: Disha Bhanot, Vinish Kathuria

Abstract:

This study focuses on the issue of distress sale in the horticulture sector in India, which faces unique challenges given the perishable nature of horticulture crops, seasonal production and the paucity of post-harvest produce management links. Distress sale, from a farmer’s perspective, may be defined as the urgent sale of normal or distressed goods at deeply discounted prices (way below the cost of production), and it is usually characterized by unfavorable conditions for the seller (farmer). Small and marginal farmers, often involved in subsistence farming, stand to lose substantially if they receive lower prices than expected (typically framed in relation to the cost of production). Distress sale maximizes the price uncertainty of produce, leading to substantial income loss; and with the increase in input costs of farming, the high variability in harvest price severely affects farmers’ profit margins, thereby affecting their survival. The objective of this study is to model the occurrence of distress sale by tomato cultivators in the Indian state of Maharashtra, against the background of differential access to a set of factors such as capital, irrigation facilities, warehousing, storage and processing facilities, and institutional arrangements for procurement. Data are being collected using a primary survey of over 200 farmers in key tomato-growing areas of Maharashtra, asking for information on the above factors in addition to information on the cost of cultivation, selling price, time gap between harvesting and selling, and the role of middlemen in selling, besides other socio-economic variables. Farmers selling their produce far below the cost of production would indicate an occurrence of distress sale. The occurrence of distress sale would then be modelled as a function of farm, household and institutional characteristics. A Heckman two-stage model would be applied to find the probability of a farmer falling into distress sale as well as to ascertain how the extent of distress sale varies in the presence or absence of various factors. Findings of the study would recommend suitable interventions and promote strategies that would help farmers better manage price uncertainties, avoid distress sale and increase profit margins, with direct implications for poverty.
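A minimal sketch of the Heckman two-stage estimation mentioned above is given below, assuming a pandas DataFrame loaded from a hypothetical survey file; every column name (distress_sale, price_gap, landholding, irrigation, storage_access, middleman, distance_market) is a placeholder for the actual survey variables, not part of the study's dataset.

```python
# Hedged sketch of a Heckman two-stage estimation for distress sale.
# All column names and the CSV file are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

df = pd.read_csv("tomato_survey.csv")   # hypothetical survey file

# Stage 1: probit for the probability of a farmer falling into distress sale
z_cols = ["landholding", "irrigation", "storage_access", "middleman", "distance_market"]
Z = sm.add_constant(df[z_cols])
probit = sm.Probit(df["distress_sale"], Z).fit(disp=False)

# Inverse Mills ratio from the probit linear index (selection correction term)
xb = np.asarray(Z) @ np.asarray(probit.params)
inv_mills = norm.pdf(xb) / norm.cdf(xb)

# Stage 2: OLS on the distress-sale sub-sample, including the correction term
sel = df["distress_sale"].to_numpy() == 1
X = sm.add_constant(
    df.loc[sel, ["landholding", "irrigation", "storage_access"]].assign(inv_mills=inv_mills[sel]))
ols = sm.OLS(df.loc[sel, "price_gap"], X).fit()
print(probit.summary())
print(ols.summary())
```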

Keywords: distress sale, horticulture, income loss, India, price uncertainty

Procedia PDF Downloads 234
1408 Use of Pig as an Animal Model for Assessing the Differential MicroRNA Profiling in Kidney after Aristolochic Acid Intoxication

Authors: Daniela E. Marin, Cornelia Braicu, Gina C. Pistol, Roxana Cojocneanu-Petric, Ioana Berindan Neagoe, Mihail A. Gras, Ionelia Taranu

Abstract:

Aristolochic acid (AA) is a carcinogenic, mutagenic, and nephrotoxic compound commonly found in the Aristolochiaceae family of plants. AA is frequently associated with urothelial carcinoma of the upper urinary tract in humans and animals and is considered to be responsible for Balkan Endemic Nephropathy. The pig provides a good animal model because the porcine urological system is very similar to that of humans, both in physiology and anatomy. MicroRNAs (miRNAs) are small non-coding RNAs that have an impact on a wide range of biological processes by regulating gene expression at the post-transcriptional level. The objective of this study was to analyze the miRNA profile in the kidneys of AA-intoxicated swine. For this purpose, ten 4-week-old TOPIGS-40 crossbred weaned piglets, males and females, with an initial average body weight of 9.83 ± 0.5 kg, were studied for 28 days. They were given ad libitum access to water and feed and randomly allotted to one of the following groups: control group (C) or aristolochic acid group (AA). They were fed a maize-soybean-meal-based diet contaminated or not with 0.25 mg AA/kg. To profile miRNAs in the kidney, microarray and bioinformatics approaches were applied to the kidneys of control and AA-intoxicated pigs. After normalization, our results showed that a total of 5 known miRNAs and 4 novel miRNAs had a different profile in the kidney of intoxicated animals versus control ones. The expression of miR-32-5p, miR-497-5p, miR-423-3p, miR-218-5p and miR-128-3p was up-regulated by 0.25 mg AA/kg feed, while the expression of miR-9793-5p, miR-9835-3p, miR-9840-3p and miR-4334-5p was down-regulated. The microRNA profile in the kidney of intoxicated animals was associated with modified expression of target genes such as RICTOR, LASP1, SFRP2, DKK2, BMI1, RAF1, IGF1R, MAP2K1, WEE1, HDGF, BCL2, EIF4E, etc., involved in the cell division cycle, apoptosis, cell differentiation and cell migration, cell signaling, cancer, etc. In conclusion, this study provides new data concerning the microRNA profile in the kidney after aristolochic acid intoxication, with important implications for human and animal health.

Keywords: aristolochic acid, kidney, microRNA, swine

Procedia PDF Downloads 278
1407 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos

Authors: Nassima Noufail, Sara Bouhali

Abstract:

In this work, we develop a semi-supervised solution for the purpose of action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips, and we used the K-means algorithm for this segmentation; our goal is to find groups based on similarity in the video. The application of K-means clustering to all the frames is time-consuming; therefore, we started with the identification of transition frames, where the scene in the video changes significantly, and then we applied K-means clustering to these transition frames. We used two image filters, the Gaussian filter and the Laplacian of Gaussian. Each filter extracts a set of features from the frames. The Gaussian filter blurs the image and omits the higher frequencies, and the Laplacian of Gaussian detects regions of rapid intensity changes; we then used this vector of filter responses as an input to our K-means algorithm. The output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with the corresponding color to form a visual map. The resulting visual map has similar pixels grouped together. We then computed a cluster score indicating how near the clusters are to each other and plotted a signal representing frame number vs. clustering score. Our hypothesis was that the evolution of the signal would not change if semantically related events were happening in the scene. We marked the breakpoints at which the root mean square level of the signal changes significantly, and each breakpoint is an indication of the beginning of a new video segment. In the second part, for each segment from part one, we randomly selected a 16-frame clip, and then we extracted spatiotemporal features for every 16 frames using the convolutional 3D network C3D with a pre-trained model. The final C3D output is a 512-dimensional feature vector; hence we used principal component analysis (PCA) for dimensionality reduction. The final part is the classification. The C3D feature vectors are used as input to train a multi-class linear support vector machine (SVM), and we used the multi-class classifier to detect the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and we achieved an accuracy that outperforms the state of the art by 1.2%.
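The first stage of the pipeline (filter responses, K-means, cluster-score signal and RMS breakpoints) can be sketched as follows. This is a hedged reconstruction: frame loading, the number of clusters, the window length and the jump threshold are assumptions rather than the authors' exact settings, and synthetic arrays stand in for decoded video frames.

```python
# Hedged sketch of the segmentation stage: filter-response features, K-means, and a
# per-frame clustering score whose breakpoints mark candidate segment boundaries.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.cluster import KMeans

def frame_features(gray):
    """Stack Gaussian (low-pass) and Laplacian-of-Gaussian responses per pixel."""
    g = gaussian_filter(gray, sigma=2.0)
    log = gaussian_laplace(gray, sigma=2.0)
    return np.stack([g.ravel(), log.ravel()], axis=1)

def cluster_score(frames, k=5):
    """Return one score per frame: mean distance between K-means cluster centers."""
    scores = []
    for gray in frames:                        # frames: list of 2-D grayscale arrays
        km = KMeans(n_clusters=k, n_init=5, random_state=0).fit(frame_features(gray))
        c = km.cluster_centers_
        d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)
        scores.append(d[np.triu_indices(k, 1)].mean())
    return np.asarray(scores)

def breakpoints(signal, win=10, factor=1.5):
    """Mark frames where the windowed RMS level of the score changes significantly."""
    rms = np.sqrt(np.convolve(signal ** 2, np.ones(win) / win, mode="same"))
    jumps = np.abs(np.diff(rms)) > factor * np.std(np.diff(rms))
    return np.flatnonzero(jumps) + 1

# Example with synthetic frames (stand-ins for decoded video frames)
rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) for _ in range(40)]
segment_starts = breakpoints(cluster_score(frames))
print("candidate segment boundaries at frames:", segment_starts)
```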

Keywords: video segmentation, action detection, classification, K-means, C3D

Procedia PDF Downloads 71
1406 Cross-Cultural Collaboration Shaping Co-Creation Methodology to Enhance Disaster Risk Management Approaches

Authors: Jeannette Anniés, Panagiotis Michalis, Chrysoula Papathanasiou, Selby Knudsen

Abstract:

The RiskPACC project aims to bring together researchers, practitioners, and first responders from nine European countries following a co-creation approach, aiming to develop customised solutions to meet the needs of end-users. The co-creation workshops aim to enhance the communication pathways between local civil protection authorities (CPAs) and citizens, in an effort to close the risk perception-action gap (RPAG). The participants in the workshops include a variety of stakeholders as well as citizens, fostering the dialogue between the groups and supporting citizen participation in disaster risk management (DRM). The co-creation methodology in place implements co-design elements through the integration of four ICT tools. Such ICT tools include web-based and mobile application technical solutions in different development stages, ranging from formulation and validation of concepts to pilot demonstrations. In total, seven different case studies are foreseen in RiskPACC. The workflow of the workshops is designed to be adaptive to the particular needs of each of the seven case study countries and their cultures. This work aims to provide an overview of the preparation and conduct of the workshops, in which researchers and practitioners focused on mapping the different needs of the end users. The latter included first responders but also volunteers and citizens who actively participated in the co-creation workshops. The strategies to improve communication between CPAs and the citizens themselves differ between the countries, and the modules of the co-creation methodology are adapted in response to such differences. Moreover, the project partners experienced how the structure of such workshops is perceived differently in the seven case studies. Therefore, the co-creation methodology itself is a design method undergoing several iterations, which are eventually shaped by cross-cultural collaboration. For example, some case studies applied other modules according to the participant group recruited. The participants were technical experts, teachers, citizens, first responders, or volunteers, among others. This work aspires to present the divergent approaches of the seven case studies implementing the proposed co-creation methodology, in response to different perceptions of the modules. An analysis of the adaptations and implications will also be provided to assess where the case studies’ objective of improving disaster resilience has been achieved.

Keywords: citizen participation, co-creation, disaster resilience, risk perception, ICT tools

Procedia PDF Downloads 77
1405 Evaluating the Potential of a Fast Growing Indian Marine Cyanobacterium by Reconstructing and Analysis of a Genome Scale Metabolic Model

Authors: Ruchi Pathania, Ahmad Ahmad, Shireesh Srivastava

Abstract:

Cyanobacteria are promising microbes that can capture and convert atmospheric CO₂ and light into valuable industrial bio-products like biofuels, biodegradable plastics, etc. Among their most attractive traits are fast autotrophic growth, whole-year cultivation using non-arable land, high photosynthetic activity, much greater biomass and productivity, and ease of genetic manipulation. Cyanobacteria store carbon in the form of glycogen, which can be hydrolyzed to release glucose and fermented to form bioethanol or other valuable products. Marine cyanobacterial species are especially attractive for countries with scarcity of freshwater. We recently identified a marine native cyanobacterium, Synechococcus sp. BDU 130192, which has a good growth rate and a high level of polyglucan accumulation compared to Synechococcus PCC 7002. In this study, we first sequenced the whole genome, and the sequences were annotated using the RAST server. A genome scale metabolic model (GSMM) was reconstructed through the COBRA toolbox. A GSMM is a computational representation of the metabolic reactions and metabolites of the target strain. GSMMs are analyzed through the application of Flux Balance Analysis (FBA), which uses external nutrient uptake rates to estimate steady-state intracellular and extracellular reaction fluxes, typically under maximization of cell growth. The model, which we have named iSyn942, includes 942 reactions and 913 metabolites, comprising 831 metabolic, 78 transport and 33 exchange reactions. The phylogenetic tree obtained by BLAST search revealed that the strain is a close relative of Synechococcus PCC 7002. FBA was applied to the model iSyn942 to predict the theoretical yields (mol product produced/mol CO₂ consumed) for native and non-native products like acetone, butanol, etc. under phototrophic conditions by applying metabolic engineering strategies. The reported strain can be a viable strain for biotechnological applications, and the model will be helpful to researchers interested in understanding its metabolism as well as in designing metabolic engineering strategies for the enhanced production of various bioproducts.
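A hedged sketch of the FBA yield calculation described above, using COBRApy, is shown below. The SBML file name and the reaction identifiers (biomass, CO₂, photon and product exchanges) and the uptake bounds are assumptions and may differ from those actually used in iSyn942.

```python
# Hedged sketch of a flux balance analysis workflow with COBRApy.
# File name, reaction IDs and uptake bounds are assumptions, not the iSyn942 identifiers.
import cobra

model = cobra.io.read_sbml_model("iSyn942.xml")               # reconstructed GSMM (assumed file)

# Phototrophic condition: allow CO2 and photon uptake, maximize biomass
model.reactions.get_by_id("EX_co2_e").lower_bound = -3.7       # mmol/gDW/h, assumed
model.reactions.get_by_id("EX_photon_e").lower_bound = -100.0  # light uptake, assumed
model.objective = "BIOMASS_reaction"                           # assumed biomass reaction ID

growth = model.optimize()
print("Predicted growth rate:", growth.objective_value)

# Theoretical product yield: maximize a (possibly non-native) product exchange flux
with model:
    model.objective = "EX_butanol_e"                           # assumed product exchange ID
    sol = model.optimize()
    co2_uptake = abs(sol.fluxes["EX_co2_e"])
    yield_mol = sol.objective_value / co2_uptake if co2_uptake else float("nan")
    print("mol butanol per mol CO2 consumed:", yield_mol)
```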

Keywords: cyanobacteria, flux balance analysis, genome scale metabolic model, metabolic engineering

Procedia PDF Downloads 156
1404 Exploring the Role of Hydrogen to Achieve the Italian Decarbonization Targets using an OpenScience Energy System Optimization Model

Authors: Alessandro Balbo, Gianvito Colucci, Matteo Nicoli, Laura Savoldi

Abstract:

Hydrogen is expected to become an undisputed player in the ecological transition throughout the next decades. The decarbonization potential offered by this energy vector provides various opportunities for the so-called “hard-to-abate” sectors, including the industrial production of iron and steel, glass, refineries and heavy-duty transport. In this regard, Italy, in the framework of the decarbonization plans for the whole European Union, has been considering a wider use of hydrogen to provide an alternative to fossil fuels in hard-to-abate sectors. This work aims to assess and compare different options concerning the pathway to be followed in the development of the future Italian energy system in order to meet the decarbonization targets established by the Paris Agreement and by the European Green Deal, and to carry out a techno-economic analysis of the asset alternatives required in that perspective. To accomplish this objective, the Energy System Optimization Model TEMOA-Italy is used, based on the open-source platform TEMOA and developed at PoliTo as a tool for technology assessment and energy scenario analysis. The adopted assessment strategy includes two different scenarios to be compared with a business-as-usual one, which considers the application of current policies in a time horizon up to 2050. The studied scenarios are based on the up-to-date hydrogen-related targets and planned investments included in the National Hydrogen Strategy and in the Italian National Recovery and Resilience Plan, with the purpose of providing a critical assessment of what they propose. One scenario imposes decarbonization objectives for the years 2030, 2040 and 2050, without any other specific target. The second one (inspired by the national objectives on the development of the sector) promotes the deployment of the hydrogen value chain. These scenarios provide feedback about the applications hydrogen could have in the Italian energy system, including transport, industry and synfuel production. Furthermore, the decarbonization scenario in which hydrogen production is not imposed will make use of this energy vector as well, showing the necessity of its exploitation in order to meet the pledged targets by 2050. The distance of the planned policies from the optimal conditions for the achievement of the Italian objectives is clarified, revealing possible improvements in various steps of the decarbonization pathway, which appears to rely on Carbon Capture and Utilization technologies as a fundamental element for its accomplishment. In line with the European Commission open science guidelines, the transparency and robustness of the presented results are ensured by the adoption of an open-source, open-data model such as TEMOA-Italy.
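To make the scenario logic concrete, the toy linear program below (emphatically not TEMOA-Italy, whose technology database and constraints are far richer) shows how an energy system optimization model trades off a fossil route against a hydrogen route under an emissions cap; all costs, emission factors and the demand value are invented for illustration only.

```python
# Toy illustration of cost-minimal supply under a decarbonization cap (not TEMOA-Italy).
# Costs, emission factors, cap and demand are invented for the sketch.
from scipy.optimize import linprog

demand = 100.0          # energy service demand, arbitrary units
cost = [1.0, 2.5]       # unit cost: [fossil route, hydrogen route], assumed
emis = [0.8, 0.05]      # unit CO2 emissions per route, assumed
cap = 20.0              # emission cap representing a decarbonization target

# Decision variables x = [fossil supply, hydrogen supply]
res = linprog(
    c=cost,
    A_ub=[emis], b_ub=[cap],                  # emission cap
    A_eq=[[1.0, 1.0]], b_eq=[demand],         # demand balance
    bounds=[(0, None), (0, None)],
    method="highs",
)
fossil, hydrogen = res.x
print(f"fossil: {fossil:.1f}, hydrogen: {hydrogen:.1f}, total cost: {res.fun:.1f}")
```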

Keywords: decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA

Procedia PDF Downloads 68
1403 Developing Commitment to Change in Egyptian Modern Bureaucracies

Authors: Nada Basset

Abstract:

Purpose: To examine the nature of the civil service sector as an employer by identifying likely ways to develop employees’ commitment towards change in the civil service sector. Design/Methodology/Approach: A qualitative research approach was followed. Data were collected via a triangulation of interviews, non-participant observation and archival document analysis. Non-probability sampling took place, with a case-study method applied to a sample of 33 civil servants working in the Egyptian Ministry of State for Administrative Development (MSAD), which is the civil service entity acting as the change agent responsible for managing the government administrative reform plan in the civil service sector. All study participants were actually working in one of the change projects/programmes and had a minimum of 12 months of service in the civil service. Interviews were digitally recorded and transcribed in the form of MS-Word documents, and data transcripts were analyzed manually using MS-Excel worksheets, from which the main research themes were developed and statistics drawn. Findings: The results demonstrate that developing civil servants’ commitment towards change may require a number of suggested solutions, such as (1) employee involvement and participation in the planning and implementation processes, (2) linking employee support for change to some tangible rewards and incentives, (3) appointing inspirational change leaders who should act as role models, and (4) as a last resort, enforcing employees’ commitment towards change by coercion and authoritarianism. Practical Implications: It is clear that civil servants’ lack of organizational commitment is not directly related to their level of commitment towards change. The research findings showed that civil servants’ commitment towards change can be raised and promoted by getting them involved in the planning and implementation processes, as this develops a sense of belongingness and ownership; thus there is a fair chance that civil servants with low organizational commitment can develop high commitment towards change, provided they are given a favorable environment where they are invited to participate and get involved in the change effort. Originality/Value: The research addresses a relatively new area of ‘developing organizational commitment in modern bureaucracies’ by investigating the levels of civil servants’ commitment towards their jobs and/or organizations on one hand, and suggesting different ways of developing their commitment towards administrative reform and change initiatives in the Egyptian civil service sector on the other.

Keywords: change, commitment, Egypt, bureaucracy

Procedia PDF Downloads 478
1402 Maneuvering Modelling of a One-Degree-of-Freedom Articulated Vehicle: Modeling and Experimental Verification

Authors: Mauricio E. Cruz, Ilse Cervantes, Manuel J. Fabela

Abstract:

The evaluation of the maneuverability of road vehicles is generally carried out through the use of specialized computer programs due to the advantages they offer compared to the experimental method. These programs are based on purely geometric considerations of the characteristics of the vehicles, such as the main dimensions, the location of the axles, and the points of articulation, without considering parameters such as weight distribution and magnitude, tire properties, etc. In this paper, we address the problem of maneuverability of a semi-trailer truck navigating urban streets, maneuvering yards, and parking lots, using the Ackermann principle to propose a kinematic model with which, through geometric considerations, it is possible to determine the space necessary to maneuver safely. The model was experimentally validated by conducting maneuverability tests with an articulated vehicle. The measurements were made through a GPS that allows us to know the position, trajectory, and speed of the vehicle, an inertial measurement unit (IMU) that allows measuring the accelerations and angular speeds in the semi-trailer, and an instrumented steering wheel that allows measuring the steering wheel angle, its angular velocity and the torque applied to it. To obtain the steering angle of the tires, a parameterization of the complete travel of the steering wheel and its equivalent at the tires was carried out. For the tests, 3 different angles were selected, and 3 turns were made for each angle in both directions of rotation (left and right turn). The results showed that the proposed kinematic model achieved 95% accuracy for speeds below 5 km/h. The experiments revealed that tighter maneuvers significantly increased the space required and that the vehicle maneuverability was limited by the size of the semi-trailer. The maneuverability was also tested as a function of the vehicle load, and 3 different load levels were used: light, medium, and heavy. It was found that the internal turning radii also increased with the load, probably due to changes in the tires' adhesion to the pavement, since heavier loads had larger wheel-road contact surfaces. The load was found to be an important factor affecting the precision of the model (up to 30%), and therefore it should be considered. The model obtained is expected to be used to improve maneuverability through a robust control system.
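The geometric core of such a kinematic model can be sketched as follows. The formulae are the standard Ackermann turning radius and steady-state off-tracking relations, and the vehicle dimensions in the example are assumptions, not those of the instrumented semi-trailer truck used in the tests.

```python
# Hedged sketch of the Ackermann-based maneuvering geometry: tractor turning radius from
# the tire steering angle, trailer axle radius, and the swept width of the maneuver.
# All dimensions below are assumed example values.
import math

def ackermann_radii(steer_deg, wheelbase, hitch_to_axle, track_width):
    """Return (tractor rear-axle radius, trailer axle radius, swept width)."""
    delta = math.radians(steer_deg)
    r_tractor = wheelbase / math.tan(delta)               # Ackermann turning radius
    # In steady turning, the trailer axle tracks inside the hitch path (off-tracking)
    r_trailer = math.sqrt(max(r_tractor**2 - hitch_to_axle**2, 0.0))
    r_outer = r_tractor + track_width / 2.0
    r_inner = r_trailer - track_width / 2.0
    return r_tractor, r_trailer, r_outer - r_inner         # swept width of the maneuver

# Example: 20 degree tire angle, 3.8 m tractor wheelbase, 10 m hitch-to-trailer-axle distance
r_t, r_s, swept = ackermann_radii(20.0, 3.8, 10.0, 2.5)
print(f"tractor radius {r_t:.1f} m, trailer radius {r_s:.1f} m, swept width {swept:.1f} m")
```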

Keywords: articulated vehicle, experimental validation, kinematic model, maneuverability, semi-trailer truck

Procedia PDF Downloads 114
1401 Air Handling Units Power Consumption Using Generalized Additive Model for Anomaly Detection: A Case Study in a Singapore Campus

Authors: Ju Peng Poh, Jun Yu Charles Lee, Jonathan Chew Hoe Khoo

Abstract:

The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to data from sensors about the performance of buildings. This digital transformation has opened up many opportunities to improve the management of buildings by using the data collected to help monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use the GAM (Generalised Additive Model) for the anomaly detection of Air Handling Unit (AHU) power consumption patterns. There is ample research work on the use of GAM for the prediction of power consumption at the office building and nation-wide level. However, there is limited illustration of its anomaly detection capabilities, of prescriptive analytics case studies, and of its integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to the historical data on AHU power consumption and cooling load of the building between Jan 2018 and Aug 2019 from an education campus in Singapore to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to inform and identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager. The performance of GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns, illustrated with real-world use cases.
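A hedged sketch of the GAM-based interval check is shown below using the pygam package; the feature set (cooling load and hour of day), the CSV stand-in for the digital-twin feed, the column names and the 95% interval width are assumptions rather than the deployed configuration.

```python
# Hedged sketch of GAM-based anomaly flagging for AHU power consumption (pygam).
# File name, column names, features and interval width are assumptions.
import numpy as np
import pandas as pd
from pygam import LinearGAM, s

df = pd.read_csv("ahu_history.csv", parse_dates=["timestamp"])   # assumed export
df["hour"] = df["timestamp"].dt.hour
X = df[["cooling_load", "hour"]].to_numpy()
y = df["ahu_power_kw"].to_numpy()

# Smooth terms for cooling load and hour of day
gam = LinearGAM(s(0) + s(1)).fit(X, y)

# Prediction intervals; points outside the band are flagged as anomalies
bounds = gam.prediction_intervals(X, width=0.95)
lower, upper = bounds[:, 0], bounds[:, 1]
deviation = np.where(y > upper, y - upper, np.where(y < lower, lower - y, 0.0))
df["anomaly"] = deviation > 0
print(df.loc[df["anomaly"], ["timestamp", "ahu_power_kw"]].head())
```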

Keywords: anomaly detection, digital twin, generalised additive model, GAM, power consumption, supervised learning

Procedia PDF Downloads 149
1400 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Partitioned Solution Approach and an Exponential Model

Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino

Abstract:

The solution of the nonlinear dynamic equilibrium equations of base-isolated structures adopting a conventional monolithic solution approach, i.e. an implicit single-step time integration method employed with an iteration procedure, and the use of existing nonlinear analytical models, such as differential equation models, to simulate the dynamic behavior of seismic isolators can require a significant computational effort. In order to reduce numerical computations, a partitioned solution method and a one-dimensional nonlinear analytical model are presented in this paper. A partitioned solution approach can be easily applied to base-isolated structures in which the base isolation system is much more flexible than the superstructure. Thus, in this work, the explicit conditionally stable central difference method is used to evaluate the nonlinear response of the base isolation system, and the implicit unconditionally stable Newmark constant average acceleration method is adopted to predict the linear response of the superstructure, with the benefit of avoiding iterations within each time step of a nonlinear dynamic analysis. The proposed mathematical model is able to simulate the dynamic behavior of seismic isolators without requiring the solution of a nonlinear differential equation, as is the case for the widely used differential equation models. The proposed mixed explicit-implicit time integration method and nonlinear exponential model are adopted to analyze a three-dimensional seismically isolated structure with a lead rubber bearing system subjected to earthquake excitation. The numerical results show the good accuracy and the significant computational efficiency of the proposed solution approach and analytical model compared to the conventional solution method and mathematical model adopted in this work. Furthermore, the low stiffness of the base isolation system with lead rubber bearings allows for a critical time step considerably larger than the imposed ground acceleration time step, thus avoiding stability problems in the proposed mixed method.
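The explicit half of the partitioned scheme can be sketched for a single base-isolation degree of freedom as follows. The restoring-force law used here is a simplified exponential-softening spring chosen for illustration, not the authors' exponential model, and the superstructure (Newmark) partition, the coupling forces and the property values are all assumptions.

```python
# Hedged sketch of the explicit (central difference) partition applied to one isolation DOF:
# m*u'' + c*u' + f(u) = -m*ag(t). Properties and the force law are illustrative only.
import numpy as np

def isolator_force(u, k0, k1, a):
    """Nonlinear restoring force: stiffness decaying exponentially from k0 to k1 with |u|."""
    return k1 * u + (k0 - k1) * (1.0 - np.exp(-a * abs(u))) * np.sign(u) / a

def central_difference(m, c, k0, k1, a, ag, dt):
    """Explicit central-difference integration, starting from rest."""
    n = len(ag)
    u = np.zeros(n)
    u_prev = 0.0
    for i in range(1, n - 1):
        f_int = isolator_force(u[i], k0, k1, a)
        v = (u[i] - u_prev) / dt
        u_next = 2 * u[i] - u_prev + dt**2 / m * (-m * ag[i] - c * v - f_int)
        u_prev, u[i + 1] = u[i], u_next
    return u

dt = 0.005                                   # must stay below the critical time step
t = np.arange(0, 20, dt)
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.1 * t)   # stand-in ground motion
u = central_difference(m=2.0e5, c=1.0e4, k0=8.0e6, k1=1.0e6, a=200.0, ag=ag, dt=dt)
print("peak isolator displacement [m]:", np.abs(u).max())
```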

Keywords: base-isolated structures, earthquake engineering, mixed time integration, nonlinear exponential model

Procedia PDF Downloads 277
1399 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (i.e., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and both are typically produced by high-temperature combustion processes (i.e., car engines or thermal power stations). The same holds for industrial plants. What has to be investigated – and this is the topic of this paper – is whether or not there really is a correlation between noise pollution and air pollution (taking NO₂ into account) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise App will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used, an Arduino board to which the sensors and all the other components are plugged. After assembling the sensors, they will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time to have input for both week and weekend days; in this way it will be possible to see how the situation changes during the week. The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained with the sensors. To do so, the data will be converted to fit a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help to choose the right mitigation solutions to be applied in the analysed area, because it will make it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper will describe in detail the methodology and the technical solution adopted for the realization of the sensors, the data collection, noise and pollution mapping, and the analysis.
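The comparison step described above could look like the following minimal sketch, in which paired readings are rescaled to a 0-100% scale and correlated for weekdays and weekends separately; the file name, column names, and the use of Pearson correlation are assumptions, not details taken from the study.

```python
import pandas as pd

# Hypothetical log of paired noise/NO2 readings from one coupled sensor station
df = pd.read_csv("mestre_sensors.csv", parse_dates=["timestamp"])

def to_percent(series):
    """Rescale raw sensor values to 0-100% of their observed range."""
    return 100.0 * (series - series.min()) / (series.max() - series.min())

df["noise_pct"] = to_percent(df["noise_db"])
df["no2_pct"] = to_percent(df["no2_ugm3"])

# Pearson correlation, overall and split into weekday / weekend readings
print("overall r:", round(df["noise_pct"].corr(df["no2_pct"]), 2))
for weekend, group in df.groupby(df["timestamp"].dt.dayofweek >= 5):
    label = "weekend" if weekend else "weekday"
    print(label, "r:", round(group["noise_pct"].corr(group["no2_pct"]), 2))
```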

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 209
1398 The Situation in Afghanistan as a Step Forward in Putting an End to Impunity

Authors: Jelena Radmanovic

Abstract:

On 5 March 2020, the International Criminal Court decided to authorize the investigation into the crimes allegedly committed on the territory of Afghanistan after 1 May 2003. This determination has raised several controversies, including the recently imposed sanctions by the United States, furthering the United States' long-standing rejection of the authority of the International Criminal Court. The purpose of this research is to address the said investigation in light of its importance for the prevention of impunity in cases where the perpetrators are nationals of Non-Party States to the Rome Statute. The difficulties that the International Criminal Court has been facing in establishing its jurisdiction in instances where an involved state is not a Party to the Rome Statute have become the most significant stumbling block undermining the importance, integrity, and influence of the Court. The Situation in Afghanistan raises even further concern, bearing in mind that the Prosecutor's Request for authorization of an investigation pursuant to article 15, of 20 November 2017, had initially been rejected with the 'interests of justice' as the applied rationale. The first method used in the present research is the description of the actual events surrounding the aforementioned decisions and the reactions that followed in the international community, while with the second method – conceptual analysis – the research addresses the decisions pertaining to the International Criminal Court's jurisdiction and attempts to present the Decision of 5 March 2020 as an example of good practice and a precedent that should be followed in all similar situations. The research attempts to parse the reasoning used by the International Criminal Court, giving greater attention to the latter decision, which authorized the investigation, and to the points raised by the officials of the United States. It is a finding of this research that the International Criminal Court, together with other similar judicial instances (the Nuremberg and Tokyo Tribunals, the International Criminal Tribunal for the former Yugoslavia, the International Criminal Tribunal for Rwanda), has presented the world with the possibility of non-impunity, attempting to prosecute those responsible for the gravest crimes known to humanity and showing that such persons should not enjoy the benefits of their immunities, with its focus primarily on the victims of such crimes. Whilst this is an issue that will most certainly be addressed further in the future, with the situations that will be brought before the International Criminal Court, the present research attempts to point to the significance of the Situation in Afghanistan, of the International Criminal Court as such, and of international criminal justice as a whole, for the purpose of putting an end to impunity.

Keywords: Afghanistan, impunity, international criminal court, sanctions, United States

Procedia PDF Downloads 121
1397 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems

Authors: Georgi Y. Georgiev, Matthew Brouillet

Abstract:

This research applies the power of agent-based modeling to a pivotal question at the intersection of biology, computer science, physics, and complex systems theory concerning self-organization processes in open, complex, non-equilibrium thermodynamic systems. Central to this investigation is the principle of Maximum Entropy Production (MEP). This principle suggests that such systems evolve toward states that optimize entropy production, leading to the formation of structured environments. It is hypothesized that, guided by the least action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, it is predicted that there will be an increase in system information, as more information is required to describe the developing structure. To test this, an agent-based model is developed simulating an ant colony's formation of a path between a food source and its nest. Utilizing the NetLogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and distribution of the ants within the simulated environment. External entropy production is also evaluated for information increase and efficiency improvements in the system's action. Simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants is observed. Notably, no path formation occurred with fewer than five ants, whereas clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point. Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling the estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between the internal entropy decrease rate and the external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors such as changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system. As such, it provides a foundation for further in-depth exploration of the complex behaviors of these systems and contributes to the development of more efficient self-organizing systems across various scientific fields.
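A minimal, self-contained sketch of the entropy calculation described above might look as follows; the grid size, cell size, and synthetic ant positions are illustrative assumptions and do not reproduce the NetLogo model itself.

```python
import numpy as np

def spatial_entropy(xs, ys, world=50, cell=5):
    """Shannon entropy (nats) of ant counts binned into cell x cell patches."""
    bins = world // cell
    counts, _, _ = np.histogram2d(xs, ys, bins=bins, range=[[0, world], [0, world]])
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Disordered start: ants scattered uniformly; ordered end: ants concentrated along a path
rng = np.random.default_rng(0)
scattered = spatial_entropy(rng.uniform(0, 50, 200), rng.uniform(0, 50, 200))
on_path = spatial_entropy(np.linspace(0, 50, 200),
                          np.full(200, 25.0) + rng.normal(0, 1, 200))
print(f"entropy scattered: {scattered:.2f} nats, on path: {on_path:.2f} nats")
```

Tracking this quantity per simulation tick gives the entropy-versus-time curve from which the inflection point and self-organization rate can be estimated.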

Keywords: complexity, self-organization, agent based modelling, efficiency

Procedia PDF Downloads 61
1396 An Investigation of the Structural and Microstructural Properties of Zn1-xCoxO Thin Films Applied as Gas Sensors

Authors: Ariadne C. Catto, Luis F. da Silva, Khalifa Aguir, Valmor Roberto Mastelaro

Abstract:

Zinc oxide (ZnO), pure or doped, is one of the most promising metal oxide semiconductors for gas sensing applications due to its well-known high surface-to-volume ratio and surface conductivity. It has been shown that ZnO is an excellent gas-sensing material for different gases such as CO, O2, NO2, and ethanol. In this context, pure and doped ZnO exhibiting different morphologies and a high surface/volume ratio can be a good option considering the limitations of current commercial sensors. Different studies have shown that doping ZnO with metal ions (e.g. Co, Fe, Mn) enhances its gas sensing properties. Motivated by these considerations, the aim of this study was to investigate the role of Co ions in the structural, morphological, and gas sensing properties of nanostructured ZnO samples. ZnO and Zn1-xCoxO (0 < x < 5 wt%) thin films were obtained via the polymeric precursor method. The sensitivity, selectivity, response time, and long-term stability were investigated when the samples were exposed to different concentrations of ozone (O3) at different working temperatures. The gas sensing properties were probed by electrical resistance measurements. The long- and short-range order structure around the Zn and Co atoms was investigated by X-ray diffraction and X-ray absorption spectroscopy. X-ray photoelectron spectroscopy measurements were performed in order to identify the elements present on the film surface as well as to determine the sample composition. Microstructural characteristics of the films were analyzed by a field-emission scanning electron microscope (FE-SEM). The Zn1-xCoxO XRD patterns were indexed to the wurtzite ZnO structure, and no second phase was observed even at the highest cobalt content. Co K-edge XANES spectra revealed the predominance of Co2+ ions. XPS characterization revealed that the Co-doped ZnO samples possessed a higher percentage of oxygen vacancies than the ZnO samples, which also contributed to their excellent gas sensing performance. Gas sensing measurements showed that the ZnO and Co-doped ZnO samples exhibit a good gas sensing performance with respect to reproducibility and a fast response time (around 10 s). Furthermore, the Co addition contributed to reducing the working temperature for ozone detection and improved the selective sensing properties.
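For readers unfamiliar with how such figures of merit are extracted, the sketch below computes a conventional response ratio (Rg/Ra) and a 90% response time from a resistance-versus-time trace; the synthetic data, the 90% criterion, and the function names are assumptions and are not taken from the paper's measurements.

```python
import numpy as np

def sensor_response(t, R, t_gas_on):
    """Response Rg/Ra for an n-type oxide exposed to an oxidizing gas such as ozone."""
    Ra = np.mean(R[t < t_gas_on])          # baseline resistance in air
    Rg = np.max(R[t >= t_gas_on])          # resistance under ozone exposure
    # response time: time to reach 90% of the total resistance change
    target = Ra + 0.9 * (Rg - Ra)
    t_resp = t[(t >= t_gas_on) & (R >= target)][0] - t_gas_on
    return Rg / Ra, t_resp

# Synthetic example: exponential resistance rise after the gas is switched on at t = 60 s
t = np.arange(0.0, 300.0, 1.0)
R = 1e5 + 4e5 * (1 - np.exp(-np.clip(t - 60.0, 0, None) / 5.0))
S, t90 = sensor_response(t, R, t_gas_on=60.0)
print(f"response Rg/Ra = {S:.1f}, response time ~ {t90:.0f} s")
```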

Keywords: cobalt-doped ZnO, nanostructured, ozone gas sensor, polymeric precursor method

Procedia PDF Downloads 237
1395 Screening Tools and Its Accuracy for Common Soccer Injuries: A Systematic Review

Authors: R. Christopher, C. Brandt, N. Damons

Abstract:

Background: The sequence of prevention model states that through constant assessment of injury, injury mechanisms and risk factors are identified, highlighting that collecting and recording data is a core approach to preventing injuries. Several screening tools are available for use in the clinical setting. These screening techniques have only recently received research attention; hence the data regarding their applicability, validity, and reliability are scarce, inconsistent, and controversial. Several systematic reviews related to common soccer injuries have been conducted; however, none of them addressed the screening tools for common soccer injuries. Objectives: The purpose of this study was to conduct a review of screening tools and their accuracy for common injuries in soccer. Methods: A systematic scoping review was performed based on the Joanna Briggs Institute procedure for conducting systematic reviews. Databases such as SPORTDiscus, CINAHL, Medline, Science Direct, PubMed, and grey literature were used to access suitable studies. Some of the key search terms included: injury screening, screening, screening tool accuracy, injury prevalence, injury prediction, accuracy, validity, specificity, reliability, sensitivity. All types of English studies dating back to the year 2000 were included. Two blinded, independent reviewers selected and appraised articles on a 9-point scale for inclusion as well as for risk of bias with the ACROBAT-NRSI tool. Data were extracted and summarized in tables. Plot data analysis was done, and sensitivity and specificity were analyzed with their respective 95% confidence intervals. The I² statistic was used to determine the proportion of variation across studies. Results: The initial search yielded 95 studies, of which 21 were duplicates and 54 were excluded. A total of 10 observational studies were included for the analysis: 3 studies were analysed quantitatively, while the remaining 7 were analysed qualitatively. Seven studies were graded as low and three studies as high risk of bias. Only studies of high methodological quality (score > 9) were included for analysis. The pooled studies investigated tools such as the Functional Movement Screening (FMS™), the Landing Error Scoring System (LESS), the Tuck Jump Assessment, the Soccer Injury Movement Screening (SIMS), and the conventional hamstrings-to-quadriceps ratio. The accuracy of the screening tools showed high reliability, sensitivity, and specificity (calculated as ICC 0.68, 95% CI: 0.52-0.84; and 0.64, 95% CI: 0.61-0.66, respectively; I² = 13.2%, P = 0.316). Conclusion: Based on the pooled results from the included studies, the FMS™ has good inter-rater and intra-rater reliability. The FMS™ is a screening tool capable of screening for common soccer injuries, and individual FMS™ scores are a better determinant of performance than the overall FMS™ score. Although a meta-analysis could not be done for all the included screening tools, qualitative analysis also indicated good sensitivity and specificity of the individual tools. Higher levels of evidence are, however, needed for implementation in evidence-based practice.
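The pooled statistics quoted above follow standard meta-analytic formulas; the sketch below shows one way a fixed-effect inverse-variance pooled estimate, its 95% confidence interval, and the I² heterogeneity statistic can be computed, using hypothetical per-study values rather than the review's data.

```python
import numpy as np
from scipy import stats

def pool(estimates, std_errors):
    """Fixed-effect inverse-variance pooling with Cochran's Q and I²."""
    est, se = np.asarray(estimates), np.asarray(std_errors)
    w = 1.0 / se**2
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    q = np.sum(w * (est - pooled) ** 2)          # Cochran's Q
    dof = len(est) - 1
    i2 = max(0.0, (q - dof) / q) * 100 if q > 0 else 0.0
    p_q = 1.0 - stats.chi2.cdf(q, dof)
    return pooled, ci, i2, p_q

# Hypothetical per-study sensitivities and their standard errors
pooled, ci, i2, p = pool([0.62, 0.66, 0.63], [0.02, 0.03, 0.025])
print(f"pooled = {pooled:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, I² = {i2:.1f}%, P = {p:.3f}")
```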

Keywords: accuracy, screening tools, sensitivity, soccer injuries, specificity

Procedia PDF Downloads 174
1394 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods have been suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables which depend on ground behavior are required for geotechnical analyses. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the Random Set method, with a relatively small number of simulations compared to fully probabilistic methods, smooth extremes of the system responses are obtained. Therefore, the random set approach has been proposed for reliability analysis of geotechnical problems. In the present study, the application of the random set method to the reliability analysis of deep excavations is investigated through three deep excavation projects which were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined considering the probability assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered as the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e. lower and upper bounds) of the system response obtained from the deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models has been compared to the in situ measurements, and good agreement is observed. The comparison also showed that the Random Set Finite Element Method is applicable to estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
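The combination step at the heart of the method can be illustrated with the following minimal sketch, in which two input variables each have two expected ranges with probability assignments, every corner combination is passed through a placeholder response function standing in for the finite element run, and Belief/Plausibility bounds on the displacement are accumulated; all numbers and the response function are assumptions for illustration only.

```python
from itertools import product

# Focal elements: (interval, probability assignment) for each input variable
friction_angle = [((28.0, 32.0), 0.6), ((30.0, 36.0), 0.4)]   # degrees
soil_modulus = [((40.0, 60.0), 0.5), ((50.0, 80.0), 0.5)]     # MPa

def model(phi, E):
    """Placeholder for the finite element run: horizontal top displacement [mm]."""
    return 1200.0 / E + (40.0 - phi)

focal_out = []
for (phi_int, m_phi), (E_int, m_E) in product(friction_angle, soil_modulus):
    # Monotone response assumed, so corner combinations bound the output interval
    corners = [model(phi, E) for phi in phi_int for E in E_int]
    focal_out.append(((min(corners), max(corners)), m_phi * m_E))

def belief_plausibility(threshold):
    bel = sum(m for (lo, hi), m in focal_out if hi <= threshold)  # lower bound of P(Z <= d)
    pl = sum(m for (lo, hi), m in focal_out if lo <= threshold)   # upper bound of P(Z <= d)
    return bel, pl

d = 35.0  # threshold displacement [mm]
bel, pl = belief_plausibility(d)
print(f"P(displacement <= {d} mm) lies in [{bel:.2f}, {pl:.2f}]")
```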

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 267
1393 Understanding the Nature of Blood Pressure as Metabolic Syndrome Component in Children

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Pediatric overweight and obesity need attention because they may lead to morbid obesity, which may develop into metabolic syndrome (MetS). Criteria used for the definition of adult MetS cannot be applied to pediatric MetS. The dynamic physiological changes that occur during childhood and adolescence require the evaluation of each parameter based upon age intervals. The aim of this study is to investigate the distribution of blood pressure (BP) values within diverse pediatric age intervals and the possible use and clinical utility of a recently introduced Diagnostic Obesity Notation Model Assessment Tension (DONMA tense) Index derived from systolic BP (SBP) and diastolic BP (DBP) as [(SBP+DBP)/200]. Such a formula may enable a more integrative picture for the assessment of pediatric obesity and MetS due to the use of both SBP and DBP. A total of 554 children aged 6-16 years participated in the study; the study population was divided into two groups based upon their ages. The first group comprises 280 cases aged 6-10 years (72-120 months), while those aged 10-16 years (121-192 months) constituted the second group. The values of SBP, DBP, and the formula covering both were evaluated. Each group was divided into seven subgroups with varying degrees of obesity and MetS criteria. Two clinical definitions of MetS have been described. These groups were MetS3 (children with three major components) and MetS2 (children with two major components). The other groups were morbid obese (MO), obese (OB), overweight (OW), normal (N), and underweight (UW). The children were included in the groups according to the age- and sex-based body mass index (BMI) percentile values tabulated by WHO. Data were evaluated by SPSS version 16 with p < 0.05 as the level of statistical significance. The tension index was evaluated in the groups above and below 10 years of age. This index differed significantly between the N and MetS groups as well as between the OW and MetS groups (p = 0.001) above 120 months. However, below 120 months, significant differences existed between MetS3 and MetS2 (p = 0.003) as well as between MetS3 and MO (p = 0.001). In comparison with the SBP and DBP values, the tension index values enabled a more clear-cut separation between the groups. The tension index was capable of discriminating MetS3 from MetS2 in the group composed of children aged 6-10 years. This was not possible in the older group of children. The index was thus more informative for the first group. This study also confirmed that the 130 mm Hg and 85 mm Hg cut-off points for SBP and DBP, respectively, are too high to serve as MetS criteria in children, because the mean value of the tension index was calculated as 1.00 among MetS children. This finding shows that much lower cut-off points must be set for SBP and DBP for the diagnosis of pediatric MetS, especially for children under 10 years of age. The index may be recommended to discriminate between MO, MetS2, and MetS3 in the 6-10 years age group, whose MetS diagnosis is problematic.
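A minimal sketch of the index computation, assuming the formula is (SBP + DBP) / 200 as implied by the reported mean of about 1.00 in MetS children, is given below; the example readings are hypothetical and are not study data.

```python
def tension_index(sbp, dbp):
    """DONMA tension index from systolic and diastolic pressures in mm Hg (assumed form)."""
    return (sbp + dbp) / 200.0

# At the adult-derived MetS cut-offs (130/85 mm Hg) the index equals 1.075,
# above the mean of ~1.00 reported for MetS children, which is why lower
# pediatric cut-offs are argued for. The child readings below are hypothetical.
readings = {"adult cut-off": (130, 85), "example child A": (112, 70), "example child B": (124, 80)}
for label, (sbp, dbp) in readings.items():
    print(f"{label}: index = {tension_index(sbp, dbp):.3f}")
```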

Keywords: blood pressure, children, index, metabolic syndrome, obesity

Procedia PDF Downloads 114
1392 Evaluating the Ability to Cycle in Cities Using Geographic Information Systems Tools: The Case Study of Greek Modern Cities

Authors: Christos Karolemeas, Avgi Vassi, Georgia Christodoulopoulou

Abstract:

Although over the past decades planning a cycle network has become an inseparable part of most transportation plans, there is still a lot of room for improvement in the way planning is done, in order to create safe and direct cycling networks that incorporate the parameters that positively influence one's decision to cycle. The aim of this article is to study, evaluate, and visualize the bikeability of cities. The term is often used to mean 'the ability of a person to bike'; this study, however, adopts it in the sense of 'the ability of the urban landscape to be biked'. The methodology used included assessing cities' accessibility by cycling, based on the international literature and corresponding walkability methods, and the creation of a 'bikeability index'. Initially, a literature review was made to identify the factors that positively affect the use of bicycle infrastructure. Those factors were used in order to create the spatial index and quantitatively compare the city networks. Finally, the bikeability index was applied in two case studies: two Greek municipalities that, although similar in terms of land uses, population density, and traffic congestion, are totally different in terms of geomorphology. The factors suggested by the international literature were (a) safety, (b) directness, (c) comfort, and (d) the quality of the urban environment. Those factors were quantified through the following parameters: slope, junction density, traffic density, traffic speed, natural environment, built environment, activities coverage, centrality, and accessibility to public transport stations. Each road section was graded for the above-mentioned parameters, and the overall grade shows the level of bicycle accessibility (low, medium, high). Each parameter, as well as the overall accessibility levels, was analyzed and visualized through Geographic Information Systems. This paper presents the bikeability index, its results, the problems that have arisen, and the conclusions from its implementation through a Strengths-Weaknesses-Opportunities-Threats analysis. The purpose of this index is to make it easy for researchers, practitioners, politicians, and stakeholders to quantify, visualize, and understand which parts of the urban fabric are suitable for cycling.
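As an illustration of the grading and aggregation step, the sketch below combines hypothetical per-segment parameter scores into a composite level outside any GIS environment; the weights, scoring ranges, and sample segments are assumptions, not the values used in the case studies.

```python
import pandas as pd

# Hypothetical attributes for three road segments
segments = pd.DataFrame({
    "segment_id": [1, 2, 3],
    "slope_pct": [1.0, 6.5, 3.0],          # average longitudinal slope
    "traffic_speed_kmh": [30, 50, 40],
    "junction_density": [2, 8, 5],          # junctions per km
    "green_share": [0.4, 0.1, 0.2],         # share of natural environment
    "pt_access": [1.0, 0.2, 0.6],           # accessibility to transit stops, 0-1
})

def score(series, best, worst):
    """Linear 0-1 score: 'best' value maps to 1, 'worst' to 0."""
    return ((series - worst) / (best - worst)).clip(0, 1)

# Assumed weights summing to 1
weights = {"slope": 0.25, "speed": 0.25, "junctions": 0.2, "green": 0.15, "pt": 0.15}
composite = (weights["slope"] * score(segments["slope_pct"], best=0, worst=8)
             + weights["speed"] * score(segments["traffic_speed_kmh"], best=30, worst=70)
             + weights["junctions"] * score(segments["junction_density"], best=0, worst=12)
             + weights["green"] * segments["green_share"]
             + weights["pt"] * segments["pt_access"])

# Bin the composite score into the three accessibility levels
segments["bikeability"] = pd.cut(composite, bins=[0, 0.4, 0.7, 1.0],
                                 labels=["low", "medium", "high"])
print(segments[["segment_id", "bikeability"]])
```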

Keywords: accessibility, cycling, green spaces, spatial data, urban environment

Procedia PDF Downloads 104
1391 Eco-Friendly Silicone/Graphene-Based Nanocomposites as Superhydrophobic Antifouling Coatings

Authors: Mohamed S. Selim, Nesreen A. Fatthallah, Shimaa A. Higazy, Hekmat R. Madian, Sherif A. El-Safty, Mohamed A. Shenashen

Abstract:

After the 2003 prohibition on employing TBT-based antifouling coatings, polysiloxane antifouling nano-coatings have gained popularity as environmentally friendly and cost-effective replacements. A series of non-toxic polydimethylsiloxane (PDMS) nanocomposites filled with graphene oxide (GO) nanosheets decorated with magnetite nanospheres (GO-Fe₃O₄ nanospheres) was developed and cured via a catalytic hydrosilation method. Various GO-Fe₃O₄ hybrid concentrations were mixed with the silicone resin via a solution casting technique to evaluate the structure–property relationship. To generate the GO nanosheets, a modified Hummers method was applied. A simple co-precipitation method was used to make spherical magnetite particles under inert nitrogen. The hybrid GO-Fe₃O₄ composite fillers were prepared by a simple ultrasonication method. A superhydrophobic PDMS/GO-Fe₃O₄ nanocomposite surface with micro/nano-roughness, reduced surface free energy (SFE), and high fouling release (FR) efficiency was achieved. The physical, mechanical, and anticorrosive features of the virgin and GO-Fe₃O₄-filled nanocomposites were investigated. The synergistic effects of the well-dispersed GO-Fe₃O₄ hybrid on the water repellency and surface topological roughness of the PDMS/GO-Fe₃O₄ nanopaints were extensively studied. The addition of the GO-Fe₃O₄ hybrid fillers up to 1 wt.% could increase the coating's water contact angle (158°±2°), minimize its SFE to 12.06 mN/m, develop outstanding micro/nano-roughness, and improve its bulk mechanical and anticorrosion properties. Several microorganisms were employed to examine the fouling resistance of the coated specimens for 1 month. The silicone coating filled with 1 wt.% GO-Fe₃O₄ nanofiller showed the lowest biodegradability% among all the tested microorganisms, whereas the coating with 5 wt.% GO-Fe₃O₄ nanofiller possessed the highest biodegradability% with all the microorganisms. We successfully developed a non-toxic and low-cost nanostructured FR composite coating with high antifouling resistance, a reproducible superhydrophobic character, and enhanced service time for maritime navigation.

Keywords: silicone antifouling, environmentally friendly, nanocomposites, nanofillers, fouling repellency, hydrophobicity

Procedia PDF Downloads 102