Search results for: modeling strategy
5700 Nutriscience Project: A Web-Based Intervention to Improve Nutritional Literacy among Families and Educators of Pre-School Children
Authors: R. Barros, J. Azevedo, P. Padrão, M. Gregório, I. Pádua, C. Almeida, C. Rodrigues, P. Fontes, A. Coelho
Abstract:
Recent evidence shows a positive association between nutritional literacy and healthy eating. Traditional nutrition education strategies for childhood obesity prevention have shown weak effects. The Nutriscience project aims to create and evaluate an innovative and multidisciplinary strategy for delivering effective and accessible nutritional information to children, their families, and educators. Nutriscience is a one-year prospective follow-up evaluation study including pre-school children (3-5 y) who attend the national schools’ network (29). The project is structured around a web-based intervention, using an online interactive platform, and focuses on increasing fruit and vegetable consumption and reducing sugar and salt intake. The platform acts as a social network where educational materials, games, and nutritional challenges are proposed in a gamification approach that promotes family and community social ties. A nutrition Massive Open Online Course is being developed for educators, and a national healthy culinary contest will be promoted on a TV channel. A parental self-reported questionnaire assessing sociodemographic characteristics and nutritional literacy (knowledge, attitudes, skills) is administered at baseline and at the end of the intervention. We expect that the nutritional literacy results from this intervention strategy will provide important information about best practices for health interventions with kindergarten families. This intervention program, using a digital interactive platform, could be an educational tool easily adapted and disseminated for childhood obesity prevention.
Keywords: childhood obesity, educational tool, nutritional literacy, web-based intervention
Procedia PDF Downloads 334
5699 A New Multi-Target, Multi-Agent Search and Rescue Path Planning Approach
Authors: Jean Berger, Nassirou Lo, Martin Noel
Abstract:
Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target, multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling discrete agent actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to a substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangian relaxation of the integrality constraints. Should a target be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
Keywords: search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization
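To make the objective concrete, here is a minimal brute-force sketch of cumulative detection probability maximization for a single agent on a tiny grid. The grid size, prior distribution, and glimpse (per-look detection) probability are illustrative assumptions; the paper's actual method is a multi-agent MIP solved with CPLEX, not this enumeration.

```python
import itertools

# Candidate moves: right, left, down, up, stay.
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

def best_search_path(prior, glimpse, start, horizon, n=3):
    """Enumerate all move sequences on an n x n grid and return the
    (cumulative detection probability, path) of the best one."""
    best_prob, best_path = 0.0, [start]
    for seq in itertools.product(MOVES, repeat=horizon):
        pos, path = start, [start]
        remaining = dict(prior)   # probability mass not yet detected
        detected = 0.0
        feasible = True
        for dx, dy in seq:
            pos = (pos[0] + dx, pos[1] + dy)
            if not (0 <= pos[0] < n and 0 <= pos[1] < n):
                feasible = False
                break
            path.append(pos)
            # A look at a cell detects a 'glimpse' fraction of its mass.
            detected += remaining.get(pos, 0.0) * glimpse
            if pos in remaining:
                remaining[pos] *= 1.0 - glimpse
        if feasible and detected > best_prob:
            best_prob, best_path = detected, path
    return best_prob, best_path
```

The exponential enumeration (5^horizon sequences) is exactly what the MIP formulation with its network representation avoids at practical problem sizes.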
Procedia PDF Downloads 372
5698 Effect of Dissolved Oxygen Concentration on Iron Dissolution by Liquid Sodium
Authors: Sami Meddeb, M. L Giorgi, J. L. Courouau
Abstract:
This work presents the progress of studies aiming to guarantee the lifetime of 316L(N) steel in a sodium-cooled fast reactor by determining the elementary corrosion mechanism, which is akin to a dissolution accelerated by dissolved oxygen. The mechanism involving iron, the main element of the steel, is studied in particular detail, from the viewpoint of the data available in the literature and the modeling of the various hypothesized mechanisms. Experiments performed in the CORRONa facility at controlled temperature and dissolved oxygen content are used to test both the literature data and the hypotheses. Current tests, performed at various temperatures and oxygen contents, focus on specifying the chemical reaction at play and determining its free enthalpy as well as its kinetic rate constants. A specific test configuration allows measuring the reaction kinetics and the chemical equilibrium state in the same test. In the current state of progress of these tests, the dissolution of iron accelerated by dissolved oxygen appears to be directly related to a chemical complexation reaction forming a mixed iron-sodium oxide (Na-Fe-O), a compound that is soluble in liquid sodium. The results obtained demonstrate the presence of this corrosion product in solution, whose formation kinetics is the limiting step under the conditions of the test. This compound, the object of hypotheses dating back more than 50 years, is predominant in solution compared to atomic iron, presumably even at low oxygen concentrations, and cannot be neglected in long-term corrosion modeling of any heat transfer system.
Keywords: corrosion, sodium fast reactors, iron, oxygen
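A test configuration that reads both kinetics and equilibrium from one run is naturally described by first-order kinetics toward equilibrium. The sketch below assumes a simple dc/dt = k(c_eq − c) model with illustrative values for c_eq and k; it is not the study's fitted mechanism, only the functional form such a measurement would exploit.

```python
import math

def concentration(t, c_eq, k):
    """Dissolved concentration at time t for dc/dt = k*(c_eq - c), c(0)=0:
    the solution rises exponentially and plateaus at the equilibrium c_eq."""
    return c_eq * (1.0 - math.exp(-k * t))

def rate_constant(t, c, c_eq):
    """Invert the model: recover k from a single (t, c) sample once the
    equilibrium level c_eq has been read off the plateau of the same test."""
    return -math.log(1.0 - c / c_eq) / t
```

Because the late-time plateau fixes c_eq while the early-time slope fixes k, one well-instrumented test constrains both quantities, which is the design advantage described above.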
Procedia PDF Downloads 180
5697 Optimization of Reaction Parameters' Influences on Production of Bio-Oil from Fast Pyrolysis of Oil Palm Empty Fruit Bunch Biomass in a Fluidized Bed Reactor
Authors: Chayanoot Sangwichien, Taweesak Reungpeerakul, Kyaw Thu
Abstract:
Oil palm mills in Southern Thailand produce a large amount of solid biomass waste. Lignocellulosic biomass is the main feedstock for the production of biofuel, which can be blended with or used as an alternative to fossil fuels. Biomass is composed of three main constituents: cellulose, hemicellulose, and lignin. Thermochemical conversion processes are applied to produce biofuel from biomass, and pyrolysis is one of the best thermochemical routes for converting biomass into pyrolytic products (bio-oil, gas, and char). Operating parameters play an important role in optimizing the product yields from fast pyrolysis of biomass. The present work concerns the modeling of reaction kinetics for fast pyrolysis of empty fruit bunch (EFB) in a fluidized bed reactor. A global kinetic model is used to predict the product yields. The product yields of EFB pyrolysis are mainly affected by the reaction temperature and the vapor residence time, which are considered here in the range of 450-500˚C and up to 2 s, respectively. The optimum simulated bio-oil yield of 53 wt.% was obtained at a reaction temperature of 450˚C with a vapor residence time of 2 s, and at 500˚C with 1 s. The simulated data are in good agreement with the reported experimental data and can be applied to the design of experimental work on the fast pyrolysis of biomass.
Keywords: kinetics, empty fruit bunch, fast pyrolysis, modeling
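The structure of a global kinetic model can be sketched as three competing first-order Arrhenius reactions converting biomass into gas, bio-oil, and char, integrated forward in time. The pre-exponential factors and activation energies below are illustrative placeholders, not the fitted EFB parameters from the study.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def pyrolysis_yields(T_kelvin, t_end, dt=1e-3,
                     A=(1.3e8, 2.0e8, 1.1e7),      # 1/s (placeholders)
                     Ea=(140e3, 133e3, 121e3)):    # J/mol (placeholders)
    """Explicit-Euler integration of biomass -> (gas, oil, char) via three
    competing first-order reactions at a fixed bed temperature."""
    k = [a * math.exp(-e / (R * T_kelvin)) for a, e in zip(A, Ea)]
    biomass, products = 1.0, [0.0, 0.0, 0.0]  # fractions: gas, oil, char
    t = 0.0
    while t < t_end:
        d = biomass * dt
        for i in range(3):
            products[i] += k[i] * d      # each channel takes its share
        biomass -= sum(k) * d            # total biomass consumption
        t += dt
    return biomass, products
```

With such a scheme, the oil fraction at a given residence time follows directly from the ratio of the rate constants, which is how temperature shifts the optimum yield.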
Procedia PDF Downloads 218
5696 Improving Self-Administered Medication Adherence for Older Adults: A Systematic Review
Authors: Mathumalar Loganathan, Lina Syazana, Bryony Dean Franklin
Abstract:
Background: The therapeutic benefit of self-administered medication for long-term use is limited by an average 50% non-adherence rate. Patient forgetfulness is a common factor in unintentional non-adherence. With a growing ageing population, strategies to improve self-administration of medication are essential. Our aim was to systematically review the effects of interventions to optimise self-administration of medication. Method: Databases searched were MEDLINE, EMBASE, PsycINFO, and CINAHL, from 1980 to 31 October 2013. Search terms included ‘self-administration’, ‘self-care’, ‘medication adherence’, and ‘intervention’. Two independent reviewers undertook screening and methodological quality assessment, using the Downs and Black rating scale. Results: The search strategy retrieved 6 studies that met the inclusion and exclusion criteria. Three intervention strategies were identified: a self-administration of medication programme (SAMP), nursing education, and medication packaging (pill calendar). Nursing education programmes focused on improving patients’ behavioural self-management of medication; this was the most studied area, with three studies highlighting an improvement in self-administration of medication. Mixed results were found for SAMP. Medication packaging (pill calendar) was evaluated in one study, showing a significant improvement in self-administration of medication. A meta-analysis could not be performed due to heterogeneity in the outcome measures. Conclusion: Results are mixed, and no single interventional strategy has proved effective. Nevertheless, the self-administration of medication programme seems to show the most promise. A multi-faceted approach and clearer policy guidelines are likely to be required to improve prescribing for these vulnerable patients.
Keywords: self-administered medication, intervention, prescribing, older patients
Procedia PDF Downloads 324
5695 Electrochemical APEX for Genotyping MYH7 Gene: A Low Cost Strategy for Minisequencing of Disease Causing Mutations
Authors: Ahmed M. Debela, Mayreli Ortiz , Ciara K. O´Sullivan
Abstract:
The completion of the Human Genome Project (HGP) has paved the way for mapping the diversity of the overall genome sequence, which helps to understand the genetic causes of inherited diseases and susceptibility to drugs or environmental toxins. Arrayed primer extension (APEX) is a microarray-based minisequencing strategy for screening disease-causing mutations. It is derived from Sanger DNA sequencing and uses fluorescently labelled dideoxynucleotides (ddNTPs) for termination of a growing DNA strand from a primer whose 3´ end is designed immediately upstream of a site where a single nucleotide polymorphism (SNP) occurs. The use of DNA polymerase offers very high accuracy and specificity to APEX, which in turn makes it a method of choice for multiplex SNP detection. Coupling the high specificity of this method with the high sensitivity, low cost, and compatibility with miniaturization of electrochemical techniques offers an excellent platform for the detection of mutations as well as the sequencing of DNA templates. We are developing an electrochemical APEX for the analysis of SNPs found in the MYH7 gene for a group of cardiomyopathy patients. ddNTPs were labelled with four different redox-active compounds with four distinct potentials. Thiolated oligonucleotide probes were immobilised on gold and glassy carbon substrates, followed by hybridisation with complementary target DNA just adjacent to the base to be extended by the polymerase. Electrochemical interrogation was performed after the incorporation of the redox-labelled dideoxynucleotide. The work involved the synthesis and characterisation of the redox-labelled ddNTPs, the optimisation and characterisation of surface functionalisation strategies, and the nucleotide incorporation assays.
Keywords: array based primer extension, labelled ddNTPs, electrochemical, mutations
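The base-calling logic behind APEX can be sketched in a few lines: the polymerase adds exactly one ddNTP complementary to the template base at the SNP site, so reading the redox label on the incorporated ddNTP identifies the genotype. The label names below are hypothetical placeholders, not the actual redox compounds used in the study.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

# Hypothetical assignment of one distinct redox label per ddNTP base.
REDOX_LABEL = {"A": "label-1 (lowest potential)", "C": "label-2",
               "G": "label-3", "T": "label-4 (highest potential)"}

def call_snp(template, snp_index):
    """Return the single incorporated ddNTP and its redox label for the base
    at snp_index on the template strand, assuming the primer's 3' end anneals
    immediately upstream of that site."""
    incorporated = COMPLEMENT[template[snp_index]]
    return incorporated, REDOX_LABEL[incorporated]
```

Because each of the four labels sits at a distinct potential, a single voltammetric sweep distinguishes which base was incorporated, which is what makes the approach multiplexable.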
Procedia PDF Downloads 247
5694 Impact of Covid-19 on Digital Transformation
Authors: Tebogo Sethibe, Jabulile Mabuza
Abstract:
The COVID-19 pandemic has been commonly referred to as a ‘black swan event’; it has changed the world, from how people live, learn, and work to how they socialise. It is believed that the pandemic has fast-tracked the adoption of technology in many organisations to ensure business continuity and business sustainability; broadly speaking, the pandemic has fast-tracked digital transformation (DT) in different organisations. This paper aims to study the impact of the COVID-19 pandemic on DT in organisations in South Africa by focusing on the changes in IT capabilities in the DT framework. The research design is qualitative. Data were collected through semi-structured interviews with information and communication technology (ICT) leaders representing different organisations in South Africa and were analysed using the thematic analysis process. The results from the study show that, in terms of ICT in the organisation, the pandemic had a direct and positive impact on ICT strategy and ICT operations. In terms of IT capability transformation, the pandemic resulted in the optimisation and expansion of existing IT capabilities in the organisation and the building of new IT capabilities to meet emerging business needs. In terms of the focus of activities during the pandemic, organisations seem to be split between a primary focus on ‘digital IT’ and a primary focus on ‘traditional IT’. Overall, the findings of the study show that the pandemic had a positive and significant impact on DT in organisations. However, a definitive conclusion on this would require expanding the scope of the research to all the components of a comprehensive DT framework. This study is significant because it is one of the first studies to investigate the impact of the COVID-19 pandemic on organisations: on ICT in the organisation, on IT capability transformation and, to a greater extent, on DT.
The findings from the study show that in response to the pandemic, there is a need for: (i) agility in organisations; (ii) organisations to execute on their existing strategy; (iii) the future-proofing of IT capabilities; (iv) the adoption of a hybrid working model; and (v) organisations to take risks and embrace new ideas.
Keywords: digital transformation, COVID-19, bimodal-IT, digital transformation framework
Procedia PDF Downloads 180
5693 Application of Rapidly Exploring Random Tree Star-Smart and G2 Quintic Pythagorean Hodograph Curves to the UAV Path Planning Problem
Authors: Luiz G. Véras, Felipe L. Medeiros, Lamartine F. Guimarães
Abstract:
This work approaches the automatic planning of paths for Unmanned Aerial Vehicles (UAVs) through the application of the Rapidly Exploring Random Tree Star-Smart (RRT*-Smart) algorithm. RRT*-Smart samples positions of a navigation environment through a tree-type graph. The algorithm consists of randomly expanding a tree from an initial position (root node) until one of its branches reaches the final position of the path to be planned. The algorithm ensures the planning of the shortest path as the number of iterations tends to infinity. When a new node is inserted into the tree, each neighbor of the new node is connected to it if and only if the path between the root node and that neighbor, with this new connection, is shorter than the current path between those two nodes. RRT*-Smart uses an intelligent sampling strategy to plan less extensive routes while spending a smaller number of iterations. This strategy is based on the creation of samples/nodes near the convex vertices of the obstacles in the navigation environment. The planned paths are smoothed through the application of quintic Pythagorean hodograph (PH) curves. The smoothing process converts a route into a dynamically viable one based on the kinematic constraints of the vehicle. This smoothing method models the hodograph components of a curve with polynomials that obey the Pythagorean theorem. Its advantage is that the obtained structure allows computation of the curve length in an exact way, without the need for quadrature techniques for the resolution of integrals.
Keywords: path planning, path smoothing, Pythagorean hodograph curve, RRT*-Smart
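The exact-arc-length property can be sketched directly: choosing hodograph components x'(t) = u(t)² − v(t)² and y'(t) = 2u(t)v(t) makes the speed σ(t) = u(t)² + v(t)² a polynomial (a Pythagorean triple of polynomials), so the curve length is the integral of a polynomial and needs no numerical quadrature. The polynomials u and v below are illustrative choices, not the paper's G² quintic construction.

```python
def poly_eval(c, t):
    """Evaluate a polynomial c[0] + c[1]*t + c[2]*t^2 + ... at t."""
    return sum(ci * t**i for i, ci in enumerate(c))

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0.0) + (b[i] if i < len(b) else 0.0)
            for i in range(n)]

def ph_arc_length(u, v, t0=0.0, t1=1.0):
    """Exact arc length of the PH curve defined by u, v: the speed
    sigma = u^2 + v^2 is a polynomial, so integrate it term by term."""
    sigma = poly_add(poly_mul(u, u), poly_mul(v, v))
    antideriv = [0.0] + [c / (i + 1) for i, c in enumerate(sigma)]
    return poly_eval(antideriv, t1) - poly_eval(antideriv, t0)
```

For example, u(t) = t and v(t) = 1 give x' = t² − 1, y' = 2t, whose speed is exactly t² + 1, so the length over [0, 1] is 1/3 + 1 = 4/3 with no quadrature error.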
Procedia PDF Downloads 169
5692 Assessing the Feasibility of Italian Hydrogen Targets with the Open-Source Energy System Optimization Model TEMOA - Italy
Authors: Alessandro Balbo, Gianvito Colucci, Matteo Nicoli, Laura Savoldi
Abstract:
Hydrogen is expected to become a game changer in the energy transition, especially by enabling sector-coupling possibilities and the decarbonization of hard-to-abate end-uses. The Italian National Recovery and Resilience Plan identifies hydrogen as one of the key elements of the ecological transition needed to meet international decarbonization objectives, and includes it in several pilot projects for its early development in Italy. This matches the European energy strategy, which aims to make hydrogen a leading energy carrier of the future, setting ambitious goals to be accomplished by 2030. The huge efforts needed to achieve the announced targets require careful investigation of their feasibility in terms of economic expenditure and technical aspects. In order to quantitatively assess the hydrogen potential within the Italian context and the feasibility of the planned investments and projects, this work uses the TEMOA-Italy energy system model to study pathways to meet the strict objectives cited above. The possible hydrogen development has been studied on both the supply side and the demand side of the energy system, also including storage options and distribution chains. The assessment comprises alternative hydrogen production technologies competing in a market, reflecting the several possible investments outlined in the Italian National Recovery and Resilience Plan to boost the development and spread of this infrastructure, including the sector-coupling potential with natural gas through the currently existing infrastructure and CO2 capture for the production of synfuels. On the other hand, the hydrogen end-use phase covers a wide range of consumption alternatives, from fuel-cell vehicles, for which both road and non-road transport categories are considered, to uses in the steel and chemical industries and cogeneration for residential and commercial buildings.
The model includes both high- and low-TRL technologies in order to provide an outcome as consistent for the future decades as for the present day, and since it is developed using an open-source code instance and database, transparency and accessibility are fully granted.
Keywords: decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA
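The technology-competition idea at the core of such models can be sketched as a merit-order dispatch: meet a hydrogen demand with the cheapest mix of production options, each capped by its installed capacity. The option names, costs, and capacities below are illustrative placeholders, not TEMOA-Italy inputs (which solve a full intertemporal optimization rather than this greedy step).

```python
def dispatch(demand, options):
    """Greedy merit-order dispatch.
    options: list of (name, unit_cost, capacity) tuples.
    Returns a list of (name, quantity) covering demand at least cost."""
    plan, remaining = [], demand
    for name, cost, cap in sorted(options, key=lambda o: o[1]):
        take = min(cap, remaining)
        if take > 0:
            plan.append((name, take))
            remaining -= take
    if remaining > 1e-9:
        raise ValueError("demand exceeds total capacity")
    return plan
```

In a full energy system optimization model, the same competition is expressed as a linear program over many periods, with investment decisions setting the capacities that this single-period sketch takes as given.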
Procedia PDF Downloads 101
5691 Methodologies for Crack Initiation in Welded Joints Applied to Inspection Planning
Authors: Guang Zou, Kian Banisoleiman, Arturo González
Abstract:
Crack initiation and propagation threaten the structural integrity of welded joints, and inspections are normally assigned based on crack propagation models. However, the approach based on crack propagation models may not be applicable to some high-quality welded joints, because the initial flaws in them may be so small that it can take a long time for the flaws to develop into a detectable size. This raises a concern regarding the inspection planning of high-quality welded joints, as there is no generally accepted approach for modeling the whole fatigue process, including the crack initiation period. In order to address the issue, this paper reviews treatment methods for the crack initiation period and the initial crack size in crack propagation models applied to inspection planning. Generally, there are four approaches: 1) neglecting the crack initiation period and fitting a probabilistic distribution for the initial crack size based on statistical data; 2) extrapolating the crack propagation stage back to a very small fictitious initial crack size, so that the whole fatigue process can be modeled by crack propagation models; 3) assuming a fixed detectable initial crack size and fitting a probabilistic distribution for the crack initiation time based on specimen tests; and 4) modeling the crack initiation and propagation stages separately, using small-crack growth theories and the Paris law or similar models. The conclusion is that, in view of the trade-off between accuracy and computational effort, calibration of a small fictitious initial crack size to S-N curves is the most efficient approach.
Keywords: crack initiation, fatigue reliability, inspection planning, welded joints
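Approach 2) can be sketched numerically: pick a small fictitious initial crack size a0 and integrate the Paris law da/dN = C(ΔK)^m, with ΔK = YΔS√(πa), up to a critical size to get the number of load cycles. The values of C, m, Y, and the stress range ΔS below are illustrative, not calibrated to any S-N curve.

```python
import math

def cycles_to_failure(a0, ac, C=1e-12, m=3.0, Y=1.0, dS=100.0, steps=10000):
    """Fatigue life N = integral of da / (C * (Y * dS * sqrt(pi*a))^m)
    from a0 to ac, evaluated with the midpoint rule."""
    da = (ac - a0) / steps
    N = 0.0
    for i in range(steps):
        a = a0 + (i + 0.5) * da           # midpoint of each slice
        dK = Y * dS * math.sqrt(math.pi * a)  # stress intensity range
        N += da / (C * dK**m)
    return N
```

Because most of the integrand's mass sits near a0, the predicted life is very sensitive to the fictitious initial crack size, which is why that size must be calibrated rather than guessed.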
Procedia PDF Downloads 353
5690 The Impact of Restricting Product Availability on the Purchasing of Lower Sugar Biscuits in UK Convenience Stores
Authors: Hannah S. Waldron
Abstract:
Background: The government has proposed sugar reduction targets in an effort to tackle childhood obesity, focussing on those of low socioeconomic status (SES). Supermarkets are a key location for reducing the amount of sugar purchased, but success so far in this environment has been limited. Building on previous research, this study assesses the impact of restricting the availability of higher-sugar biscuits as a strategy to encourage lower-sugar biscuit purchasing, and whether the effects vary by customer SES. Method: 14 supermarket convenience stores were divided between control (n=7) and intervention (n=7) groups. In the intervention stores, biscuits with sugar above the government’s target (26.2 g/100 g) were removed from sale and replaced with lower-sugar (< 26.2 g sugar/100 g) alternatives. Sales and customer demographic information were collected using loyalty card data and point-of-sale transaction data for 8 weeks pre- and post-intervention, for lower-sugar biscuits, total biscuits, alternative higher-sugar products, and all products. Results were analysed using three-way and two-way mixed ANOVAs. Results: The intervention resulted in a significant increase in lower-sugar biscuit purchasing (p < 0.001) and a significant decline in overall biscuit sales (p < 0.001) between the time periods compared to control stores. Sales of higher-sugar products and all products increased significantly between the two time periods in both the intervention and control stores (p < 0.05). SES showed no significant effect on any of the reported outcomes (p > 0.05). Conclusion: Restricting the availability of higher-sugar products may be a successful strategy for encouraging lower-sugar purchasing across all SES groups. However, larger-scale interventions in additional categories are required to assess the long-term implications for both consumers and retailers.
Keywords: biscuits, nudging, sugar, supermarket
Procedia PDF Downloads 105
5689 Optimization of Municipal Solid Waste Management in Peshawar Using Mathematical Modelling and GIS with Focus on Incineration
Authors: Usman Jilani, Ibad Khurram, Irshad Hussain
Abstract:
Environmentally sustainable waste management is a challenging task, as it involves multiple and diverse economic, environmental, technical, and regulatory issues. Municipal Solid Waste Management (MSWM) is more challenging in developing countries like Pakistan due to lack of awareness, technology, and human resources, insufficient funding, and inefficient collection and transport mechanisms, resulting in the lack of a comprehensive waste management system. This work presents an overview of current MSWM practices in Peshawar, the provincial capital of Khyber Pakhtunkhwa, Pakistan, and proposes a better and sustainable integrated solid waste management system with an incineration (waste-to-energy) option. The diverted waste would generate revenue, minimize landfill requirements, and reduce negative impact on the environment. The proposed optimized solution, utilizing scientific techniques (mathematical modeling, optimization algorithms, and GIS) as decision support tools, enhances technical and institutional efficiency, leading towards a more sustainable waste management system through incorporating: - improved collection mechanisms through optimized transportation/routing, and - resource recovery through incineration and selection of the most feasible sites for transfer stations, landfills, and the incineration plant. These proposed methods shift the linear waste management system towards a cyclic system and can also be used as a decision support tool by the WSSP (Water and Sanitation Services Peshawar), the agency responsible for MSWM in Peshawar.
Keywords: municipal solid waste management, incineration, mathematical modeling, optimization, GIS, Peshawar
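The route-optimization component can be sketched with a simple nearest-neighbour heuristic that orders collection points starting from a depot. The coordinates are illustrative, and straight-line distance stands in for the GIS road-network distances a real system would use; the study's actual optimization algorithms are not specified here.

```python
import math

def nearest_neighbour_route(depot, points):
    """Visit each collection point, always moving to the closest unvisited
    one, then return to the depot. A cheap baseline, not an optimal tour."""
    route, remaining, pos = [depot], list(points), depot
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        remaining.remove(nxt)
        route.append(nxt)
        pos = nxt
    route.append(depot)  # trucks return to the depot
    return route

def route_length(route):
    """Total straight-line length of a route, leg by leg."""
    return sum(math.dist(a, b) for a, b in zip(route, route[1:]))
```

Comparing `route_length` for the current collection order against the heuristic's order gives a first estimate of the fuel and time savings that motivate the routing component of the proposed system.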
Procedia PDF Downloads 377
5688 Evaluation of the Dry Compressive Strength of Refractory Bricks Developed from Local Kaolin
Authors: Olanrewaju Rotimi Bodede, Akinlabi Oyetunji
Abstract:
Modeling of the dry compressive strength of sodium silicate bonded kaolin refractory bricks was studied. The materials used for this research work included refractory clay obtained from the Ijero-Ekiti kaolin deposit at coordinates 7º 49´N, 5º 5´E and sodium silicate obtained from the open market in Lagos at coordinates 6°27′11″N, 3°23′45″E, both in the South-Western part of Nigeria. The mineralogical composition of the kaolin clay was determined using an Energy Dispersive X-Ray Fluorescence Spectrometer (ED-XRF). The clay samples were crushed and sieved using a laboratory pulveriser, ball mill, and sieve shaker to obtain 100 μm diameter particles. A manual pipe extruder of dimensions 30 mm diameter by 43.30 mm height was used to prepare the samples, with the percentage volume of sodium silicate varied over 5%, 7.5%, 10%, 12.5%, 15%, 17.5%, 20%, and 22.5%, while kaolin and water were kept at 50% and 5% respectively, for the compressive test. The samples were left to dry in the open laboratory atmosphere for 24 hours to remove moisture and were then fired in an electrically powered muffle furnace. Firing was done at the following temperatures: 700ºC, 750ºC, 800ºC, 850ºC, 900ºC, 950ºC, 1000ºC, and 1100ºC. A compressive strength test was carried out on the dried samples using a Testometric Universal Testing Machine (TUTM) equipped with a computer and printer; an optimum compressive strength of 4.41 kN/mm² was obtained at 12.5% sodium silicate. The experimental results were modeled with MATLAB and Origin packages using polynomial regression equations that predicted the estimated values of dry compressive strength, and were later validated with Pearson’s rank correlation coefficient, giving a very high positive correlation value of 0.97.
Keywords: dry compressive strength, kaolin, modeling, sodium silicate
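The modeling step (polynomial regression checked with a correlation coefficient) can be sketched without MATLAB or Origin: fit a quadratic y = c0 + c1·x + c2·x² by least squares via the normal equations, then compute Pearson's r between observed and predicted values. The data points below are illustrative, not the study's measured strengths.

```python
def polyfit2(xs, ys):
    """Least-squares quadratic fit: solve the 3x3 normal equations for
    y = c0 + c1*x + c2*x^2 by Gaussian elimination with partial pivoting."""
    S = [sum(x**k for x in xs) for k in range(5)]          # moment sums
    A = [[S[i + j] for j in range(3)] for i in range(3)]   # normal matrix
    b = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):   # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, 3))) / A[r][r]
    return coef

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx)**2 for x in xs)
           * sum((y - my)**2 for y in ys)) ** 0.5
    return num / den
```

A value of r near 1 between the fitted and measured strengths, as reported above (0.97), indicates the quadratic captures the strength-versus-binder trend well.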
Procedia PDF Downloads 455
5687 Reducing Antimicrobial Resistance Using Biodegradable Polymer Composites of Mof-5 for Efficient and Sustained Delivery of Cephalexin and Metronidazole
Authors: Anoff Anim, Lila Mahmound, Maria Katsikogianni, Sanjit Nayak
Abstract:
The sustained and controlled delivery of antimicrobial drugs has recently been widely studied using metal-organic frameworks (MOFs) and different polymers. However, little attention has been given to combining MOFs with biodegradable polymers, which would be a good strategy for providing a sustained, gradual release of the drugs. Herein, we report a comparative study of the sustained and controlled release of the widely used antibacterial drugs cephalexin and metronidazole from zinc-based MOF-5 incorporated in biodegradable polycaprolactone (PCL) and poly(lactic-co-glycolic acid) (PLGA) membranes. Cephalexin and metronidazole were separately incorporated in MOF-5 post-synthetically, followed by their integration into biodegradable PLGA and PCL membranes. The pristine MOF-5 and the loaded MOFs were thoroughly characterized by FT-IR, SEM, TGA, and PXRD. Drug release studies were carried out to assess the release rate of the drugs in PBS and distilled water for up to 48 hours using UV-Vis spectroscopy. Four bacterial strains, both Gram-positive and Gram-negative (Staphylococcus aureus, Staphylococcus epidermidis, Escherichia coli, and Acinetobacter baumannii), were tested against the pristine MOF, the pure drugs, the loaded MOFs, and the drug-loaded MOF-polymer composites. The metronidazole-loaded MOF-5 composite of PLGA (PLGA-Met@MOF-5) was found to show the highest efficiency in inhibiting the growth of S. epidermidis compared to the other bacterial strains, while maintaining a sustained minimum inhibitory concentration (MIC). This study demonstrates that biodegradable MOF-polymer composites can provide an efficient platform for the sustained and controlled release of antimicrobial drugs and can be a potential strategy for integrating them in biomedical devices.
Keywords: antimicrobial resistance, biodegradable polymers, cephalexin, drug release, metronidazole, MOF-5, PCL, PLGA
Procedia PDF Downloads 85
5686 Strategies and Approaches for Curriculum Development and Training of Faculty in Cybersecurity Education
Authors: Lucy Tsado
Abstract:
As cybercrime and cyberattacks continue to increase, the need to respond will follow suit. When cybercrimes occur, the duty to respond sometimes falls on law enforcement. However, criminal justice students are not taught concepts in cybersecurity and digital forensics. There is, therefore, an urgent need for many more institutions to begin teaching cybersecurity and related courses to social science students, especially criminal justice students. However, many faculty in universities, colleges, and high schools are not equipped to teach these courses or do not have the knowledge and resources to teach important concepts in cybersecurity or digital forensics to criminal justice students. This research intends to develop curricula and training programs to equip faculty with the skills to meet this need. According to experts, there is a current call to involve non-technical fields in filling the cybersecurity skills gap. There is a general belief within non-technical fields that cybersecurity education is only attainable in computer science and technologically oriented fields. As seen from current calls, this is not entirely the case. Transitioning into the field is possible through curriculum development, training, certifications, internships and apprenticeships, and competitions. There is a need to identify how a cybersecurity ecosystem can be created at a university to encourage and start programs that will lead to an interest in cybersecurity education as well as attract potential students. A short-term strategy can address this problem through curricula development, while a long-term strategy will address training faculty to teach cybersecurity and digital forensics.
This research project therefore addresses the overall problem in two parts: curricula development for the criminal justice discipline, and training of criminal justice faculty to teach the important concepts of cybersecurity and digital forensics.
Keywords: cybersecurity education, criminal justice, curricula development, nontechnical cybersecurity, cybersecurity, digital forensics
Procedia PDF Downloads 105
5685 A Comparative Analysis of (De)legitimation Strategies in Selected African Inaugural Speeches
Authors: Lily Chimuanya, Ehioghae Esther
Abstract:
Language, a versatile and sophisticated tool, is fundamental to mankind, especially within the realm of politics. In this dynamic world, political leaders adroitly use language in a strategic performance aimed at shaping the opinions of discerning audiences. This nuanced synergy is marked by different rhetorical strategies, meticulously aligned with contextual factors ranging from the cultural and ideological to the political, to achieve multifaceted persuasive objectives. This study investigates the (de)legitimation strategies in African presidential inaugural speeches: African leaders not only state their policy agenda through inaugural speeches but also subtly engage in a dance of legitimation and delegitimation, pursuing the twofold objective of strengthening the credibility of their administration and, at times, undermining the performance of the past administration. Drawing insights from two different legitimation models and a dataset of four African presidential inaugural speeches obtained from authentic websites, the study describes the roles of authorisation, rationalisation, moral evaluation, altruism, and mythopoesis in unmasking the structure of political discourse. The analysis takes a mixed-method approach to unpack the (de)legitimation strategies embedded in the carefully chosen speeches. The focus extends beyond a superficial exploration and delves into the linguistic elements that form the basis of presidential discourse. In conclusion, this examination traverses the nuanced landscape of language as a potent tool in politics, with each strategy contributing to the overall rhetorical impact and shaping the narrative. From this perspective, the study argues that presidential inaugural speeches are not only linguistic exercises but also viable instruments that influence perceptions and legitimise authority.
Keywords: CDA, legitimation, inaugural speeches, delegitimation
Procedia PDF Downloads 705684 Numerical Modeling of Geogrid Reinforced Soil Bed under Strip Footings Using Finite Element Analysis
Authors: Ahmed M. Gamal, Adel M. Belal, S. A. Elsoud
Abstract:
This article aims to study the effect of reinforcement inclusions (geogrids) on the bearing capacity of sand dunes under strip footings. In this research, an experimental physical model was carried out to study the effect of the embedment depth of the first geogrid layer (u/B), the spacing between reinforcement layers (h/B), and the reinforcement extension relative to the footing width (L/B) on the mobilized bearing capacity. This paper presents numerical modeling using the commercial finite element package PLAXIS (version 8.2) to simulate the laboratory physical model, studying the same parameters handled in the experimental work (u/B, L/B, and h/B) for the purpose of validation. The soil, the geogrid, the interface elements, and the boundary conditions are discussed together with a set of finite element results and the validation. The validated FE model was then used to study real materials and dimensions of a strip foundation. Based on the experimental and numerical investigation results, a significant increase in the bearing capacity of footings occurred due to an appropriate location of the inclusions in sand. The optimum embedment depth of the first reinforcement layer (u/B) is 0.25. The optimum spacing between successive reinforcement layers (h/B) is 0.75. The optimum length of the reinforcement layers (L/B) is 7.5. The optimum number of reinforcement layers is 4. The study showed a directly proportional relation between the number of reinforcement layers and the bearing capacity ratio (BCR), and an inversely proportional relation between the footing width and the BCR.Keywords: reinforced soil, geogrid, sand dunes, bearing capacity
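The optimum ratios above lend themselves to a small numerical check. The sketch below is illustrative, not from the paper: it computes the bearing capacity ratio, BCR = q_reinforced / q_unreinforced, and tests a candidate layout against the reported optima; the function names and tolerance are assumptions.

```python
# Hypothetical helpers for the bearing capacity ratio (BCR) and the
# reported optimum geometry (u/B = 0.25, h/B = 0.75, L/B = 7.5, 4 layers).
def bearing_capacity_ratio(q_reinforced, q_unreinforced):
    return q_reinforced / q_unreinforced

def is_optimal_layout(u_over_b, h_over_b, l_over_b, n_layers, tol=1e-6):
    targets = (0.25, 0.75, 7.5)
    values = (u_over_b, h_over_b, l_over_b)
    return n_layers == 4 and all(abs(v - t) <= tol for v, t in zip(values, targets))
```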
Procedia PDF Downloads 4235683 Numerical Studies for Standard Bi-Conjugate Gradient Stabilized Method and the Parallel Variants for Solving Linear Equations
Authors: Kuniyoshi Abe
Abstract:
Bi-conjugate gradient (Bi-CG) is a well-known method for solving linear equations Ax = b for x, where A is a given n-by-n matrix and b is a given n-vector. Typically, the dimension of the linear equation is high and the matrix is sparse. A number of hybrid Bi-CG methods, such as conjugate gradient squared (CGS), Bi-CG stabilized (Bi-CGSTAB), BiCGStab2, and BiCGstab(l), have been developed to improve the convergence of Bi-CG. Bi-CGSTAB has been most often used for efficiently solving such linear equations, but its convergence behavior can exhibit a long stagnation phase. In such cases, it is important to compute the Bi-CG coefficients as accurately as possible, and a stabilization strategy, which stabilizes the computation of the Bi-CG coefficients, has been proposed; it may avoid stagnation and lead to faster computation. Motivated by the large number of processors in present petascale high-performance computing hardware, the scalability of Krylov subspace methods on parallel computers has recently become increasingly prominent. The main bottleneck for efficient parallelization is the inner products, which require a global reduction; the resulting global synchronization phases cause communication overhead on parallel computers. Parallel variants of Krylov subspace methods that reduce the number of global communication phases and hide the communication latency have been proposed. However, the numerical stability, and specifically the convergence speed, of the parallel variants of Bi-CGSTAB may become worse than that of the standard Bi-CGSTAB. In this paper, therefore, we compare the convergence speed of the standard Bi-CGSTAB and the parallel variants by numerical experiments and show that the convergence speed of the standard Bi-CGSTAB is faster than that of the parallel variants. Moreover, we propose a stabilization strategy for the parallel variants.Keywords: bi-conjugate gradient stabilized method, convergence speed, Krylov subspace methods, linear equations, parallel variant
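The standard Bi-CGSTAB recurrence discussed above can be sketched in a few lines. The following is a minimal dense-matrix illustration of van der Vorst's algorithm in pure Python, not the parallel variant or the proposed stabilization; the tolerance and iteration cap are assumptions.

```python
# Minimal Bi-CGSTAB sketch (van der Vorst's algorithm) for a dense matrix.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bicgstab(A, b, tol=1e-10, maxiter=200):
    n = len(b)
    x = [0.0] * n
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    r_hat = list(r)                      # shadow residual, kept fixed
    rho = alpha = omega = 1.0
    v = [0.0] * n
    p = [0.0] * n
    for _ in range(maxiter):
        rho_new = dot(r_hat, r)          # Bi-CG coefficient numerator
        beta = (rho_new / rho) * (alpha / omega)
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        v = matvec(A, p)
        alpha = rho_new / dot(r_hat, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]
        t = matvec(A, s)
        omega = dot(t, s) / dot(t, t)    # one-step GMRES stabilization
        x = [xi + alpha * pi + omega * si for xi, pi, si in zip(x, p, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]
        rho = rho_new
        if dot(r, r) ** 0.5 < tol:
            break
    return x
```

The two inner products per iteration (for rho_new and the omega quotient) are exactly the global reductions that the parallel variants try to batch or overlap with communication.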
Procedia PDF Downloads 1655682 Computational Fluid Dynamics Modeling of Liquefaction of Wood and It's Model Components Using a Modified Multistage Shrinking-Core Model
Authors: K. G. R. M. Jayathilake, S. Rudra
Abstract:
Wood degradation in hot compressed water is modeled with a Computational Fluid Dynamics (CFD) code using cellulose, xylan, and lignin as model compounds. The model compounds are reacted under catalyst-free conditions in a temperature range from 250 to 370 °C. A simplified reaction scheme is used in which water-soluble products, methanol-soluble products, char-like compounds, and gas are generated through intermediates for each model compound. A modified multistage shrinking-core model is developed to simulate particle degradation, in which each model compound is hydrolyzed in a separate stage. Cellulose is decomposed to glucose/oligomers before producing degradation products. Xylan is decomposed through xylose and then to degradation products, while lignin is decomposed into soluble products before producing guaiacol, total organic carbon (TOC), and then char and gas. Hydrolysis of each model compound is used as the main reaction of the process. Diffusion of water monomers to the particle surface to initiate hydrolysis, and dissolution of the products in water, are given importance during the modeling process. In the developed model, the temperature dependence follows the Arrhenius relationship. Kinetic parameters from the literature are used for the mathematical model; however, the scarcity of kinetic data for the initial fast reactions limits the development of more accurate CFD models. The liquefaction results of the CFD model are analyzed and validated against the experimental data available in the literature, showing reasonable agreement.Keywords: computational fluid dynamics, liquefaction, shrinking-core, wood
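The temperature dependence mentioned above follows the Arrhenius relationship, k(T) = A·exp(−Ea/(R·T)). The sketch below illustrates only the functional form; the pre-exponential factor and activation energy in the test are placeholders, not the kinetic constants used in the model.

```python
import math

# Arrhenius rate law assumed by the kinetic model: k(T) = A * exp(-Ea / (R*T)).
# A (1/s) and Ea (J/mol) are illustrative placeholders, T is in kelvin.
R = 8.314  # universal gas constant, J/(mol K)

def arrhenius_rate(A, Ea, T):
    return A * math.exp(-Ea / (R * T))
```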
Procedia PDF Downloads 1265681 Modeling and Numerical Simulation of Heat Transfer and Internal Loads at Insulating Glass Units
Authors: Nina Penkova, Kalin Krumov, Liliana Zashcova, Ivan Kassabov
Abstract:
Insulating glass units (IGU) are widely used in advanced and renovated buildings in order to reduce the energy needed for heating and cooling. Rules for the choice of IGU to ensure energy efficiency and thermal comfort in the indoor space are well known. The existence of internal loads (gauge or vacuum pressure in the hermetically sealed gas space) requires additional attention in the design of the facades. The internal loads appear with variations of altitude, meteorological pressure, and gas temperature relative to their values at the time of sealing. The gas temperature depends on the presence of coatings, the coating position in the transparent multi-layer system, the IGU geometry and orientation, and its fixing on the facade, and varies with the climate conditions. An algorithm for modeling and numerical simulation of the thermal fields and internal pressure in the gas cavity of insulating glass units as a function of the meteorological conditions is developed. It includes models of radiation heat transfer in the solar and infrared wavelengths, indoor and outdoor convection heat transfer, and free convection in the sealed gas space, treating the gas as compressible. The algorithm allows prediction of the temperature and pressure stratification in the gas domain of the IGU for different fixing systems. The models are validated by comparison of the numerical results with experimental data obtained by hot-box testing. Numerical calculations and estimation of the 3D temperature and fluid flow fields, thermal performance, and internal loads of IGU in window systems are implemented.Keywords: insulating glass units, thermal loads, internal pressure, CFD analysis
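A first-order estimate of the internal load can be made with the ideal gas law under an isochoric (rigid-pane) assumption, which the full algorithm above relaxes by treating the gas as compressible in a deformable cavity. The function below is an illustrative sketch, not the paper's model; all inputs are absolute pressures in Pa and temperatures in K.

```python
# Isochoric ideal-gas estimate of the net load on the panes of a sealed IGU
# cavity: p_internal / T is constant if the cavity volume does not change.
def cavity_gauge_pressure(p_seal, T_seal, T_now, p_atm_now):
    p_internal = p_seal * (T_now / T_seal)   # absolute cavity pressure now
    return p_internal - p_atm_now            # gauge load on the panes, Pa
```

For example, a unit sealed at 101325 Pa and 20 °C, heated to 40 °C while the outside pressure drops to 95000 Pa (higher altitude or low-pressure weather), carries a positive gauge load of roughly 13 kPa on the panes.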
Procedia PDF Downloads 2745680 A Two-Week and Six-Month Stability of Cancer Health Literacy Classification Using the CHLT-6
Authors: Levent Dumenci, Laura A. Siminoff
Abstract:
Health literacy has been shown to predict a variety of health outcomes. Reliable identification of persons with limited cancer health literacy (LCHL) has proved questionable with existing instruments, which use an arbitrary cut point along a continuum. The CHLT-6, however, uses a latent mixture modeling approach to identify persons with LCHL. The purpose of this study was to estimate the two-week and six-month stability of identifying persons with LCHL using the CHLT-6, with a discrete latent variable approach as the underlying measurement structure. Using a test-retest design, the CHLT-6 was administered to cancer patients at two-week (N=98) and six-month (N=51) intervals. The two-week and six-month latent test-retest agreements were 89% and 88%, respectively. The chance-corrected latent agreements estimated from Dumenci’s latent kappa were 0.62 (95% CI: 0.41–0.82) and 0.47 (95% CI: 0.14–0.80) for the two-week and six-month intervals, respectively. High levels of latent test-retest agreement between the limited and adequate categories of the cancer health literacy construct, coupled with moderate to good levels of chance-corrected latent agreement, indicate that the CHLT-6 classification of limited versus adequate cancer health literacy is relatively stable over time. In conclusion, the measurement structure underlying the instrument allows for estimating classification errors, circumventing limitations due to the arbitrary approaches adopted by all other instruments. The CHLT-6 can be used to identify persons with LCHL in oncology clinics and in intervention studies to accurately estimate treatment effectiveness.Keywords: limited cancer health literacy, the CHLT-6, discrete latent variable modeling, latent agreement
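The chance correction underlying the latent kappa can be illustrated at the manifest level with the classical Cohen's kappa on a 2x2 test-retest table; Dumenci's latent kappa applies the same chance correction at the latent-class level, which this sketch does not model.

```python
# Cohen's kappa for a square agreement table (rows: time 1, cols: time 2).
# kappa = (p_observed - p_chance) / (1 - p_chance).
def cohens_kappa(table):
    n = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(len(table))) / n          # observed agreement
    rows = [sum(row) for row in table]
    cols = [sum(col) for col in zip(*table)]
    pe = sum(r * c for r, c in zip(rows, cols)) / (n * n)         # chance agreement
    return (po - pe) / (1 - pe)
```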
Procedia PDF Downloads 1795679 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation
Authors: Somayeh Komeylian
Abstract:
The direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and distinguishing several signal sources. In this scenario, the antenna array output modeling involves numerous parameters, including noise samples, signal waveform, signal directions, signal number, and signal-to-noise ratio (SNR), and thereby the methods of DoA estimation rely heavily on generalization, requiring a large number of training data sets. Hence, we comparatively examine two different optimization models for DoA estimation: (1) the implementation of a decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) an optimization method based on a deep neural network (DNN) with radial basis functions (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for three classes. However, the accuracy and robustness of DoA estimation remain highly sensitive to technological imperfections of antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, and thereby the method may fail to deliver high precision for DoA estimation. Therefore, this work makes a further contribution by developing the DNN-RBF model for DoA estimation to overcome the limitations of non-parametric and data-driven methods in terms of array imperfection and generalization. The numerical results of implementing the DNN-RBF model confirm its better DoA estimation performance compared with the LS-SVM algorithm. Finally, we evaluated the performance of the two aforementioned optimization methods for DoA estimation using the mean squared error (MSE).Keywords: DoA estimation, adaptive antenna array, deep neural network, LS-SVM optimization model, radial basis function, MSE
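As a toy illustration of the RBF similarity at the heart of the DNN-RBF model, and of the MSE metric used for the comparison, consider the sketch below; the prototype features and the nearest-prototype classification rule are assumptions for illustration, not the trained model from the study.

```python
import math

# Gaussian RBF similarity between two feature vectors.
def rbf(u, v, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

# Assign a snapshot feature to the DoA class with the most similar prototype.
def classify(x, prototypes):
    # prototypes: {doa_label_degrees: feature_vector}
    return max(prototypes, key=lambda label: rbf(x, prototypes[label]))

# Mean squared error between estimated and true DoAs.
def mse(estimates, truths):
    return sum((e - t) ** 2 for e, t in zip(estimates, truths)) / len(estimates)
```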
Procedia PDF Downloads 1015678 Sustainability Impact Assessment of Construction Ecology to Engineering Systems and Climate Change
Authors: Moustafa Osman Mohammed
Abstract:
The construction industry, as one of the main contributors to the depletion of natural resources, influences climate change. This paper discusses the incremental and evolutionary development of the proposed models for optimizing life-cycle analysis as an explicit strategy for evaluation systems. The main categories inevitably introduce uncertainties; the approach takes up a composite structure model (CSM) as an environmental management system (EMS) in the practical evaluation of small and medium-sized enterprises (SMEs). The model simplifies complex systems to reflect how natural systems' inputs, outputs, and outcomes influence the framework measures, and gives a maximum likelihood estimate of how elements are simulated over the composite structure. Traditional modeling knowledge is based on the physical dynamic and static patterns of the parameters that influence the environment. It unifies methods to demonstrate how construction systems ecology is interrelated from a management perspective, reflecting the effects of engineering systems on ecology as ultimately unified technologies in an extensive range beyond construction impacts, such as energy systems. Sustainability broadens socioeconomic parameters into a practical science that meets recovery performance, while engineering reflects the generic control of protective systems. When the environmental model is employed properly, the management decision process in governments or corporations can address policy for accomplishing strategic plans precisely. The management and engineering perspective focuses on autocatalytic control as a closed cellular system that naturally balances anthropogenic insertions, or aggregates structural systems toward equilibrium as steady, stable conditions. Thereby, construction systems ecology incorporates engineering and management schemes as a midpoint stage between biotic and abiotic components to predict construction impacts. The latter theory of environmental obligation suggests a procedure, method, or technique achieved in the sustainability impact of construction system ecology (SICSE), ultimately serving as a relative mitigation measure for deviation control.Keywords: sustainability, environmental impact assessment, environmental management, construction ecology
Procedia PDF Downloads 3945677 Parent-Child Communication: Community Based HIV/AIDS Response Strategy among Young Persons
Authors: Vicent Lwanga
Abstract:
Issue: Communication between parent and child is important and necessary. Poor parenting and a lack of openness and communication between parents and their children contribute to the increasing rate of HIV infection among young persons between the ages of 10 and 25. Young persons left on their own are at risk of misinformation from peers and from other sources. Description: Parent-Child Communication (PCC) was designed as a key component of a community-based HIV and AIDS intervention focused on young persons by the Elderly Widows Orphans Family Support Organisation. Findings from the preliminary community-level process indicated that the lack of parent-child communication militates against young persons adopting and maintaining healthier sexual behaviors. An integrated youth strategy consisting of youth peer education/facilitation and PCC was used to bridge this gap. The process involved an interactive parent-child forum, which allowed parents and children to meet and have open and frank discussions on the needs of young persons and the role of parents. This forum addressed all emerging issues from all parties and created better cordiality amongst them. Lessons Learnt: When young people feel unconnected to their parents, family, or home, they may become involved in activities that put their health at risk. Equally, when parents affirm the value of their children through open interaction, children are more likely to develop positive and healthy attitudes about themselves. Creating the opportunity for this interactive forum is paramount in any intervention program focused on young persons. Conclusion: HIV and AIDS-related programmes, especially those focusing on youth, should have PCC as an integral, essential component. Parents should be vehicles for information dissemination and need to be equipped with the capacity and skills to take on the task of talking about sexual and reproductive health and sexuality with their children and wards.Keywords: AIDS, communication, HIV, youth
Procedia PDF Downloads 1245676 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation
Authors: H. Khanfari, M. Johari Fard
Abstract:
Dealing with carbonate reservoirs can be mind-boggling for reservoir engineers due to the various diagenetic processes that cause a variety of properties throughout the reservoir. A good estimation of reservoir heterogeneity, which is defined as the variation in rock properties with location in a reservoir or formation, can help model the reservoir better and thus offer a better understanding of its behavior. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe heterogeneity, because heterogeneity is important in modeling reservoir flow and in well testing. Geological methods describe the variations in rock properties based on the similarities of the environments in which the different beds were deposited. To illustrate the heterogeneity of a reservoir vertically, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), which are reviewed briefly in this paper. The Lorenz concept is based on statistics and has been used in petroleum from that point of view. In this paper, we correlated the statistical-based Lorenz method with a petroleum concept, i.e., the Kozeny-Carman equation, and derived the straight-line Lorenz plot for a homogeneous system. Finally, we applied the two methods to a heterogeneous field in southern Iran and discussed each separately, with numbers and figures. As expected, these methods show great departure from homogeneity. Therefore, for future investment, the reservoir needs to be treated carefully.Keywords: carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variation (V), Lorenz coefficient (L)
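A common way to compute the Lorenz coefficient from layer data is to sort layers by k/φ, build the cumulative flow-capacity versus storage-capacity curve, and take twice the area between that curve and the diagonal: L = 0 for a homogeneous system, approaching 1 with extreme heterogeneity. The sketch below follows that textbook recipe; the field data handling in the paper may differ.

```python
# Lorenz coefficient from layer permeabilities k, porosities phi, and
# optional thicknesses h. Layers are ordered by decreasing k/phi, then the
# normalized cumulative kh (flow capacity) is plotted against cumulative
# phi*h (storage capacity); L = 2 * (area under curve - 0.5).
def lorenz_coefficient(k, phi, h=None):
    n = len(k)
    h = h or [1.0] * n
    order = sorted(range(n), key=lambda i: k[i] / phi[i], reverse=True)
    kh = [k[i] * h[i] for i in order]
    ph = [phi[i] * h[i] for i in order]
    F, P = [0.0], [0.0]
    for a, b in zip(kh, ph):
        F.append(F[-1] + a)
        P.append(P[-1] + b)
    F = [f / F[-1] for f in F]
    P = [p / P[-1] for p in P]
    area = sum((F[i] + F[i + 1]) / 2 * (P[i + 1] - P[i]) for i in range(n))
    return 2 * (area - 0.5)
```

A homogeneous stack gives the straight-line (diagonal) Lorenz plot and L = 0, consistent with the homogeneous-system derivation described above.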
Procedia PDF Downloads 2225675 Analysis of Key Factors Influencing Muslim Women’s Buying Intentions of Clothes: A Study of UK’s Ethnic Minorities and Modest Fashion Industry
Authors: Nargis Ali
Abstract:
Although the modest fashion market is growing in the UK, researchers and marketers still have a limited understanding of, and many concerns about, Muslim consumers. Therefore, the present study is designed to explore the critical factors influencing Muslim women’s intention to purchase clothing and to identify the differences in purchase intention among ethnic minority groups in the UK. The conceptual framework is designed using the theory of planned behavior and social identity theory. In order to satisfy the research objectives, a structured online questionnaire was published on Facebook from 20 November to 21 March. As a result, 1087 usable questionnaires were received and used to assess the fit of the proposed model through structural equation modeling. Results revealed that social media influences the purchase intention of Muslim women. Muslim women search for stylish clothes that provide comfort during summer, and they prefer soft and subdued colors. Furthermore, religious knowledge, religious practice, and fashion uniqueness strongly influence their purchase intention, while hybrid identity is negatively related to the purchase intention of Muslim women. This research contributes to the literature on Muslim consumers at a time when the UK's large retailers are seeking to attract Muslim consumers through modestly designed outfits. It will also be helpful for formulating or revising product and marketing strategies according to UK Muslim women’s tastes and needs.Keywords: fashion uniqueness, hybrid identity, religiosity, social media, social identity theory, structural equation modeling, theory of planned behavior
Procedia PDF Downloads 2275674 Numerical Investigation of Pressure Drop in Core Annular Horizontal Pipe Flow
Authors: John Abish, Bibin John
Abstract:
Liquid-liquid flow in a horizontal pipe is investigated in order to reveal the flow patterns arising from the co-existing flow of oil and water. The main focus of the study is to assess the feasibility of reducing the pumping power requirements of petroleum transportation lines by having an annular flow of water around the thick oil core. This idea makes oil transportation cheaper and easier. The present study uses computational fluid dynamics techniques to model oil-water flows with liquids of similar density and varying viscosity. The simulation of the flow is conducted using the commercial package Ansys Fluent. Flow domain modeling and grid generation were accomplished through ICEM CFD. The horizontal pipe is modeled with two different inlets and meshed with an O-grid mesh. The standard k-ε turbulence scheme, along with the volume of fluid (VOF) multiphase modeling method, is used to simulate the oil-water flow. Transient flow simulations carried out for a total period of 30 s showed a significant reduction in pressure drop when employing the core-annular flow concept. This study also reveals the effect of the viscosity ratio, the mass flow rates of the individual fluids, and the ratio of superficial velocities on the pressure drop across the pipe length. Contours of velocity and volume fraction are employed, along with pressure predictions, to assess the effectiveness of the proposed concept quantitatively as well as qualitatively. The outcome of the present study is found to be very relevant for the petrochemical industries.Keywords: computational fluid dynamics, core-annular flows, frictional flow resistance, oil transportation, pressure drop
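The motivation for water lubrication can be seen from the single-phase laminar Poiseuille relation, Δp = 8 μ L Q / (π R⁴): pressure drop scales linearly with the viscosity of the fluid in contact with the wall. The sketch below illustrates only this scaling argument; the actual core-annular pressure drops in the study come from the turbulent VOF simulations, not this formula.

```python
import math

# Laminar Poiseuille pressure drop for single-phase pipe flow (SI units):
# dp = 8 * mu * L * Q / (pi * R**4). Used only to show why keeping a
# low-viscosity water annulus at the wall cuts the pumping pressure.
def poiseuille_dp(mu, length, flow_rate, radius):
    return 8.0 * mu * length * flow_rate / (math.pi * radius ** 4)
```

For a heavy oil of 0.5 Pa·s versus water at 0.001 Pa·s, the laminar estimate gives a 500-fold pressure-drop ratio at the same flow rate, which is the headline saving the core-annular concept aims to approach.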
Procedia PDF Downloads 4085673 Modeling Breathable Particulate Matter Concentrations over Mexico City Retrieved from Landsat 8 Satellite Imagery
Authors: Rodrigo T. Sepulveda-Hirose, Ana B. Carrera-Aguilar, Magnolia G. Martinez-Rivera, Pablo de J. Angeles-Salto, Carlos Herrera-Ventosa
Abstract:
In order to diminish health risks, it is of major importance to monitor air quality. However, this process carries high costs in physical and human resources. In this context, this research is carried out with the main objective of developing a predictive model for concentrations of inhalable particles (PM10-2.5) using remote sensing. To develop the model, satellite images of Mexico City’s Metropolitan Area, mainly from Landsat 8, were used. Using historical PM10 and PM2.5 measurements of the RAMA (Automatic Environmental Monitoring Network of Mexico City) and through the processing of the available satellite images, a preliminary model was generated, in which it was possible to observe critical opportunity areas that will allow the generation of a robust model. Through the preliminary model applied to the scenes of Mexico City, three areas of great interest were identified due to the presumed high concentration of PM: zones with high plant density, bodies of water, and soil without constructions or vegetation. To date, work continues on this line to improve the preliminary model that has been proposed. In addition, a brief analysis was made of six models presented in articles developed in different parts of the world, in order to identify the optimal bands for the generation of a model suitable for Mexico City. It was found that infrared bands have helped modeling in other cities, but the effectiveness that these bands could provide under the geographic and climatic conditions of Mexico City is still being evaluated.Keywords: air quality, modeling pollution, particulate matter, remote sensing
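The band-to-concentration mapping can be illustrated with a simple least-squares fit; the sketch below uses synthetic reflectance/PM pairs, whereas the real model would pair Landsat 8 surface reflectances with RAMA station measurements and would likely combine several bands.

```python
# Ordinary least squares for a single predictor: fits PM = a + b * band.
# Band values and PM concentrations passed in are placeholders here.
def ols_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) \
        / sum((a - mx) ** 2 for a in x)
    return my - slope * mx, slope   # (intercept, slope)

def predict(intercept, slope, x):
    return [intercept + slope * v for v in x]
```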
Procedia PDF Downloads 1575672 Development of a Methodology for Surgery Planning and Control: A Management Approach to Handle the Conflict of High Utilization and Low Overtime
Authors: Timo Miebach, Kirsten Hoeper, Carolin Felix
Abstract:
In times of competitive pressure and demographic change, hospitals have to reconsider their strategies as companies. Since operations are one of the main sources of income and, at the same time, one of the primary cost drivers, a process-oriented approach and an efficient use of resources seem to be the right way to achieve a consistent market position. Thus, efficient operating room occupancy planning is an important success factor for the continued existence of these institutions. A high utilization of resources is essential: a very high, but nevertheless sensible, capacity-oriented utilization of working systems that can be realized by avoiding downtime and through thoughtful occupancy planning. This engineering approach should help hospitals reach their break-even point. The first aim is to establish a strategy point, which can be used for the generation of a planned throughput time. The second aim is to facilitate operation planning and control and to implement them accurately through the generation of time modules. More than 100,000 data records of the Hannover Medical School were analyzed. The data records contain information about the type of operation conducted, the duration of the individual process steps, and other organization-specific data such as the operating room used. Based on the aforementioned database, a generally valid model was developed to define a strategy point which takes the conflict between high capacity utilization and low overtime into account. Furthermore, time modules were generated in this work, which allow simplified and flexible operation planning and control for the operating room manager. The time modules make it possible to reduce the high average idle time of the operating rooms and to minimize the spread of idle times.Keywords: capacity, operating room, surgery planning and control, utilization
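Two of the planning aids described above, time modules and the utilization/overtime trade-off, can be sketched as follows; the 15-minute module and the 480-minute session length are illustrative assumptions, not values from the study.

```python
# Round a planned case duration up to the time-module grid (ceiling).
def to_time_module(minutes, module=15):
    return -(-minutes // module) * module

# Score a day's schedule for the conflict of high utilization vs. low
# overtime: utilization is capped at 1.0, overtime is minutes past the session.
def day_stats(case_minutes, session_minutes=480):
    used = sum(case_minutes)
    utilization = min(used, session_minutes) / session_minutes
    overtime = max(0, used - session_minutes)
    return utilization, overtime
```

A planner can then pick the scheduled load (the strategy point) that pushes utilization up while keeping the expected overtime near zero.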
Procedia PDF Downloads 2535671 N-Heterocyclic Carbene Based Dearomatized Iridium Complex as an Efficient Catalyst towards Carbon-Carbon Bond Formation via Hydrogen Borrowing Strategy
Authors: Mandeep Kaur, Jitendra K. Bera
Abstract:
The search for atom-economical and green synthetic methods for the synthesis of functionalized molecules has attracted much attention. Metal-ligand cooperation (MLC) plays a pivotal role in organometallic catalysis in activating C−H, H−H, O−H, N−H, and B−H bonds through reversible bond-breaking and bond-making processes. Towards this goal, a bifunctional N-heterocyclic carbene (NHC) based, pyridyl-functionalized amide ligand precursor and the corresponding dearomatized iridium complex were synthesized. NMR and UV/Vis acid titration studies were carried out to establish the proton-responsive nature of the iridium complex. The dearomatized iridium complex was further explored as a catalyst on the platform of MLC, via a dearomatization/aromatization mode of action, for the atom-economical α- and β-alkylation of ketones and secondary alcohols using primary alcohols through the hydrogen borrowing methodology. The key features of the catalysis are high turnover frequency (TOF) values, low catalyst loading, low base loading, and no waste products. The greener syntheses of quinoline and lactone derivatives, and the selective alkylation of drug molecules such as pregnenolone and testosterone, were also achieved successfully. Another structurally similar iridium complex was synthesized with a modified ligand precursor in which the pendant amide unit was absent. The inactivity of this analogous iridium complex towards catalysis confirmed the participation of the proton-responsive imido sidearm of the ligand in accelerating the catalytic reaction. Mechanistic investigations through control experiments, NMR, and deuterium labeling studies authenticate the borrowing hydrogen strategy.Keywords: C-C bond formation, hydrogen borrowing, metal ligand cooperation (MLC), n-heterocyclic carbene
Procedia PDF Downloads 182