Search results for: functional complexity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4417


607 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization

Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman

Abstract:

In system analysis, the information on the uncertain input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches result in different outputs; some approaches may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to characterize unknown model inputs, which carry both inherent aleatory and epistemic uncertainties, from the responses (outputs) of the given computational model. We approach the problem with two methodologies. In the first, we use sampling-based uncertainty propagation with first order error analysis; in the second, we place emphasis on Percentile-Based Optimization (PBO). The NASA Langley MUQC's subproblem A is constructed so that both aleatory and epistemic uncertainties must be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable, with a fixed functional form and known coefficients; this uncertainty cannot be reduced; (ii) an epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible; (iii) a parameter that may be aleatory, but for which sufficient data are not available to adequately model it as a single random variable; for example, the parameters of a normal variable, e.g., the mean and standard deviation, may not be precisely known but can be assumed to lie within some intervals. 
This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainty, each being an unknown element of a known interval. This uncertainty is reducible. From the study, it is observed that, due to practical limitations or computational expense, sampling is not exhaustive in the sampling-based methodology, so this methodology has a high probability of underestimating the output bounds. An optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is therefore necessary; in this study, this is achieved by using PBO.
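The double-loop structure implied above (aleatory sampling nested inside sampling of the epistemic interval parameters of a distributional p-box) can be sketched as follows. This is a minimal illustration, not the NASA challenge model: the response function, the parameter intervals, and the sample sizes are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical system response; stands in for the challenge model.
    return x**2 + 2*x

# Distributional p-box: X ~ Normal(mu, sigma), where mu and sigma are
# epistemically uncertain and known only to lie in intervals (assumed values).
MU_INTERVAL = (0.5, 1.5)
SIGMA_INTERVAL = (0.1, 0.4)

def propagate(n_epistemic=200, n_aleatory=1000, q=0.95):
    """Double-loop sampling: the outer loop draws epistemic parameter
    realizations from their intervals; the inner loop propagates the
    aleatory variable and records the response percentile."""
    lo, hi = np.inf, -np.inf
    for _ in range(n_epistemic):
        mu = rng.uniform(*MU_INTERVAL)         # epistemic draw
        sigma = rng.uniform(*SIGMA_INTERVAL)   # epistemic draw
        x = rng.normal(mu, sigma, n_aleatory)  # aleatory draws
        p = np.quantile(model(x), q)           # response percentile
        lo, hi = min(lo, p), max(hi, p)
    return lo, hi  # bounds on the q-th percentile of the response

lower, upper = propagate()
print(f"95th-percentile bounds: [{lower:.3f}, {upper:.3f}]")
```

Because the outer loop is finite, the returned bounds can only shrink the true interval, which is exactly the underestimation risk the abstract attributes to non-exhaustive sampling.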

Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization

Procedia PDF Downloads 230
606 Thermo-Economic Evaluation of Sustainable Biogas Upgrading via Solid-Oxide Electrolysis

Authors: Ligang Wang, Theodoros Damartzis, Stefan Diethelm, Jan Van Herle, François Marechal

Abstract:

Biogas production from anaerobic digestion of organic sludge from wastewater treatment, as well as various urban and agricultural organic wastes, is of great significance for achieving a sustainable society. Two upgrading approaches for cleaned biogas can be considered: (1) direct H₂ injection for catalytic CO₂ methanation and (2) CO₂ separation from biogas. The first approach usually employs electrolysis technologies to generate hydrogen and increases the biogas production rate, while the second usually applies commercially available, highly selective membrane technologies to efficiently extract CO₂ from the biogas, the CO₂ then being compressed and stored for further use. A straightforward way of utilizing the captured CO₂ is on-site catalytic CO₂ methanation. From the perspective of system complexity, the second approach may be questioned, since it introduces an additional, expensive membrane component for producing the same amount of methane. However, given that the sustainability of the produced biogas should be retained after upgrading, renewable electricity should be supplied to drive the electrolyzer. Considering the intermittent nature and seasonal variation of renewable electricity supply, the second approach therefore offers high operational flexibility. This indicates that the two approaches should be compared based on the availability and scale of the local renewable power supply, not only on the technical systems themselves. Solid-oxide electrolysis (SOE) generally offers high overall system efficiency and, more importantly, can achieve simultaneous electrolysis of CO₂ and H₂O (namely, co-electrolysis), which may bring significant benefits for the case of CO₂ separation from the produced biogas. 
When taking co-electrolysis into account, two additional upgrading approaches can be proposed: (1) direct steam injection into the biogas, with the mixture going through the SOE, and (2) CO₂ separation from biogas, with the separated CO₂ used later for co-electrolysis. A case study integrating SOE into a wastewater treatment plant is investigated with wind power as the renewable supply. The dynamic production of biogas is provided on an hourly basis, together with the corresponding oxygen and heating requirements. All four approaches mentioned above are investigated and compared thermo-economically: (a) steam electrolysis with grid power, as the base case for steam electrolysis; (b) CO₂ separation and co-electrolysis with grid power, as the base case for co-electrolysis; (c) steam electrolysis and CO₂ separation (and storage) with wind power; and (d) co-electrolysis and CO₂ separation (and storage) with wind power. The influence of the scale of the wind power supply is investigated by a sensitivity analysis. The results provide a general understanding of the economic competitiveness of SOE for sustainable biogas upgrading, thus assisting decision making for biogas production sites. The research leading to the presented work is funded by the European Union's Horizon 2020 programme under grant agreement n° 699892 (ECo, topic H2020-JTI-FCH-2015-1) and SCCER BIOSWEET.

Keywords: biogas upgrading, solid-oxide electrolyzer, co-electrolysis, CO₂ utilization, energy storage

Procedia PDF Downloads 145
605 Method for Requirements Analysis and Decision Making for Restructuring Projects in Factories

Authors: Rene Hellmuth

Abstract:

The requirements for factory planning and the buildings concerned have changed in recent years. Factory planning has the task of designing products, plants, processes, organization, areas, and the building of a factory. Regular restructuring is gaining importance as a means of maintaining the competitiveness of a factory. Restrictions regarding new areas, shorter life cycles of products and production technology, and a VUCA (volatility, uncertainty, complexity, and ambiguity) world lead to more frequent rebuilding measures within a factory. Restructuring is today the most common planning case, more common than new construction, revitalization, or dismantling of factories. The increasing importance of restructuring shows that the ability to change was, and remains, a promising concept for how companies react to permanently changing conditions. The factory building is the basis for most changes within a factory. If an adaptation of a factory building is necessary, the inventory documents must be checked, and often time-consuming planning is required to define the relevant components to be adapted so that they can finally be evaluated. During reconstruction, the different requirements of the planning participants from the disciplines of factory planning (production planner, logistics planner, automation planner) and industrial construction planning (architect, civil engineer) come together and must be structured. This raises the research question: Which requirements do the disciplines involved in reconstruction planning place on a digital factory model? A subordinate research question is: How can model-based decision support be provided for a more efficient design of the conversion within a factory? 
Because of the high adaptation rate of factories and their buildings described above, a methodology for restructuring factories, based on the requirements engineering method from software development, is conceived and designed for practical application in factory restructuring projects. The explorative research procedure according to Kubicek is applied; explorative research is suitable when the practical usability of the research results has priority. Furthermore, it is shown how best to use a digital factory model in practice. The focus is on mobile applications, to meet the needs of factory planners on site. An augmented reality (AR) application is designed and created to provide decision support for planning variants. The aim is to contribute to a shortening of the planning process and to model-based decision support for more efficient change management. This requires a methodology that reduces the deficits of existing approaches. Time and cost expenditure are represented in the AR tablet solution based on a building information model (BIM). Overall, the requirements of those involved in the planning process for a digital factory model in the case of restructuring within a factory are thus first determined in a structured manner; the results are then applied and transferred to a construction-site solution based on augmented reality.

Keywords: augmented reality, digital factory model, factory planning, restructuring

Procedia PDF Downloads 121
604 Sensitivity to Misusing Verb Inflections in Both Finite and Non-Finite Clauses in Native and Non-Native Russian: A Self-Paced Reading Investigation

Authors: Yang Cao

Abstract:

Analysis of the oral production of Chinese-speaking learners of English as a second language (L2) reveals a large variety of verb inflections: why does it seem so hard for these learners to use consistently correct past morphology in obligatory past contexts? The Failed Functional Features Hypothesis (FFFH) attributes this non-target-like performance to the absence of a [±past] feature in their L1 Chinese, arguing that for post-puberty learners, new features in the L2 are no longer accessible. By contrast, the Missing Surface Inflection Hypothesis (MSIH) holds that all features are in fact acquirable for late L2 learners, but that difficulties in mapping features to forms make it hard for them to realize consistent past morphology on the surface. However, most studies are limited to verb morphology in finite clauses, and few have examined these learners' performance in non-finite clauses. Additionally, it has been argued that Chinese learners may be able to tell the finite/non-finite distinction (i.e., the [±finite] feature might be selected in Chinese, even though the existence of [±past] is denied). Therefore, adopting a self-paced reading (SPR) task, the current study analyzes the processing patterns of Chinese-speaking learners of L2 Russian, in order to find out whether they are sensitive to misused tense morphology in both finite and non-finite clauses and whether they are sensitive to the finite/non-finite distinction present in Russian. The study targets L2 Russian because of its systematic morphology in both present and past tenses. A native Russian group, as well as a group of English-speaking learners of Russian, whose L1 has definitely selected both [±finite] and [±past] features, is also involved. By comparing and contrasting the performance of the three language groups, the study further examines and discusses the two theories, FFFH and MSIH. 
Preliminary hypotheses are: (a) Russian native speakers are expected to spend longer reading verb forms that violate the grammar; (b) Chinese participants are expected to be sensitive at least to the misuse of inflected verbs in non-finite clauses, although no sensitivity to the misuse of infinitives in finite clauses may be found; an interaction of finiteness and grammaticality is therefore expected, which would indicate that these learners can tell the finite/non-finite distinction; and (c) having selected [±finite] and [±past], English-speaking learners of Russian are expected to behave in a target-like manner, supporting L1 transfer.

Keywords: features, finite clauses, morphosyntax, non-finite clauses, past morphologies, present morphologies, second language acquisition, self-paced reading task, verb inflections

Procedia PDF Downloads 101
603 Evaluation of the Trauma System in a District Hospital Setting in Ireland

Authors: Ahmeda Ali, Mary Codd, Susan Brundage

Abstract:

Importance: This research focuses on devising and improving Health Service Executive (HSE) policy and legislation, and thereby improving patient trauma care and outcomes in Ireland. Objectives: The study measures components of the trauma system in the district hospital setting of the Cavan/Monaghan Hospital Group (CMHG), HSE, Ireland, and uses the collected data to identify the strengths and weaknesses of the CMHG trauma system organisation, including governance, injury data, prevention and quality improvement, scene care and facility-based care, and rehabilitation. The information will be made available to local policy makers to provide an objective situational analysis to assist in future trauma service planning and provision. Design, setting and participants: From April 28 to May 28, 2016, a cross-sectional survey using the World Health Organisation (WHO) Trauma System Assessment Tool (TSAT) was conducted among healthcare professionals directly involved in the level III trauma system of CMHG. Main outcomes: Identification of the strengths and weaknesses of the trauma system of CMHG. Results: A high proportion of participants reported inadequate funding for pre-hospital (62.3%) and facility-based (52.5%) trauma care at CMHG. Thirty-four respondents (55.7%) reported that a national trauma registry (TARN) exists, but electronic health records are still not used in trauma care. Twenty-one respondents (34.4%) reported that there are system-wide protocols for determining patient destination and that adequate, comprehensive legislation governing the use of ambulances is enforced; however, a reliable advisory service is lacking. Over 40% of the respondents reported uncertainty about the injury prevention programmes available in Ireland, as well as about the government funding allocated for injury and violence prevention. Conclusions: The results of this study contributed to a comprehensive assessment of the trauma system organisation. 
The major findings identified three fundamental areas: inadequate funding at CMHG, the quality improvement (QI) techniques and corrective strategies used, and unfamiliarity with existing prevention strategies. The findings indicate the need for further research to guide the future development of the trauma system at CMHG, and in Ireland as a whole, in order to maximise best practice and to improve functional and life outcomes.

Keywords: trauma, education, management, system

Procedia PDF Downloads 237
602 Co-Designing Health as a Social Community Centre: The Case of a 'Doctors of the World Project' in Brussels

Authors: Marco Ranzato, Maguelone Vignes

Abstract:

The co-design process recently run by the trans-disciplinary urban laboratory Metrolab Brussels to outline the architecture of a future integrated health centre in Brussels (Belgium) has highlighted that a buffer place open to the local community is the appropriate cornerstone around which to organize a space where diverse professionals and patients come together. In the context of the migrant 'crisis' in Europe, the growing number of vulnerable people in Brussels, and the increasing complexity of the health and welfare systems, the NGO Doctors of the World (DoW) has launched a project, funded by the European Regional Development Fund, aiming to create a new community centre combining social and health services in a poor but changing neighborhood of Brussels. Not wanting to make a 'ghetto' of this new integrated service, the NGO intends to host different publics, so as to give the poorest, most marginal, and most vulnerable people access to a regular kind of service. As a trans-disciplinary urban research group, Metrolab has been involved in co-designing the architecture of the future centre with a set of health professionals, social workers, and patients' representatives. Metrolab drew on the participants' practical experience and knowledge of hosting different kinds of publics and professions in the same structure in order to imagine which rooms should go into the centre, what atmosphere they should convey, how they should be interrelated and organized, and, concurrently, how the building should fit into the urban frame of its neighborhood. The result is that, for an integrated health centre set in the landscape of a disadvantaged neighborhood to function, it has to work as a social community centre offering accessibility and conviviality to diverse social groups. This paper outlines the methodology that Metrolab used to design and conduct, in close collaboration with DoW, a series of three workshops. 
Through sketching and paper modeling, the methodology made participants talk about their experience by projecting them into a situation. It combined individual and collective work in order to sharpen participants' eyes on architectural forms, make their thoughts and experience explicit through inter-subjectivity, and imagine solutions to the challenges they raised. Such a collaborative method encompasses several challenges concerning patients' participation and representation, the replicability of the conditions of success, and the plurality of formats for communicating research findings. This paper underlines how this participatory process has contributed to building knowledge on the little-documented topic of the architecture of community health centres. More importantly, the contribution builds on this participatory process to discuss the importance of adapting the architecture of the new integrated health centre to the changing population of Brussels and to the issues of its specific neighborhood.

Keywords: co-design, health, social innovation, urban lab

Procedia PDF Downloads 166
601 Resilience and Urban Transformation: A Review of Recent Interventions in Europe and Turkey

Authors: Bilge Ozel

Abstract:

Cities are highly complex living organisms, subject to continuous transformations produced by the stress that derives from changing conditions. Today, metropolises are seen as the 'development engines' of their countries, and accordingly become centres of better living conditions, encouraging the demographic growth that constitutes the main driver of these changes. Indeed, the potential for economic advancement of cities directly represents the economic status of their countries. The term 'resilience', which sees change as a natural process and denotes the flexibility and adaptability of systems in the face of changing conditions, becomes a key concept for the development of urban transformation policies. 'Resilience' derives from the Latin word 'resilire', meaning 'to bounce' or 'to jump back', and refers to the ability of a system to withstand shocks and still maintain its basic characteristics. A resilient system does not merely survive potential risks and threats; it also takes advantage of the positive outcomes of perturbations and adapts to new external conditions. Taken into the urban context, as 'urban resilience', the term delineates the capacity of cities to anticipate upcoming shocks and changes without undergoing major alterations in their functional, physical, and socio-economic systems. Undoubtedly, coordinating urban systems in a 'resilient' form is a multidisciplinary and complex process, as cities are multi-layered, dynamic structures. The concept of 'urban transformation' was first launched in Europe just after World War II. It has been applied through different methods, such as renovation, revitalization, improvement, and gentrification, which have continuously evolved, acquiring new meanings and trends over the years. 
With the effects of neoliberal policies in the 1980s, urban transformation became associated with economic objectives. This understanding subsequently evolved, taking on new orientations such as providing more social justice and environmental sustainability. The aim of this research is to identify the most frequently applied urban transformation methods in Turkey and the main reasons they are selected, and to investigate the gaps and limitations of urban transformation policies in the context of 'urban resilience', in comparison with European interventions. Emblematic examples, symbolizing the turning points of the recent evolution of urban transformation concepts in Europe and Turkey, are selected and reviewed critically.

Keywords: resilience, urban dynamics, urban resilience, urban transformation

Procedia PDF Downloads 262
600 Biodegradation of Phenazine-1-Carboxylic Acid by Rhodanobacter sp. PCA2 Proceeds via Decarboxylation and Cleavage of Nitrogen-Containing Ring

Authors: Miaomiao Zhang, Sabrina Beckmann, Haluk Ertan, Rocky Chau, Mike Manefield

Abstract:

Phenazines are a large class of nitrogen-containing aromatic heterocyclic compounds, which are almost exclusively produced by bacteria from diverse genera, including Pseudomonas and Streptomyces. Phenazine-1-carboxylic acid (PCA), one of the 'core' phenazines, is converted from chorismic acid before being modified to other phenazine derivatives in different cells. Phenazines have attracted enormous interest because of their multiple roles in biocontrol, bacterial interaction, biofilm formation, and the fitness of their producers. However, despite this ecological importance, the degradation of phenazines, as part of their fate, has received extremely limited attention. Here, to isolate PCA-degrading bacteria, 200 mg L⁻¹ PCA was supplied as the sole carbon, nitrogen, and energy source in minimal mineral medium. Quantitative PCR and reverse-transcription PCR were employed to study the abundance and activity, respectively, of the functional gene MFORT 16269 in PCA degradation. Intermediates and products of PCA degradation were identified with LC-MS/MS. After enrichment and isolation, a PCA-degrading strain was selected from soil and designated Rhodanobacter sp. PCA2 based on full 16S rRNA sequencing. As determined by HPLC, strain PCA2 consumed 200 mg L⁻¹ (836 µM) PCA at a rate of 17.4 µM h⁻¹, accompanied by a significant cell yield increase from 1.92 × 10⁵ to 3.11 × 10⁶ cells per mL. Strain PCA2 was also capable of degrading other phenazines, including phenazine (4.27 µM h⁻¹), pyocyanin (2.72 µM h⁻¹), neutral red (1.30 µM h⁻¹), and 1-hydroxyphenazine (0.55 µM h⁻¹). Moreover, during the incubation, transcript copies of the MFORT 16269 gene increased significantly from 2.13 × 10⁶ to 8.82 × 10⁷ copies mL⁻¹, 2.77 times faster than the corresponding gene copy number (2.20 × 10⁶ to 3.32 × 10⁷ copies mL⁻¹), indicating that the MFORT 16269 gene was activated and played a role in PCA degradation. 
As analyzed by LC-MS/MS, decarboxylation from the ring structure was determined to be the first step of PCA degradation, followed by cleavage of the nitrogen-containing ring by a dioxygenase, which converted phenazine to nitrosobenzene. Subsequently, phenylhydroxylamine was detected after two days of incubation and was then transformed to aniline and catechol. Additionally, genomic and proteomic analyses were carried out for strain PCA2. Overall, the findings presented here show that the newly isolated strain Rhodanobacter sp. PCA2 is capable of degrading phenazines through decarboxylation and cleavage of the nitrogen-containing ring, during which the MFORT 16269 gene is activated and plays an important role.
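The quoted rates and copy numbers can be combined as a quick arithmetic check; with the rounded values given in the abstract, the degradation takes about two days and the transcript/gene fold-change ratio comes out near the stated 2.77 (small differences reflect rounding in the quoted figures). This is plain arithmetic on the reported numbers, not new data.

```python
# Time to consume 200 mg/L (836 µM) PCA at the reported rate.
pca_umol = 836          # µM PCA consumed
rate = 17.4             # µM per hour
hours = pca_umol / rate
print(round(hours))     # roughly two days of incubation

# Fold-change of MFORT 16269 transcripts vs. gene copies.
transcript_fold = 8.82e7 / 2.13e6   # transcript copies per mL
gene_fold = 3.32e7 / 2.20e6         # gene copies per mL
print(round(transcript_fold / gene_fold, 2))
```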

Keywords: decarboxylation, MFORT16269 gene, phenazine-1-carboxylic acid degradation, Rhodanobacter sp. PCA2

Procedia PDF Downloads 217
599 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from fragmented DNA, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time-consuming, and they rely on a large number of parameters that introduce variability and affect the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most such methods use the concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present metagenome2vec, an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads. The approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which each sequence is most likely to come; and (iv) training a multiple-instance-learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to each genome's influence on the prediction. 
Using two public real-life data sets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
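The first two steps of the four-step pipeline can be illustrated with a toy sketch. The k-mer length, embedding dimension, and random vectors below are hypothetical stand-ins for the embeddings that metagenome2vec learns from data; steps (iii)-(iv) are reduced here to producing the bag of read vectors that a multiple-instance classifier would consume.

```python
import numpy as np

K = 4    # k-mer length (an assumption for illustration)
DIM = 8  # embedding dimension (an assumption for illustration)
rng = np.random.default_rng(42)
vocab = {}

def kmers(read, k=K):
    """Step (i): tokenize a DNA read into overlapping k-mers."""
    return [read[i:i+k] for i in range(len(read) - k + 1)]

def embed_kmer(km):
    """Look up a k-mer vector; random stand-ins for learned embeddings."""
    if km not in vocab:
        vocab[km] = rng.normal(size=DIM)
    return vocab[km]

def embed_read(read):
    """Step (ii): a read embedding as the mean of its k-mer vectors."""
    return np.mean([embed_kmer(km) for km in kmers(read)], axis=0)

def embed_patient(reads):
    """Bag-of-reads representation for a multiple-instance classifier:
    one embedding vector per read, stacked into a matrix."""
    return np.stack([embed_read(r) for r in reads])

bag = embed_patient(["ACGTACGT", "TTGACCGA"])
print(bag.shape)  # (n_reads, DIM) -> (2, 8)
```

In the actual approach, the k-mer vectors are trained word embeddings rather than random draws, and the bag is consumed by an attention-weighted classifier instead of being inspected directly.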

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 116
598 Study of Oxidative Processes in Blood Serum in Patients with Arterial Hypertension

Authors: Laura M. Hovsepyan, Gayane S. Ghazaryan, Hasmik V. Zanginyan

Abstract:

Hypertension (HD) is the most common cardiovascular pathology causing disability and mortality in the working population. Most often, death in hypertension results from heart failure (HF), which is based on myocardial remodeling. Recently, endothelial dysfunction (EDF), a violation of the functional state of the vascular endothelium, has been assigned a significant role in the structural changes in the myocardium and the occurrence of heart failure in patients with hypertension. It has now been established that tissues affected by inflammation form increased amounts of superoxide radical and NO, which play a significant role in the development and pathogenesis of various pathologies: they mediate inflammation, modify proteins, and damage nucleic acids. The aim of this work was to study the processes of oxidative modification of proteins (OMP) and the production of nitric oxide in hypertension. In the experimental work, the blood of 30 donors and 33 patients with hypertension was used. OMP products were quantified by the reaction of oxidized amino acid residues of proteins with 2,4-dinitrophenylhydrazine (DNPH) to form 2,4-dinitrophenylhydrazones, whose amount was determined spectrophotometrically. The optical density of the resulting carbonyl derivatives (dinitrophenylhydrazones) was recorded at different wavelengths: 356 nm, neutral aliphatic ketone dinitrophenylhydrazones (KDNPH); 370 nm, neutral aliphatic aldehyde dinitrophenylhydrazones (ADNPH); 430 nm, basic aliphatic KDNPH; 530 nm, basic aliphatic ADNPH. Nitric oxide was determined by photometry using the Griess reagent. Absorbance was measured on a Thermo Scientific Evolution 201 spectrophotometer at a wavelength of 546 nm. 
Thus, the results showed that patients with arterial hypertension have an increased level of nitric oxide in blood serum, as well as a tendency toward increased intensity of oxidative modification of proteins at wavelengths of 270 nm and 363 nm, indicating a statistically significant increase in aliphatic aldehyde and ketone dinitrophenylhydrazones. The increase in the intensity of oxidative modification of blood plasma proteins in the studied patients reflects the general direction of free radical processes and, in particular, the oxidation of proteins throughout the body. A decrease in the activity of the antioxidant system also leads to a violation of protein metabolism. The most important consequence of the oxidative modification of proteins is the inactivation of enzymes.
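The spectrophotometric quantification of DNPH-derivatized carbonyls amounts to a Beer-Lambert calculation, which can be sketched as follows. The extinction coefficient below (22,000 M⁻¹ cm⁻¹, commonly used for aliphatic hydrazones) and the sample values are assumptions for illustration; the abstract does not state which coefficient was used.

```python
EPSILON = 22000.0  # M^-1 cm^-1 for DNPH hydrazones (assumed value)
PATH_LENGTH = 1.0  # cm, standard cuvette

def carbonyl_nmol_per_mg(absorbance, protein_mg_per_ml):
    """Convert absorbance (~370 nm) to nmol carbonyl per mg protein
    via Beer-Lambert: c = A / (epsilon * l)."""
    molar = absorbance / (EPSILON * PATH_LENGTH)  # mol/L
    nmol_per_ml = molar * 1e9 / 1000              # nmol per mL
    return nmol_per_ml / protein_mg_per_ml

# Illustrative sample: A370 = 0.11 at 1.0 mg protein per mL.
print(round(carbonyl_nmol_per_mg(0.11, 1.0), 2))  # 5.0 nmol/mg
```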

Keywords: hypertension (HD), oxidative modification of proteins (OMP), nitric oxide (NO), oxidative stress

Procedia PDF Downloads 88
597 Effects of Soaking of Maize on the Viscosity of Masa and Tortilla Physical Properties at Different Nixtamalization Times

Authors: Jorge Martínez-Rodríguez, Esther Pérez-Carrillo, Diana Laura Anchondo Álvarez, Julia Lucía Leal Villarreal, Mariana Juárez Dominguez, Luisa Fernanda Torres Hernández, Daniela Salinas Morales, Erick Heredia-Olea

Abstract:

Maize tortillas are a staple food in Mexico, mostly made by nixtamalization, which includes cooking and steeping maize kernels under alkaline conditions. The cooking step demands a large amount of energy and also generates nejayote, a water pollutant, at the end of the process. The aim of this study was to reduce the cooking time by adding a maize soaking step before nixtamalization while maintaining the quality properties of masa and tortillas. Maize kernels were soaked for 36 h to increase moisture up to 36%. Then, the effect of different cooking times (0, 5, 10, 15, 20, 25, 30, 35, 45 (control), and 50 minutes) was evaluated on the viscosity profile (RVA) of masa to select the treatments with a profile similar to the control. All treatments were steeped overnight and milled under the same conditions. The treatments selected were the 20- and 25-min cooking times, which had values of pasting temperature (79.23 °C and 80.23 °C), maximum viscosity (105.88 cP and 96.25 cP), and final viscosity (188.5 cP and 174 cP) similar to those of the 45-min control (77.65 °C, 110.08 cP, and 186.70 cP, respectively). Afterward, tortillas were produced with the chosen treatments (20 and 25 min) and the control, then analyzed for texture, starch damage, colorimetry, thickness, and average diameter. Colorimetric analysis of the tortillas showed significant differences only for the yellow/blue coordinate (b*) at 20 min (0.885), unlike the 25-minute treatment (1.122). Luminosity (L*) and the red/green coordinate (a*) showed no significant differences between treatments and the control (69.912 and 1.072, respectively); however, the 25-minute treatment was closer in both parameters (73.390 and 1.122) than the 20-minute treatment (74.08 and 0.884). For the color difference (ΔE), the 25-min value (3.84) was the most similar to the control. 
However, for tortilla thickness and diameter, the 20-minute treatment (1.57 mm and 13.12 cm, respectively) was closer to the control (1.69 mm and 13.86 cm), although smaller. The 25-min treatment tortilla, in turn, was smaller than both the 20-min treatment and the control, with 1.51 mm thickness and 13.590 cm diameter. According to the texture analyses, there was no difference in stretchability (8.803-10.308 gf) or distance to break (95.70-126.46 mm) among the treatments. However, for the breaking point, both treatments (317.1 gf and 276.5 gf for the 25- and 20-min treatments, respectively) were significantly different from the control tortilla (392.2 gf). The results suggest that by adding a soaking step and reducing the cooking time by 25 minutes, the masa and tortillas obtained had functional and textural properties similar to those of the traditional nixtamalization process.

Keywords: tortilla, nixtamalization, corn, lime cooking, RVA, colorimetry, texture, masa rheology

Procedia PDF Downloads 160
596 Spark Plasma Sintering/Synthesis of Alumina-Graphene Composites

Authors: Nikoloz Jalabadze, Roin Chedia, Lili Nadaraia, Levan Khundadze

Abstract:

Nanocrystalline materials in powder form can be manufactured by a number of different methods; however, manufacturing composite material products in the same nanocrystalline state is still a problem, because the compaction and synthesis of nanocrystalline powders are accompanied by intensive particle growth, a process that promotes the formation of pieces in an ordinary crystalline state rather than the desirable nanocrystalline state. To date, spark plasma sintering (SPS) has been considered the most promising and energy-efficient method for producing dense bodies of composite materials. An advantage of the SPS method in comparison with other methods is mainly the low temperature and short duration of the sintering procedure, which finally gives an opportunity to obtain dense material with a nanocrystalline structure. Graphene has recently garnered significant interest as a reinforcing phase in composite materials because of its excellent electrical, thermal and mechanical properties. Graphene nanoplatelets (GNPs) in particular have attracted much interest as reinforcements for ceramic matrix composites (mostly in Al2O3, Si3N4, TiO2, ZrB2, etc.). SPS has been shown to fully and effectively densify a variety of ceramic systems, including Al2O3, often with improvements in mechanical and functional behavior. Alumina consolidated by SPS has been shown to have superior hardness, fracture toughness, plasticity and optical translucency compared to conventionally processed alumina. Knowledge of how GNPs influence sintering behavior is important in order to process and manufacture these composites effectively. In this study, the effects of GNPs on the SPS processing of Al2O3 are investigated by systematically varying sintering temperature, holding time and pressure. Our experiments showed that the SPS process is also appropriate for the synthesis of nanocrystalline powders of alumina-graphene composites.
Depending on the size of the molds, it is possible to obtain different amounts of nanopowders. The structure, physical-chemical, mechanical and performance properties of the elaborated composite materials were investigated. The results of this study provide a fundamental understanding of the effects of GNPs on sintering behavior, thereby providing a foundation for future optimization of the processing of these promising nanocomposite systems.

Keywords: alumina oxide, ceramic matrix composites, graphene nanoplatelets, spark-plasma sintering

Procedia PDF Downloads 367
595 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems

Authors: Georgi Y. Georgiev, Matthew Brouillet

Abstract:

This research applies the power of agent-based modeling to a pivotal question, at the intersection of biology, computer science, physics, and complex systems theory, about self-organization processes in open, complex, non-equilibrium thermodynamic systems. Central to this investigation is the principle of Maximum Entropy Production (MEP). This principle suggests that such systems evolve toward states that optimize entropy production, leading to the formation of structured environments. It is hypothesized that, guided by the least-action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, it is predicted that there will be an increase in system information, as more information is required to describe the developing structure. To test this, an agent-based model is developed simulating an ant colony's formation of a path between a food source and its nest. Utilizing the NetLogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and distribution of the ants within the simulated environment. External entropy production is also evaluated for information increase and efficiency improvements in the system's action. Simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants is observed. Notably, no path formation occurred with fewer than five ants, whereas clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point.
Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling the estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between internal entropy decrease rate and external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors like changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system. As such, it provides a foundation for further in-depth exploration of the complex behaviors of these systems and contributes to the development of more efficient self-organizing systems across various scientific fields.
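The entropy bookkeeping described above (maximal entropy when ants are scattered, decreasing as paths form) can be sketched in a few lines. The abstract does not give its exact formula, so this is only a minimal illustration assuming Shannon entropy over the grid cells occupied by ants; the function names and the toy eight-ant configurations are hypothetical.

```python
import math
from collections import Counter

def spatial_entropy(ant_positions):
    """Shannon entropy (in bits) of ants distributed over grid cells.

    ant_positions: iterable of cell indices, one per ant.
    """
    positions = list(ant_positions)
    counts = Counter(positions)
    total = len(positions)
    h = 0.0
    for c in counts.values():
        p = c / total
        h -= p * math.log2(p)
    return h

def max_entropy(n_ants, n_cells):
    # A uniform spread over min(n_ants, n_cells) cells maximizes entropy.
    return math.log2(min(n_ants, n_cells))

# Ants scattered uniformly: entropy is maximal.
scattered = list(range(8))          # 8 ants in 8 different cells
# Ants concentrated on a path: entropy drops.
on_path = [0, 0, 0, 0, 1, 1, 1, 1]  # 8 ants in 2 cells

print(spatial_entropy(scattered))   # 3.0 bits
print(spatial_entropy(on_path))     # 1.0 bit
```

Tracking this quantity over simulation time would yield the decreasing entropy curve whose inflection point and slope the study analyzes.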

Keywords: complexity, self-organization, agent-based modelling, efficiency

Procedia PDF Downloads 55
594 Evolution of Rock-Cut Caves of Dhamnar at Dhamnar, MP

Authors: Abhishek Ranka

Abstract:

Rock-cut architecture is a manifestation of human endurance in constructing magnificent structures by sculpting and cutting entire hills. Cave architecture in India forms an important part of rock-cut development and is among the most prolific examples of rock-cut architecture in the world. There are more than 1500 rock-cut caves in various regions of India. Most of them are located in western India, particularly in the state of Maharashtra. Some rock-cut caves are located in the central region of India, presently known as Malwa (Madhya Pradesh). The region is dominated by the Vindhyachal hill ranges toward the west, dotted with coarse laterite rock. The Dhamnar caves were excavated in the central region of Mandsaur district, with a combination of shared sacred faiths. The earliest rock-cut activity began in the north, in Bihar, where caves were excavated in the Barabar and Nagarjuni hills during the Mauryan period (3rd century BCE). The rock-cut activity then shifted to the central part of India, in Madhya Pradesh, where the caves at Dhamnar, Bagh, Udayagiri, Poldungar, etc., were excavated between the 3rd and 9th centuries CE. Rock-cut excavation continued to flourish in Madhya Pradesh until the 10th century CE, simultaneously with monolithic Hindu temples. The Dhamnar caves fall into four architectural typologies: the Lena caves, Chaitya caves, Viharas, and Lena-Chaityagriha caves. The Buddhist rock-cutting activity in central India is divisible into two phases. In the first phase (2nd century BCE to 3rd century CE), the Buddha image is conspicuously absent. After a lapse of about three centuries, activity began again, and this time Buddha images were carved. The former group belongs to the Hinayana (Lesser Vehicle) phase and the latter to the Mahayana (Greater Vehicle). The Dhamnar caves have elaborate facades, pillar capitals, and many more creative sculptures in various postures.
These caves were excavated against the background of invigorating trade activities and varied socio-religious or socio-cultural contexts. They also highlight the wealthy and varied patronage provided by the dynasties of the past. This paper offers an appraisal of the rock-cut mechanisms, design strategies, and approaches, while promoting scope for further research in conservation practices. Rock-cut sites, with their physical setting and various functional spaces serving as a sustainable habitat for centuries, have a heritage footprint with a research quotient.

Keywords: rock-cut architecture, Buddhism, Hinduism, Jainism, iconography, architectural typologies

Procedia PDF Downloads 137
593 COVID-19 Laws and Policy: The Use of Policy Surveillance For Better Legal Preparedness

Authors: Francesca Nardi, Kashish Aneja, Katherine Ginsbach

Abstract:

The COVID-19 pandemic has demonstrated both a need for evidence-based and rights-based public health policy and how challenging it can be to make effective decisions with limited information, evidence, and data. The O’Neill Institute, in conjunction with several partners, has been working since the beginning of the pandemic to collect, analyze, and distribute critical data on public health policies enacted in response to COVID-19 around the world in the COVID-19 Law Lab. Well-designed laws and policies can help build strong health systems, implement necessary measures to combat viral transmission, enforce actions that promote public health and safety for everyone, and, on the individual level, have a direct impact on health outcomes. Poorly designed laws and policies, on the other hand, can fail to achieve the intended results and/or obstruct the realization of fundamental human rights, further disease spread, or cause unintended collateral harms. When done properly, law can provide a foundation that brings clarity to complexity, embraces nuance, and identifies gaps of uncertainty. However, laws can also shape the societal factors that make disease possible. Law is inseparable from the rest of society, and COVID-19 has exposed just how much laws and policies intersect all facets of society. In the COVID-19 context, evidence-based and well-informed law and policy decisions—made at the right time and in the right place—can and have meant the difference between life and death for many. Having a solid evidentiary base of legal information can promote the understanding of what works well and where, and it can drive resources and action to where they are needed most. We know that legal mechanisms can enable nations to reduce inequities and prepare for emerging threats, like novel pathogens that result in deadly disease outbreaks or antibiotic resistance.
The collection and analysis of data on these legal mechanisms is a critical step towards ensuring that legal interventions and legal landscapes are effectively incorporated into more traditional kinds of health science data analyses. The COVID-19 Law Lab sees a unique opportunity to collect and analyze this kind of non-traditional data to inform policy, using laws and policies from across the globe and across diseases. This global view is critical to assessing the efficacy of policies in a wide range of cultural, economic, and demographic circumstances. The COVID-19 Law Lab is not just a collection of legal texts relating to COVID-19; it is a dataset of concise and actionable legal information that can be used by health researchers, social scientists, academics, human rights advocates, law and policymakers, government decision-makers, and others for cross-disciplinary quantitative and qualitative analysis, to identify best practices from this outbreak and previous ones, and to be better prepared for potential future public health events.

Keywords: public health law, surveillance, policy, legal, data

Procedia PDF Downloads 135
592 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging

Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati

Abstract:

Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in the tissue. In this contribution, we present our research in adopting hybrid knowledge-driven/data-driven approaches which exploit the existence of well-assessed physical models and build neural networks upon them, integrating the availability of data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. 2. Materials and Methods: The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form q* = arg min_q D(y, ỹ), (1) where D is a loss function which typically contains a discrepancy measure (or data-fidelity) term plus other possible ad-hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y. Moreover, ỹ is the computable approximation of y, which may be obtained from a neural network but also in a classic way via the resolution of a PDE with given input coefficients (forward problem; Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder-type networks.
This procedure forms priors on the solution (Fig. 1); ii) we use regularization procedures of the type q̂* = arg min_q D(y, ỹ) + R(q), where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints (Fig. 1) in the overall optimization procedure. 3. Discussion and Conclusion: DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows improved results to be obtained, especially with a restricted dataset and in the presence of variable sources of noise.
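The regularized formulation q̂* = arg min_q D(y, ỹ) + R(q) can be illustrated with a toy numerical sketch. This is not the authors' method: it assumes a linearised forward operator A in place of the PDE solve, a squared-error data-fidelity term for D, and a classical Tikhonov regulariser for R(q); all names, sizes, and parameter values are illustrative.

```python
import numpy as np

def reconstruct(A, y, lam=0.1, lr=0.01, n_iters=2000):
    """Minimise D(y, A q) + R(q), with D the squared error and
    Tikhonov R(q) = 0.5 * lam * ||q||^2, via plain gradient descent.

    A stands in for a (here linearised) forward model mapping
    parameters q to observations y."""
    q = np.zeros(A.shape[1])
    for _ in range(n_iters):
        residual = A @ q - y
        # Gradient of 0.5*||Aq - y||^2 + 0.5*lam*||q||^2 w.r.t. q
        grad = A.T @ residual + lam * q
        q -= lr * grad
    return q

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))        # toy forward operator
q_true = rng.normal(size=10)
y = A @ q_true + 0.01 * rng.normal(size=50)  # noisy observations

q_hat = reconstruct(A, y, lam=0.1)
print(np.linalg.norm(q_hat - q_true) / np.linalg.norm(q_true))
```

In the hybrid setting described above, the hand-written Tikhonov term would be replaced by a learned regulariser, and q would be further constrained to the range of an auto-decoder network.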

Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization

Procedia PDF Downloads 66
591 Community Observatory for Territorial Information Control and Management

Authors: A. Olivi, P. Reyes Cabrera

Abstract:

Ageing and urbanization are two of the main trends that characterize the twenty-first century. Both are especially accelerated in the emerging countries of Asia and Latin America. Chile is one of the countries in the Latin American region where the demographic transition to ageing is becoming increasingly visible. The challenges that the new demographic scenario poses to urban administrators call for innovative solutions to maximize the functional and psycho-social benefits derived from the relationship between older people and the environment in which they live. Although mobility is central to people's everyday practices and social relationships, it is not distributed equitably. On the contrary, it can be considered another factor of inequality in our cities. Older people are a group particularly sensitive and vulnerable with respect to mobility. In this context, based on the ageing-in-place strategy and following the social innovation approach within a spatial context, the "Community Observatory of Territorial Information Control and Management" project aims at the collective search for and validation of solutions for the satisfaction of the specific mobility and accessibility needs of urban aged people. Specifically, the Observatory intends to: i) promote the direct participation of the aged population in order to generate relevant information on the territorial situation and the satisfaction of the mobility needs of this group; ii) co-create dynamic and efficient mechanisms for reporting and updating territorial information; iii) increase the capacity of the local administration to plan and manage solutions to environmental problems at the neighborhood scale. Based on a participatory mapping methodology and on the application of digital technology, the Observatory designed and developed, together with aged people, a crowdsourcing platform for smartphones, called DIMEapp, for reporting environmental problems affecting mobility and accessibility.
DIMEapp has been tested at the prototype level in two neighborhoods of the city of Valparaíso. The results achieved in the testing phase have shown high potential to i) contribute to establishing coordination mechanisms between the local government and the local community, and ii) improve a local governance system that guides and regulates the allocation of goods and services destined to solve those problems.

Keywords: accessibility, ageing, city, digital technology, local governance

Procedia PDF Downloads 120
590 Interventions for Children with Autism Using Interactive Technologies

Authors: Maria Hopkins, Sarah Koch, Fred Biasini

Abstract:

Autism is a lifelong disorder that affects one out of every 110 Americans. The deficits that accompany Autism Spectrum Disorders (ASD), such as abnormal behaviors and social incompetence, often make it extremely difficult for these individuals to gain functional independence from caregivers. These long-term implications necessitate an immediate effort to improve social skills among children with an ASD. Any technology that could teach individuals with ASD necessary social skills would not only be invaluable for the individuals affected but could also effect massive savings to society in treatment programs. The overall purpose of the first study was to develop, implement, and evaluate an avatar tutor for social skills training in children with ASD. "FaceSay" was developed as a colorful computer program that contains several different activities designed to teach children specific social skills, such as eye gaze, joint attention, and facial recognition. The children with ASD were asked to attend to FaceSay or a control painting computer game for six weeks. Children with ASD who received the training had an increase in emotion recognition, F(1, 48) = 23.04, p < 0.001 (adjusted Ms 8.70 and 6.79, respectively), compared to the control group. In addition, children who received the FaceSay training had higher post-test scores in facial recognition, F(1, 48) = 5.09, p < 0.05 (adjusted Ms: 38.11 and 33.37, respectively), compared to controls. The findings provide information about the benefits of computer-based training for children with ASD. Recent research suggests the value of also using socially assistive robots with children who have an ASD. Researchers investigating robots as tools for therapy in ASD have reported increased engagement, increased levels of attention, and novel social behaviors when robots are part of the social interaction.
The overall goal of the second study was to develop a social robot designed to teach children specific social skills such as emotion recognition. The robot is approachable, with both an animal-like appearance and features of a human face (i.e., eyes, eyebrows, mouth). The feasibility of the robot is being investigated in children ages 7-12 to explore whether the social robot is capable of forming different facial expressions that accurately display emotions similar to those observed in the human face. The findings of this study will be used to create a potentially effective and cost-efficient therapy for improving the cognitive-emotional skills of children with autism. Implications and study findings on using the robot as an intervention tool will be discussed.

Keywords: autism, intervention, technology, emotions

Procedia PDF Downloads 368
589 Ganga Rejuvenation through Forestation and Conservation Measures in Riverscape

Authors: Ombir Singh

Abstract:

In spite of the religious and cultural pre-dominance of the river Ganga in the Indian ethos, fragmentation and degradation of the river have continued down the ages. Recognizing the national concern over the environmental degradation of the river and its basin, the Ministry of Water Resources, River Development & Ganga Rejuvenation (MoWR, RD&GR), Government of India, has initiated a number of pilot schemes for the rejuvenation of the river Ganga under the ‘Namami Gange’ Programme. Considering the diversity, complexity, and intricacies of forest ecosystems, the pivotal multiple functions performed by them, and their inter-connectedness with highly dynamic river ecosystems, the ministry has planned forestry interventions all along the river Ganga, from its origin at Gaumukh, Uttarakhand, to its mouth at Ganga Sagar, West Bengal. To that end, the Forest Research Institute (FRI), in collaboration with the National Mission for Clean Ganga (NMCG), has prepared a Detailed Project Report (DPR) on Forestry Interventions for Ganga. The Institute adopted an extensive consultative process at the national and state levels, involving various stakeholders relevant in the context of the river Ganga, and employed a science-based methodology including the use of remote sensing and GIS technologies for geo-spatial analysis, modeling, and prioritization of sites for the proposed forestation and conservation interventions. Four sets of field data formats were designed to obtain field-based information for forestry interventions, mainly plantations and conservation measures along the river course. In response, five stakeholder State Forest Departments submitted more than 8,000 data sheets to the Institute. In order to analyze the voluminous field data received from the five participating states, the Institute also developed software to collate and analyze the data and generate reports on proposed sites in the Ganga basin.
FRI has developed potential plantation and treatment models for the proposed forestry and other conservation measures in the three major types of landscape components visualized in the Ganga riverscape: (i) natural, (ii) agricultural, and (iii) urban landscapes. The suggested plantation models broadly vary between the Uttarakhand Himalayas and the Ganga Plains in the five participating states. Besides extensive plantations in the three types of landscapes within the riverscape, the DPR envisions various conservation measures, such as soil and water conservation, riparian wildlife management, wetland management, and bioremediation and bio-filtration, as well as supporting activities such as policy and law interventions, concurrent research, monitoring and evaluation, and mass awareness campaigns. The DPR also incorporates the details of the implementation mechanism and the budget provisioned for the different components of the project, besides the state-wise allocation of budget to the five implementing agencies, the national partner organizations, and the Nodal Ministry.

Keywords: conservation, Ganga, river, water, forestry interventions

Procedia PDF Downloads 142
588 Application of Multidimensional Model of Evaluating Organisational Performance in Moroccan Sport Clubs

Authors: Zineb Jibraili, Said Ouhadi, Jorge Arana

Abstract:

Introduction: Organizational performance is recognized by some theorists as a one-dimensional concept, and by others as multidimensional. This concept, which is already difficult to apply in traditional companies, is even harder to identify, measure, and manage where voluntary organizations are concerned, essentially because of the complexity of that form of organization: sport clubs are characterized by multiple goals and multiple constituencies. Indeed, the new culture of professionalization and modernization around organizational performance creates new pressures from the state, sponsors, members, and other stakeholders, which have required these sport organizations to become more performance-oriented, or to build their capacity in order to better manage their organizational performance. The evaluation of performance can be made by evaluating the input (e.g., available resources), throughput (e.g., processing of the input) and output (e.g., goals achieved) of the organization. In non-profit organizations (NPOs), questions of performance have become increasingly important in the world of practice. To our knowledge, most studies have used the same methods to evaluate performance in non-profit sport organizations (NPSOs), but no recent study has proposed a club-specific model. Based on a review of the studies that specifically addressed the organizational performance (and effectiveness) of NPSOs at the operational level, the present paper aims to provide a multidimensional framework in order to understand, analyse and measure the organizational performance of sport clubs. This paper combines all the dimensions found in the literature and selects those best suited to the model that we develop for the case of Moroccan sport clubs. Method: We propose to apply our unified model of evaluating organizational performance, which takes into account all the limitations found in the literature.
For this purpose, we use a qualitative study on a sample of Moroccan sport clubs (football, basketball, handball, and volleyball). The sample comprises data from sport clubs participating in the first division of the professional football league over the period from 2011 to 2016. Each club had to meet specific criteria in order to be included in the sample: 1. Each club must have full financial data published in its annual financial statements, audited by an independent chartered accountant. 2. Each club must have sufficient data regarding its sport and financial performance. 3. Each club must have participated at least once in the 1st division of the professional football league. Result: The study showed that the dimensions that constitute the model exist in the field, with some small modifications. The correlations between the different dimensions are positive. Discussion: The aim of this study is to test, for the Moroccan case, the unified model that emerged from earlier and narrower approaches. Using the input-throughput-output model as a sketch of efficiency, it was possible to identify and define five dimensions of organizational effectiveness applied to this field of study.

Keywords: organisational performance, multidimensional model, evaluation of organizational performance, sport clubs

Procedia PDF Downloads 310
587 Safeguarding the Cloud: The Crucial Role of Technical Project Managers in Security Management for Cloud Environments

Authors: Samuel Owoade, Zainab Idowu, Idris Ajibade, Abel Uzoka

Abstract:

Cloud computing adoption continues to soar, with 83% of enterprise workloads estimated to be in the cloud by 2022. However, this rapid migration raises security concerns, necessitating strong security management solutions to safeguard sensitive data and essential applications. This paper investigates the critical role of technical project managers in orchestrating security management initiatives for cloud environments, evaluating their responsibilities, challenges, and best practices for assuring the resilience and integrity of cloud infrastructures. Drawing from a comprehensive review of industry reports and interviews with cloud security experts, this research highlights the multifaceted landscape of security management in cloud environments. Despite the rapid adoption of cloud services, only 25% of organizations have matured their cloud security practices, indicating a pressing need for effective management strategies. This paper proposes a strategy framework adapted to the demands of technical project managers, outlining the important components of effective cloud security management. Notably, 76% of firms identify misconfiguration as a major source of cloud security incidents, underlining the significance of proactive risk assessment and constant monitoring. Furthermore, the study emphasizes the importance of technical project managers in facilitating cross-functional collaboration, bridging the gap between cybersecurity professionals, cloud architects, compliance officers, and IT operations teams. With 68% of firms reporting difficulties in integrating security policies into their cloud systems, effective communication and collaboration are critical to success. Case studies from industry leaders illustrate the practical use of security management projects in cloud settings.
These examples demonstrate the importance of technical project managers in using their expertise to address obstacles and generate meaningful outcomes, with 92% of firms reporting improved security practices after implementing proactive security management tactics. In conclusion, this research underscores the critical role of technical project managers in safeguarding cloud environments against evolving threats. By embracing their role as guardians of the cloud realm, project managers can mitigate risks, optimize resource utilization, and uphold the trust and integrity of cloud infrastructures in an era of digital transformation.

Keywords: cloud security, security management, technical project management, cybersecurity, cloud infrastructure, risk management, compliance

Procedia PDF Downloads 43
586 Tagging a corpus of Media Interviews with Diplomats: Challenges and Solutions

Authors: Roberta Facchinetti, Sara Corrizzato, Silvia Cavalieri

Abstract:

Increasing interconnection between data digitalization and linguistic investigation has given rise to unprecedented potentialities and challenges for corpus linguists, who need to master IT tools for data analysis and text processing, as well as to develop techniques for efficient and reliable annotation in specific mark-up languages that encode documents in a format that is both human- and machine-readable. In the present paper, the challenges emerging from the compilation of a linguistic corpus will be taken into consideration, focusing on the English language in particular. To do so, the case study of the InterDiplo corpus will be illustrated. The corpus, currently under development at the University of Verona (Italy), represents a novelty in terms both of the data included and of the tag set used for its annotation. The corpus covers media interviews and debates with diplomats and international operators conversing in English with journalists who do not share the same lingua-cultural background as their interviewees. To date, this appears to be the first tagged corpus of international institutional spoken discourse and will be an important database not only for linguists interested in corpus analysis but also for experts operating in international relations. In the present paper, special attention will be dedicated to the structural mark-up, parts-of-speech annotation, and tagging of discursive traits, which are the innovative parts of the project, being the result of a thorough study to find the solution best suited to the analytical needs of the data. Several aspects will be addressed, with special attention to the tagging of the speakers’ identity, the communicative events, and anthroponyms. Prominence will be given to the annotation of question/answer exchanges to investigate the interlocutors’ choices and how such choices impact communication.
Indeed, the automated identification of questions, in relation to the expected answers, is functional to understanding how interviewers elicit information as well as how interviewees provide their answers to fulfill their respective communicative aims. A detailed description of the aforementioned elements will be given using the InterDiplo-Covid19 pilot corpus. Our preliminary analysis of the data will highlight the viable solutions found in the construction of the corpus in terms of XML conversion, metadata definition, tagging system, and discursive-pragmatic annotation to be included via Oxygen.
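As an illustration of the kind of question/answer mark-up described above, the sketch below parses a tiny XML fragment with a hypothetical tag set; the element and attribute names here are invented for illustration and are not InterDiplo’s actual scheme:

```python
# Parse a hypothetical question/answer annotation fragment and pair each
# question's type with the answering strategy. Tag names are illustrative only.
import xml.etree.ElementTree as ET

sample = """
<exchange id="ex1">
  <question speaker="journalist" type="wh">Why was the summit postponed?</question>
  <answer speaker="diplomat" strategy="hedging">We are still assessing the situation.</answer>
</exchange>
"""

root = ET.fromstring(sample)
pairs = [(q.get("type"), a.get("strategy"))
         for q, a in zip(root.findall("question"), root.findall("answer"))]
print(pairs)  # [('wh', 'hedging')]
```

A query of this kind is what makes it possible to relate elicitation moves to answering strategies across the whole corpus once the exchanges are tagged.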

Keywords: spoken corpus, diplomats’ interviews, tagging system, discursive-pragmatic annotation, English linguistics

Procedia PDF Downloads 174
585 Bioefficiency of Cinnamomum verum Loaded Niosomes and Its Microbicidal and Mosquito Larvicidal Activity against Aedes aegypti, Anopheles stephensi and Culex quinquefasciatus

Authors: Aasaithambi Kalaiselvi, Michael Gabriel Paulraj, Ekambaram Nakkeeran

Abstract:

Emerging mosquito vector-borne diseases are considered a perpetual problem in tropical countries globally. The outbreak of several diseases such as chikungunya, zika virus infection and dengue fever has created a massive threat to the living population. Frequent usage of synthetic insecticides like Dichloro Diphenyl Trichloroethane (DDT) has had adverse effects on humans as well as the environment. Since no durable vaccines, prevention, treatment or drugs are available for these pathogenic vectors, the WHO is concerned with eradicating their breeding sites effectively, without side effects on humans and the environment, by turning to plant-derived, natural, eco-friendly bio-insecticides. The aim of this study is to investigate the larvicidal potency of Cinnamomum verum essential oil (CEO) loaded niosomes. Cholesterol and the surfactant variants Span 20, 60 and 80 were used in synthesizing CEO-loaded niosomes using the transmembrane pH gradient method. The synthesized CEO-loaded niosomes were characterized by zeta potential, particle size, Fourier Transform Infrared Spectroscopy (FT-IR), GC-MS and SEM analysis to evaluate their charge, size, functional properties, composition of secondary metabolites and morphology. The Z-average size of the formed niosomes was 1870.84 nm, and they had good stability with a zeta potential of -85.3 mV. The entrapment efficiency of the CEO-loaded niosomes was determined by UV-Visible spectrophotometry. The bio-potency of CEO-loaded niosomes was assessed against gram-positive (Bacillus subtilis) and gram-negative (Escherichia coli) bacteria and fungi (Aspergillus fumigatus and Candida albicans) at various concentrations. The larvicidal activity was evaluated against II to IV instar larvae of Aedes aegypti, Anopheles stephensi and Culex quinquefasciatus at various concentrations for 24 h, and the LC₅₀ and LC₉₀ values were calculated from the mortality rates. 
The results showed that CEO-loaded niosomes have high larvicidal efficiency against the mosquito larvae. The results suggest that niosomes could be used in various biotechnology and drug delivery applications with greater stability by altering the drug of interest.
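As a minimal illustration of how an LC₅₀ value can be derived from bioassay mortality data, the sketch below interpolates the 50% point on a log-dose scale; the dose-response numbers are invented for illustration, and probit analysis (not shown) is the standard method in practice:

```python
import math

def lc50_from_bioassay(concentrations, mortalities):
    """Estimate LC50 by linear interpolation of mortality against
    log10(concentration) between the two doses bracketing 50% mortality.
    A simple sketch; probit analysis is the standard approach."""
    points = list(zip(concentrations, mortalities))
    for (c1, m1), (c2, m2) in zip(points, points[1:]):
        if m1 <= 0.5 <= m2:
            x1, x2 = math.log10(c1), math.log10(c2)
            x = x1 + (0.5 - m1) * (x2 - x1) / (m2 - m1)
            return 10 ** x
    raise ValueError("50% mortality not bracketed by the tested doses")

# Hypothetical dose-response data (ppm, fraction dead after 24 h)
doses = [10, 20, 40, 80, 160]
dead_fraction = [0.05, 0.20, 0.45, 0.70, 0.95]
print(lc50_from_bioassay(doses, dead_fraction))  # roughly 46 ppm
```

The LC₉₀ is obtained the same way by bracketing the 0.9 mortality level instead of 0.5.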

Keywords: Cinnamomum verum, niosomes, entrapment efficiency, bactericidal and fungicidal, mosquito larvicidal activity

Procedia PDF Downloads 156
584 Using Genre Analysis to Teach Contract Negotiation Discourse Practices

Authors: Anthony Townley

Abstract:

Contract negotiation is fundamental to commercial law practice. For this study, genre and discourse analytical methodology was used to examine the legal negotiation of a Merger & Acquisition (M&A) deal undertaken by legal and business professionals in English across different jurisdictions in Europe. While some of the most delicate negotiations involved in this process were carried out face-to-face or over the telephone, these were generally progressed more systematically – and on the record – in the form of emails, email attachments, and as comments and amendments recorded in successive ‘marked-up’ versions of the contracts under negotiation. This large corpus of textual data was originally obtained by the author, in 2012, for the purpose of doctoral research. For this study, the analysis is particularly concerned with the use of emails and covering letters to exchange legal advice about the negotiations. These two genres help to stabilize and progress the negotiation process and account for negotiation activities. Swalesian analysis of functional Moves and Steps identified structural similarities and differences between these text types, as well as certain salient discursive features within them. The analytical findings also indicate how particular linguistic strategies are more appropriately and effectively associated with one legal genre than another. Intertextuality is an important dimension of contract negotiation discourse, and this study also examined how the discursive relationships between the different texts influence the way that texts are constructed. In terms of materials development, the research findings can contribute to more authentic English for Legal & Business Purposes pedagogies for students and novice lawyers and business professionals. The findings can first be used to design discursive maps that provide learners with a coherent account of the intertextual nature of the contract negotiation process. 
These discursive maps can then function as a framework in which to present detailed findings about the textual and structural features of the text types by applying the Swalesian genre analysis. Based on this acquired knowledge of the textual nature of contract negotiation, the authentic discourse materials can then be used to provide learners with practical opportunities to role-play negotiation activities and experience professional ways of thinking and using language in preparation for the written discourse challenges they will face in this important area of legal and business practice.

Keywords: English for legal and business purposes, discourse analysis, genre analysis, intertextuality, pedagogical materials

Procedia PDF Downloads 135
583 Architectural Robotics in Micro Living Spaces: An Approach to Enhancing Wellbeing

Authors: Timothy Antoniuk

Abstract:

This paper will demonstrate why the most successful and livable cities in the future will require multi-disciplinary designers to develop a deep understanding of peoples’ changing lifestyles, and why new generations of deeply integrated products, services and experiences need to be created. Disseminating research from the UNEP Creative Economy Reports and a variety of other consumption and economic statistics, a compelling argument will be made that it is peoples’ living spaces that offer the easiest and most significant affordances for inducing positive changes in their wellbeing and in a city’s economic and environmental prosperity. This idea of leveraging happiness, wellbeing and prosperity through new concepts and typologies of ‘home’ puts people and their needs, wants, desires, aspirations and lifestyles at the beginning of the design process, not at the end, as so often occurs with current-day multi-unit housing construction. As an important part of the creative-reflective and statistical comparisons necessary for this ongoing body of research and practice, Professor Antoniuk created the Micro Habitation Lab (mHabLab) in 2016. By focusing on testing the functional and economic feasibility of activating small spaces with different types of architectural robotics, a variety of movable, expandable and interactive objects have been hybridized and integrated into the architectural structure of the Lab. Allowing the team to test new ideas continually and accumulate thousands of points of feedback from everyday consumers, a series of ongoing open houses is letting the public-at-large see, physically engage with, and give feedback on the items they find most and least valuable. 
This iterative approach to testing has exposed two key findings: firstly, that there is a clear opportunity to improve the macro and micro functionality of small living spaces; and secondly, that allowing people to physically alter smaller elements of their living space lessens feelings of frustration and enhances feelings of pride and a deeper perception of “home”. Equally interesting is the group of new research questions being exposed, which relate to: the duality of space; how people can be in two living spaces at one time; and how small living spaces are moving the Extended Home into the public realm.

Keywords: architectural robotics, extended home, interactivity, micro living spaces

Procedia PDF Downloads 160
582 Experimental and Numerical Investigations on the Vulnerability of Flying Structures to High-Energy Laser Irradiations

Authors: Vadim Allheily, Rudiger Schmitt, Lionel Merlat, Gildas L'Hostis

Abstract:

Inflight devices are nowadays major actors in both military and civilian landscapes. Missiles, mortars, rockets and, over the last decade, drones are increasingly sophisticated, and it is today a priority to develop ever more efficient defensive systems against all these potential threats. In this frame, recent high-energy laser (HEL) weapon prototypes have demonstrated extremely good operational ability to shoot down flying targets several kilometers away within seconds. Whereas test outcomes are promising from both experimental and cost-related perspectives, the deterioration process still needs to be explored in order to closely predict the effects of a high-energy laser irradiation on typical structures, leading finally to an effective design of laser sources and protective countermeasures. Laser-matter interaction research has a history of more than 40 years at the French-German Research Institute (ISL). Those studies were tied to laser source development in the mid-60s, mainly for specific metrology of fast phenomena. Nowadays, laser-matter interaction can be viewed as the terminal ballistics of conventional weapons, with the unique capability of laser beams to carry energy at light velocity over large ranges. In recent years, a strong focus was placed at ISL on the interaction of laser radiation with metal targets such as artillery shells. Due to the absorbed laser radiation and the resulting heating process, an encased explosive charge can be initiated, resulting in deflagration or even detonation of the projectile in flight. Drones and Unmanned Air Vehicles (UAVs) are of utmost interest in modern warfare. Those aerial systems are usually made of polymer-based composite materials, whose complexity involves new scientific challenges. 
Alongside this main laser-matter interaction activity, a great deal of experimental and numerical knowledge has been gathered at ISL in domains like spectrometry, thermodynamics and mechanics. Techniques and devices were developed to study each aspect of this topic separately; optical characterization, thermal investigations, chemical reaction analysis and mechanical examinations are carried out to estimate the essential key values. Results from these diverse tasks are then incorporated into analytic or finite-element (FE) numerical models that were elaborated, for example, to predict the thermal effects on explosive charges or the mechanical failure of structures. These simulations highlight the influence of each phenomenon during the laser irradiation and forecast experimental observations with good accuracy.
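As a rough illustration of the kind of thermal modeling described, the sketch below runs a one-dimensional explicit finite-difference simulation of laser surface heating of a slab; the material properties (aluminium-like) and absorbed flux are generic placeholder values chosen for illustration, not ISL’s actual parameters or models:

```python
# 1D explicit finite-difference heat conduction with an absorbed laser flux
# at the front face and an insulated back face. All values are illustrative.
k, rho, cp = 200.0, 2700.0, 900.0          # W/m.K, kg/m3, J/kg.K
alpha = k / (rho * cp)                     # thermal diffusivity, m2/s
q_abs = 1e7                                # absorbed laser flux, W/m2
n = 101                                    # grid points across a 1 cm slab
dx = 0.01 / (n - 1)
dt = 0.4 * dx * dx / alpha                 # stable explicit time step
T = [300.0] * n                            # initial temperature, K

for _ in range(2000):
    Tn = T[:]
    for i in range(1, n - 1):              # interior conduction update
        T[i] = Tn[i] + alpha * dt / dx**2 * (Tn[i+1] - 2*Tn[i] + Tn[i-1])
    # front face: absorbed flux in, conduction into the bulk
    T[0] = Tn[0] + dt / (rho * cp * dx) * (q_abs + k * (Tn[1] - Tn[0]) / dx)
    T[-1] = T[-2]                          # insulated back face

print(T[0], T[-1])  # surface is much hotter than the back face
```

A model of this shape is the simplest precursor to the FE simulations mentioned above; phase change, temperature-dependent properties and 3D geometry would be needed for realistic predictions.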

Keywords: composite materials, countermeasure, experimental work, high-energy laser, laser-matter interaction, modeling

Procedia PDF Downloads 248
581 Selection of Suitable Reference Genes for Assessing Endurance Related Traits in a Native Pony Breed of Zanskar at High Altitude

Authors: Prince Vivek, Vijay K. Bharti, Manishi Mukesh, Ankita Sharma, Om Prakash Chaurasia, Bhuvnesh Kumar

Abstract:

High endurance performance in equids requires adaptive changes involving physio-biochemical and molecular responses in an attempt to regain homeostasis. We hypothesized that identifying suitable reference genes might support the assessment of endurance-related traits in ponies at high altitude and help select individuals with strong endurance traits. A total of 12 Zanskar pony mares were divided into three groups, group-A (no load), group-B (60 kg backpack load) and group-C (80 kg backpack load), and subjected to a load-carrying protocol on a steep 4 km uphill climb over a gravelly, uneven, rocky track at an altitude of 3292 m to 3500 m (endpoint). Blood was collected before and immediately after the load carry into sodium heparin anticoagulant, and peripheral blood mononuclear cells were separated for total RNA isolation and subsequent cDNA synthesis. Real-time PCR reactions were carried out to evaluate the mRNA expression profiles of a panel of putative internal control genes (ICGs) related to different functional classes, namely glyceraldehyde 3-phosphate dehydrogenase (GAPDH), β₂ microglobulin (β₂M), β-actin (ACTB), ribosomal protein S18 (RS18), hypoxanthine-guanine phosphoribosyltransferase (HPRT), ubiquitin B (UBB), ribosomal protein L32 (RPL32), transferrin receptor protein (TFRC) and succinate dehydrogenase complex subunit A (SDHA), for normalizing the real-time quantitative polymerase chain reaction (qPCR) data of the native ponies. Three different algorithms, geNorm, NormFinder and BestKeeper, were used to evaluate the stability of the reference genes. The results showed that GAPDH was the most stable gene, and the best combination of two genes was TFRC and β₂M. In conclusion, the geometric mean of GAPDH, TFRC and β₂M might be used for accurate normalization of transcriptional data when assessing endurance-related traits in Zanskar ponies during load carrying.
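As a minimal sketch of the geometric-mean normalization the conclusion recommends, the snippet below converts Cq values to relative quantities and normalizes a target gene against GAPDH, TFRC and β₂M; all Cq numbers are invented for illustration, and a perfect amplification efficiency of 2 is assumed:

```python
# geNorm-style normalization: divide the target gene's relative quantity by
# the geometric mean of the reference genes' relative quantities.
import math

def geo_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

def relative_quantity(cq, cq_min, efficiency=2.0):
    # Lower Cq = more template; the most expressed sample is scaled to 1.
    return efficiency ** (cq_min - cq)

# Hypothetical Cq values for one post-exercise sample
refs = {"GAPDH": 18.2, "TFRC": 22.5, "B2M": 20.1}
target_cq = 25.0

cq_min = min(refs.values())
norm_factor = geo_mean([relative_quantity(cq, cq_min) for cq in refs.values()])
normalized_target = relative_quantity(target_cq, cq_min) / norm_factor
print(normalized_target)
```

Using the geometric rather than arithmetic mean keeps one highly expressed reference gene from dominating the normalization factor.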

Keywords: endurance exercise, ubiquitin B (UBB), β₂ microglobulin (β₂M), high altitude, Zanskar ponies, reference gene

Procedia PDF Downloads 126
580 Analysis and Comparison of Asymmetric H-Bridge Multilevel Inverter Topologies

Authors: Manel Hammami, Gabriele Grandi

Abstract:

In recent years, multilevel inverters have become more attractive for single-phase photovoltaic (PV) systems, due to their known advantages over conventional H-bridge pulse-width-modulated (PWM) inverters. They offer improved output waveforms, smaller filter size, lower total harmonic distortion (THD), higher output voltages, among others. The most common multilevel converter topologies presented in the literature are the neutral-point-clamped (NPC), flying capacitor (FC) and cascaded H-bridge (CHB) converters. In both NPC and FC configurations, the number of components increases drastically with the number of levels, which leads to a complex control strategy, high volume, and cost. Increasing the number of levels in the cascaded H-bridge configuration, by contrast, is a flexible solution; however, it needs isolated power sources for each stage, and it can be applied to PV systems only in the case of PV sub-fields. In order to improve the ratio between the number of output voltage levels and the number of components, several hybrid and asymmetric multilevel inverter topologies have been proposed in the literature, such as the FC asymmetric H-bridge (FCAH) and the NPC asymmetric H-bridge (NPCAH) topologies. Another asymmetric multilevel inverter configuration that could have interesting applications is the cascaded asymmetric H-bridge (CAH), which is based on a modular half-bridge (two switches and one capacitor, also called a level doubling network, LDN) cascaded to a full H-bridge in order to double the number of output voltage levels. This solution has the same number of switches as the above-mentioned AH configurations (i.e., six), and just one capacitor (as in the FCAH). The CAH is becoming popular due to its simple, modular and reliable structure, and it can be considered as a retrofit that can be added in series to an existing H-bridge configuration in order to double the output voltage levels. 
In this paper, an original and effective method for the analysis of the DC-link voltage ripple is given for single-phase asymmetric H-bridge multilevel inverters based on a level doubling network (LDN). Different possible configurations of the asymmetric H-bridge multilevel inverters are considered, and the input voltage and current are analytically determined and numerically verified in Matlab/Simulink for the case of cascaded asymmetric H-bridge multilevel inverters. A comparison between the FCAH and CAH configurations is made on the basis of the DC current and voltage ripple seen by the DC source (i.e., the PV system). The peak-to-peak DC current and voltage ripple amplitudes are analytically calculated over the fundamental period as functions of the modulation index. On the basis of the maximum peak-to-peak values of the low-frequency and switching ripple voltage components, the DC capacitors can be designed. Reference is made to unity output power factor, as in most grid-connected PV generation systems. Simulation results will be presented in the full paper in order to prove the effectiveness of the proposed developments in all operating conditions.

Keywords: asymmetric inverters, dc-link voltage, level doubling network, single-phase multilevel inverter

Procedia PDF Downloads 199
579 Anatomical Investigation of Superficial Fascia Relationships with the Skin and Underlying Tissue in the Greyhound Rump, Thigh, and Crus

Authors: Oday A. Al-Juhaishi, Sa`ad M. Ismail, Hung-Hsun Yen, Christina M. Murray, Helen M. S. Davies

Abstract:

The functional anatomy of the fascia in the greyhound is still poorly understood and incompletely described. Basic knowledge of fascia stems mainly from anatomical, histological and ultrastructural analyses. In this study, twelve hindlimb specimens from six fresh greyhound cadavers (3 male, 3 female) were used to examine the topographical relationships of the superficial fascia with the skin and underlying tissue. The first incision was made along the dorsal midline from the level of the thoracolumbar junction caudally to the level of the mid sacrum. The second incision began at the level of the first incision and extended along the midline of the lateral aspect of the hindlimb distally, to just proximal to the tarsus, and the skin margins were carefully separated to observe the connective tissue links between the skin and superficial fascia, the attachment points of the fascia, and the relationships of the fascia with the blood vessels that supply the skin. A digital camera was used to record the anatomical features as they were revealed. The dissections identified fibrous septa connecting the skin with the superficial fascia and deep fascia in specific areas. Adipose tissue was found to be very rare within the superficial fascia in these specimens. On the extensor aspects of some joints, a fusion between the superficial fascia and deep fascia was observed. This fusion created a subcutaneous bursa in the following areas: a prepatellar bursa of the stifle, a tarsal bursa caudal to the calcaneus bone, and an ischiatic bursa caudal to the ischiatic tuberosity. The evaluation of blood vessels showed that perforating vessels passed through the fibrous septa perpendicularly to supply the skin, with the largest branch noted in the gluteal area. The attachment points between the superficial fascia and skin were mainly found on the flexor aspects of the joints, such as caudal to the stifle joint. 
The numerous fibrous septa between the superficial fascia and skin that have been identified in some areas may support the blood vessels that penetrate the fascia and enter the skin, while allowing for movement between the tissue planes. The subcutaneous bursae between the skin and the superficial fascia, where it is fused with the deep fascia, may help decrease friction between moving areas. The adhesion points may be related to the integrity and loading of the skin. The attachment points fix the skin and appear to divide the hindlimb into anatomical compartments.

Keywords: attachment points, fibrous septa, greyhound, subcutaneous bursa, superficial fascia

Procedia PDF Downloads 352
578 Monitoring Memories by Using Brain Imaging

Authors: Deniz Erçelen, Özlem Selcuk Bozkurt

Abstract:

The course of daily human life calls for memories and for remembering the time and place of certain events. Recalling memories takes up a substantial amount of an individual's time. Unfortunately, scientists lack the technology to fully understand and observe the different brain regions that interact to form or retrieve memories. The hippocampus, a complex brain structure located in the temporal lobe, plays a crucial role in memory. The hippocampus forms memories and allows the brain to retrieve them by ensuring that neurons fire together, a process called “neural synchronization.” Sadly, the hippocampus is known to deteriorate with age. Proteins and hormones that repair and protect cells in the brain typically decline as an individual ages. With the deterioration of the hippocampus, an individual becomes more prone to memory loss. Memory loss often starts off as mild but may evolve into serious medical conditions such as dementia and Alzheimer’s disease. In their quest to fully comprehend how memories work, scientists have created many different kinds of technology to examine the brain and neural pathways. For instance, Magnetic Resonance Imaging - or MRI - is used to collect detailed images of an individual's brain anatomy. In order to monitor and analyze brain functions, a different version of this machine, called Functional Magnetic Resonance Imaging - or fMRI - is used. The fMRI is a neuroimaging procedure conducted while the target brain regions are active. It measures brain activity by detecting changes in blood flow associated with neural activity: neurons need more oxygen when they are active, and the fMRI measures the difference in magnetization between oxygen-rich and oxygen-poor blood. This way, there is a detectable difference across brain regions, and scientists can monitor them. Electroencephalography - or EEG - is also a significant way to monitor the human brain. 
The EEG is more versatile and cost-efficient than an fMRI. An EEG measures the electrical activity generated by the cortical layers of the brain, allowing scientists to record brain processes that occur after external stimuli. EEGs have a very high temporal resolution, which makes it possible to measure synchronized neural activity and almost precisely track the contents of short-term memory. Science has come a long way in monitoring memories using these kinds of devices, which have made the inspection of neurons and neural pathways far more intense and detailed.

Keywords: brain, EEG, fMRI, hippocampus, memories, neural pathways, neurons

Procedia PDF Downloads 73