Search results for: international order
1180 Most Recent Lifespan Estimate for the Itaipu Hydroelectric Power Plant Computed by Using Borland and Miller Method and Mass Balance in Brazil, Paraguay
Authors: Anderson Braga Mendes
Abstract:
Itaipu Hydroelectric Power Plant is located on the Paraná River, which forms a natural boundary between Brazil and Paraguay; thus, the facility is shared by both countries. Itaipu is the biggest hydroelectric generator in the world, supplying clean and renewable electrical energy for 17% of Brazil's and 76% of Paraguay's demand. The plant started generating in 1984. It has 20 Francis turbines and an installed capacity of 14,000 MW. Its historic generation record occurred in 2016 (103,098,366 MWh), and from the beginning of its operation until the last day of 2016 the plant produced a cumulative total of 2,415,789,823 MWh. The distinct sedimentologic aspects of the drainage area of the Itaipu Power Plant, from the stretch upstream (Porto Primavera and Rosana dams) to downstream (the Itaipu dam itself), were taken into account in order to best estimate the increase/decrease in sediment yield using data from 2001 to 2016. Such data are collected through a network of 14 automatic sedimentometric stations managed by the company itself and operating on an hourly basis, covering an area of around 136,000 km² (92% of the incremental drainage area of the undertaking). Since 1972, a series of lifespan studies for the Itaipu Power Plant have been made, the first by Hans Albert Einstein at the time of the feasibility studies for the enterprise. From that date onwards, eight further studies were made over the last 44 years, aiming to confer more precision upon the estimates based on more up-to-date data sets. The analysis of each monitoring station showed clear increasing tendencies in sediment yield over the last 14 years, mainly in the Iguatemi, Ivaí, São Francisco Falso and Carapá Rivers, the latter situated in Paraguay, whereas the others lie entirely in Brazilian territory. Five lifespan scenarios considering different sediment yield tendencies were simulated with the aid of the software packages SEDIMENT and DPOSIT, both developed by the author of the present work. Both packages follow the Borland and Miller methodology (the empirical area-reduction method). The soundest of the five scenarios under analysis indicated a lifespan of 168 years, with the reservoir only 1.8% silted by the end of 2016, after 32 years of operation. Besides, the mass balance in the reservoir (water inflows minus outflows) between 1986 and 2016 shows that 2% of the whole Itaipu lake is silted nowadays. Owing to the convergence of both results, which were acquired using different methodologies and independent input data, it is worth concluding that the mathematical modeling is satisfactory and calibrated, thus lending credibility to this most recent lifespan estimate.
Keywords: Borland and Miller method, hydroelectricity, Itaipu Power Plant, lifespan, mass balance
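As a rough illustration of the mass-balance check described above, here is a minimal sketch (not the authors' SEDIMENT/DPOSIT code; the deposit bulk density, reservoir volume and cumulative sediment masses are hypothetical placeholders):

```python
def silted_fraction(sediment_in_t, sediment_out_t, bulk_density_t_m3, reservoir_volume_m3):
    """Fraction of reservoir volume occupied by deposited sediment.

    sediment_in_t / sediment_out_t: cumulative sediment mass entering and
    leaving the reservoir (tonnes), e.g. from gauging-station records.
    bulk_density_t_m3: assumed bulk density of consolidated deposits (t/m^3).
    """
    deposited_volume_m3 = (sediment_in_t - sediment_out_t) / bulk_density_t_m3
    return deposited_volume_m3 / reservoir_volume_m3

# Hypothetical figures, for illustration only: ~29 Gm^3 gross reservoir
# volume, 1.2 t/m^3 deposit density, 700 Mt of sediment in, 4 Mt out.
print(f"{silted_fraction(700e6, 4e6, 1.2, 29e9):.1%}")  # ~2.0% silted
```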
Procedia PDF Downloads 274
1179 Men of Congress in Today’s Brazil: Ethnographic Notes on Neoliberal Masculinities in Support of Bolsonaro
Authors: Joao Vicente Pereira Fernandez
Abstract:
In the context of a democratic crisis, a new wave of authoritarianism prompts domineering male figures to leadership posts worldwide. Although the gendered aspect of this phenomenon has been reasonably documented, recent studies have focused on high-level commanding posts, such as those of president and prime minister, leaving other positions of political power with limited attention. This natural focus of investigation, however powerful, seems to have restricted our understanding of the phenomenon by precluding a more thorough inquiry into its gendered aspects and its consequences for political representation as a whole. Trying to fill this gap, in recent research we examined the election results of Jair Bolsonaro’s party for the Legislative Branch in 2018. We found that the party's proportion of non-male representatives was about average, showing it provided reasonable access of women to the legislature in comparative perspective. However, and perhaps more intuitively, we also found that the elected members of Bolsonaro’s party performed very gendered roles, which allowed us to draw the first lines of the representative profiles gathered around the new-right in Brazil. These results unveiled new horizons for further research, addressing topics that range from the role of women for the new-right in Brazilian institutional politics to the relations between these profiles of representatives, their agendas, and their political and electoral strategies. This article aims to deepen the understanding of some of these profiles in order to lay the groundwork for the second research agenda mentioned above. More specifically, it focuses on two of the three profiles that were drawn predominantly, if not entirely, from masculine subjects in our previous research, with the objective of portraying the masculinity standards mobilized and promoted by them. These profiles (the entrepreneur and the army man) were chosen because of their proximity to both liberal and authoritarian views and, moreover, because they may represent two facets of the new-right that were integrated in a certain way around Bolsonaro in 2018 but that can be reworked in the future. After a brief introduction to the literature on masculinity and politics in times of democratic crisis, we succinctly present the relevant results of our previous research and then describe these two profiles and their masculinities in detail. We adopt a combination of ethnography and discourse analysis, methods that allow us to make sense of the data collected in our previous research as well as of the data gathered for this article: social media posts and interactions between the elected members that inspired these profiles and their supporters. Finally, we discuss our results, presenting our main argument on how these descriptions provide a further understanding of the gendered aspect of liberal authoritarianism, from which to better apprehend its political implications in Brazil.
Keywords: Brazilian politics, gendered politics, masculinities, new-right
Procedia PDF Downloads 121
1178 Quantitative Analysis of the High-Value Bioactive Components of Pre-Germinated and Germinated Pigmented Rice (Oryza sativa L. Cv. Superjami and Superhongmi)
Authors: Lara Marie Pangan Lo, Soo Im Chung, Yao Cheng Zhang, Xingyue Jin, Mi Young Kang
Abstract:
Being the world’s most consumed grain crop, rice (Oryza sativa L.) has seen demand increase, and this has prompted the development of new rice cultivars with higher bio-functional properties than the commonly used white rice. Ordinary rice varieties are already known to be a potential source of a number of nutritional as well as bioactive compounds. To further enhance rice's nutritive value, germination is performed, in addition to making the rice tastier and more palatable when cooked. Pigmented rice, on the other hand, has become increasingly popular in recent years for its greater antioxidant potential and other nutraceutical properties, which can help counter the increasing incidence of metabolic diseases. Combining these two parameters, this research study sought to quantitatively determine, before and after germination, the major bioactive compounds of South Korea’s newly developed purple-pigmented rice grain cultivar Superjami (SJ) and red-pigmented rice grain cultivar Superhongmi (SH), and to compare them against the non-pigmented Normal Brown (NB) rice variety. Powdered rice grain cultivars were subjected to a 72-hour germination period, and the quantities of GABA, γ-oryzanol, ferulic acid, tocopherol and tocotrienol homologues were compared against their pre-germinated condition using γ-amino butyric acid (GABA) analysis and High Performance Liquid Chromatography (HPLC). Results revealed the effectiveness of germination in enhancing the bioactive components in all rice samples. GABA contents in germinated rice cultivars increased by more than 10-fold, following the order SJ > SH > NB. In addition, the purple rice variety (SJ) had higher total γ-oryzanol and ferulic acid contents, which increased by more than 2-fold after germination, followed by the red cultivar SH and then the control, NB. Germinated varieties also possessed higher total tocotrienol content than their pre-germinated state. As for total tocopherol content, SJ had a higher quantity, but the red-pigmented SH (0.16 mg/kg) showed lower total tocopherol content than the control rice NB (0.86 mg/kg). However, all tocopherol and tocotrienol homologues were present only in small amounts (< 3.0 mg/kg) in all pre-germinated and germinated samples. In general, all of the analyzed pigmented rice cultivars were found to possess higher bioactive compound contents than the control NB rice variety. Also, regardless of their strain, germinated rice samples had higher bioactive compound contents than their pre-germinated counterparts, showing the effectiveness of germination in enhancing bioactive constituents. Overall, these results suggest the potential of the pigmented rice varieties as a natural source of nutraceuticals in bio-functional food development.
Keywords: bioactive compounds, germinated rice, superhongmi, superjami
Procedia PDF Downloads 400
1177 Modelling and Assessment of an Off-Grid Biogas Powered Mini-Scale Trigeneration Plant with Prioritized Loads Supported by Photovoltaic and Thermal Panels
Authors: Lorenzo Petrucci
Abstract:
This paper is intended to give insight into the potential use of small-scale off-grid trigeneration systems powered by biogas generated in a dairy farm. The off-grid plant under analysis comprises a dual-fuel genset as well as electrical and thermal storage equipment and an adsorption machine. The loads are the different apparatus used in the dairy farm, a household where the workers live, and a small electric vehicle whose batteries can also be used as a power source in case of emergency. The inclusion of an adsorption machine in the plant is mainly justified by the abundance of thermal energy and the simultaneous high cooling demand associated with the milk-chilling process. In the evaluated operational scenario, our research highlights the importance of prioritizing specific small loads which cannot sustain an interrupted supply of power over time. As a consequence, a photovoltaic and thermal (PVT) panel is included in the plant and is tasked with providing energy independently of potentially disruptive events such as engine malfunctioning or scarce and unstable supplies of fuel. To manage the plant efficiently, an energy dispatch strategy is created in order to control the flow of energy between the power sources and the thermal and electric storages. In this article, we elaborate on models of the equipment, and from these models we extract parameters useful to build load-dependent profiles of the prime movers and storage efficiencies. We show that under reasonable assumptions the analysis provides a sensible estimate of the generated energy. The simulations indicate that a diesel generator sized to a value 25% higher than the total electrical peak demand operates 65% of the time below the minimum acceptable load threshold. To circumvent such a critical operating mode, dump loads are added through the activation and deactivation of small resistors. In this way, the excess electric energy generated can be transformed into useful heat. The combination of PVT and electrical storage to support the prioritized loads in an emergency scenario is evaluated on two different days of the year, having the lowest and highest irradiation values, respectively. The results show that the renewable energy component of the plant can successfully sustain the prioritized loads, and only on a day with very low irradiation levels does it also need the support of the EV's battery. Finally, we show that the adsorption machine can reduce the ice-builder and air-conditioning energy consumption by 40%.
Keywords: hybrid power plants, mathematical modeling, off-grid plants, renewable energy, trigeneration
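The abstract does not give the dispatch rules themselves; the following is a minimal sketch of the kind of priority-based strategy described, with the genset rating, minimum-load fraction, state-of-charge floor and all power figures as hypothetical assumptions:

```python
def dispatch_step(demand_kw, pv_kw, battery_soc,
                  genset_rated_kw=60.0, genset_min_frac=0.3, soc_floor=0.2):
    """One time step of a simple priority-based dispatch (powers in kW).

    PV serves the load first; the battery covers any residual while its
    state of charge stays above a reserve floor kept for priority loads;
    otherwise the genset runs, never below its minimum acceptable loading,
    with a resistive dump load absorbing the difference as useful heat.
    """
    residual = demand_kw - pv_kw
    if residual <= 0:
        return {"genset_kw": 0.0, "dump_kw": 0.0, "to_storage_kw": -residual}
    if battery_soc > soc_floor:
        return {"genset_kw": 0.0, "dump_kw": 0.0, "from_battery_kw": residual}
    setpoint = max(residual, genset_min_frac * genset_rated_kw)
    return {"genset_kw": setpoint, "dump_kw": setpoint - residual}

# Toy step: low PV and a depleted battery force the genset to its minimum
# load of 18 kW, with the surplus 8 kW dumped to resistors as heat.
print(dispatch_step(demand_kw=10.0, pv_kw=0.0, battery_soc=0.1))
```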
Procedia PDF Downloads 175
1176 Improving Student Learning in a Math Bridge Course through Computer Algebra Systems
Authors: Alejandro Adorjan
Abstract:
Universities are motivated to understand the factors contributing to low retention of engineering undergraduates. While the number of precollege students interested in engineering increases, the number of engineering graduates continues to decrease and attrition rates for engineering undergraduates remain high. Calculus 1 (C1) is the entry point of most undergraduate Engineering Science programs and often a prerequisite for Computing Curricula courses. Mathematics continues to be a major hurdle for engineering students, and many students who drop out of engineering specifically cite Calculus as one of the most influential factors in that decision. In this context, creating course activities that increase retention and motivate students to obtain better final results is a challenge. In order to develop several competencies in our students of Software Engineering courses, Calculus 1 at Universidad ORT Uruguay focuses on developing competencies such as capacity for synthesis, abstraction, and problem solving (based on the ACM/AIS/IEEE guidelines). Every semester we reflect on our practice and try to answer the following research question: what kind of teaching approach in Calculus 1 can we design to retain students and obtain better results? Since 2010, Universidad ORT Uruguay has offered a six-week, non-compulsory summer bridge course of preparatory math (to bridge the math gap between high school and university). Last semester was the first time the Department of Mathematics offered the course while students were enrolled in C1. Traditional lectures in this bridge course led students to just transcribe notes from the blackboard. Last semester we proposed a hands-on lab course using Geogebra (interactive geometry and Computer Algebra System (CAS) software) as a math-driven development tool. Students worked in a computer laboratory class and developed most of the tasks and topics in Geogebra. As a result of this approach, several pros and cons were found. The weekly hours of mathematics proved excessive for students and, as the course was non-compulsory, attendance decreased with time. Nevertheless, this activity succeeded in improving final test results, and most students expressed the pleasure of working with this methodology. This technology-oriented teaching approach strengthens the math competencies needed for Calculus 1 and improves student performance, engagement, and self-confidence. It is important for us as teachers to reflect on our practice, including innovative proposals, with the objective of engaging students, increasing retention and obtaining better results. The high degree of motivation and engagement of participants with this methodology exceeded our initial expectations, so we plan to experiment with more groups during the summer so as to validate the preliminary results.
Keywords: calculus, engineering education, PreCalculus, Summer Program
Procedia PDF Downloads 290
1175 Effects of Oxytocin on Neural Response to Facial Emotion Recognition in Schizophrenia
Authors: Avyarthana Dey, Naren P. Rao, Arpitha Jacob, Chaitra V. Hiremath, Shivarama Varambally, Ganesan Venkatasubramanian, Rose Dawn Bharath, Bangalore N. Gangadhar
Abstract:
Objective: Impaired facial emotion recognition is widely reported in schizophrenia. The neuropeptide oxytocin is known to modulate brain regions involved in facial emotion recognition, namely the amygdala, in healthy volunteers. However, its effect on the facial emotion recognition deficits seen in schizophrenia is not well explored. In this study, we examined the effect of intranasal OXT on processing facial emotions and its neural correlates in patients with schizophrenia. Method: 12 male patients (age = 31.08±7.61 years, education = 14.50±2.20 years) participated in this single-blind, counterbalanced functional magnetic resonance imaging (fMRI) study. All participants underwent three fMRI scans: one at baseline, and one each after a single dose of 24 IU intranasal OXT and intranasal placebo. The order of administration of OXT and placebo was counterbalanced, and subjects were blind to the drug administered. Participants performed a facial emotion recognition task presented in a block design with six alternating blocks of faces and shapes. The faces depicted happy, angry or fearful emotions. The images were preprocessed and analyzed using SPM 12. First-level contrasts comparing recognition of emotions and shapes were modelled at the individual subject level. A group-level analysis was performed using the contrasts generated at the first level to compare the effects of intranasal OXT and placebo. The results were thresholded at uncorrected p < 0.001 with a cluster size of 6 voxels. Results: Compared to placebo, intranasal OXT attenuated activity in the inferior temporal, fusiform and parahippocampal gyri (BA 20), premotor cortex (BA 6), middle frontal gyrus (BA 10) and anterior cingulate gyrus (BA 24), and enhanced activity in the middle occipital gyrus (BA 18), inferior occipital gyrus (BA 19), and superior temporal gyrus (BA 22). There were no significant differences in emotion recognition accuracy scores among the baseline (77.3±18.38), oxytocin (82.63±10.92) and placebo (76.62±22.67) conditions. Conclusion: Our results provide further evidence of the modulatory effect of oxytocin in patients with schizophrenia. Single-dose oxytocin resulted in significant changes in the activity of brain regions involved in emotion processing. Future studies need to examine the effectiveness of long-term treatment with OXT for emotion recognition deficits in patients with schizophrenia.
Keywords: recognition, functional connectivity, oxytocin, schizophrenia, social cognition
Procedia PDF Downloads 220
1174 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies to realize low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several features, such as high temporal resolution, achieving up to 1 Mframe/s, and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity; to be more specific, this sensor only captures pixels whose intensity changes. In other words, there is no signal in areas without any intensity change. This makes the sensor more energy efficient than conventional sensors such as RGB cameras, because redundant data are removed at the source. On the other side of the advantages, the data are difficult to handle because the data format is completely different from an RGB image: acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1) and a time stamp; it does not include intensity such as RGB values. Therefore, as existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. In order to overcome the difficulties caused by the data format differences, most prior art builds frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition purposes. However, even when the data can be fed this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of RGB pixel values, polarity information alone is clearly not rich enough. Considering this context, we propose to use the timestamp information as the data representation fed to deep learning. Concretely, we first build frame data divided by a certain time period, then assign an intensity value to each pixel in response to its timestamp within the frame; for example, a high value is given to a recent signal. We expected that this data representation could capture the features, especially of moving objects, because the timestamps encode movement direction and speed. Using this proposed method, we made our own dataset with a DVS fixed on a parked car, to develop an application for a surveillance system that can detect persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a static scene. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
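A minimal sketch of the timestamp-based frame representation described above; the frame period, resolution and max-based normalization are assumptions, as the abstract does not specify them:

```python
import numpy as np

def events_to_timestamp_frame(events, width, height, t_start, period):
    """Build one frame from DVS events, encoding recency as intensity.

    events: iterable of (x, y, polarity, t) tuples with t in seconds.
    Each pixel keeps the normalized timestamp of its latest event in
    [t_start, t_start + period), so recent activity appears brighter.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, _polarity, t in events:
        if t_start <= t < t_start + period:
            frame[y, x] = max(frame[y, x], (t - t_start) / period)
    return frame

# Toy usage: two events at the same pixel; the later one wins (~0.91).
evts = [(3, 2, +1, 0.010), (3, 2, -1, 0.030)]
print(events_to_timestamp_frame(evts, width=8, height=4,
                                t_start=0.0, period=0.033)[2, 3])
```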
Procedia PDF Downloads 97
1173 CSPG4 Molecular Target in Canine Melanoma, Osteosarcoma and Mammary Tumors for Novel Therapeutic Strategies
Authors: Paola Modesto, Floriana Fruscione, Isabella Martini, Simona Perga, Federica Riccardo, Mariateresa Camerino, Davide Giacobino, Cecilia Gola, Luca Licenziato, Elisabetta Razzuoli, Katia Varello, Lorella Maniscalco, Elena Bozzetta, Angelo Ferrari
Abstract:
Canine and human melanoma, osteosarcoma (OSA), and mammary carcinomas are aggressive tumors with common characteristics, making dogs a good model for comparative oncology. Novel therapeutic strategies against these tumors could be useful to both species. In humans, chondroitin sulphate proteoglycan 4 (CSPG4) is a marker involved in tumor progression and could be a candidate target for immunotherapy. Anti-CSPG4 DNA electrovaccination has been shown to be an effective approach for canine malignant melanoma (CMM) [1]. An immunohistochemistry evaluation of CSPG4 expression in tumour tissue is generally performed prior to electrovaccination. To assess the possibility of performing a rapid molecular evaluation, and in order to validate these spontaneous canine tumors as models for human studies, we investigated CSPG4 gene expression by RT-qPCR in CMM, OSA, and canine mammary tumors (CMT). Total RNA was extracted from RNAlater-stored tissue samples (CMM n=16; OSA n=13; CMT n=6; five paired normal tissues for CMM, five paired normal tissues for OSA and one paired normal tissue for CMT), reverse-transcribed and then analyzed by duplex RT-qPCR using two different TaqMan assays for the target gene CSPG4 and the internal reference gene (RG) Ribosomal Protein S19 (RPS19). RPS19 was selected from a panel of 9 candidate RGs according to NormFinder analysis, following the protocol already described [2]. Relative expression was analyzed by CFX Maestro™ Software. Student's t-test and ANOVA were performed (significance set at P<0.05). Results showed that gene expression of CSPG4 in OSA tissues is significantly increased, by 3- to 4-fold, when compared to controls. In CMT, gene expression of the target was increased by 1.5- to 19.9-fold. In melanoma, although an increasing trend was observed, no significant differences between the two groups were highlighted. Immunohistochemistry analysis of the two cancer types showed that the expression of CSPG4 within CMM is concentrated in islands of cells, whereas in OSA the distribution of positive cells is homogeneous. This could explain the differences in the gene expression results. CSPG4 immunohistochemistry evaluation in mammary carcinoma is in progress. The evidence of CSPG4 expression in different types of canine tumors opens the way to extending CSPG4-targeted immunotherapy to CMM, OSA, and CMT, and may help translate this strategy to human oncology.
Keywords: canine melanoma, canine mammary carcinomas, canine osteosarcoma, CSPG4, gene expression, immunotherapy
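The abstract reports fold changes of CSPG4 relative to the RPS19 reference gene; a common way to compute such values is the Livak 2^-ΔΔCq method, sketched below. This is an assumption about the calculation, since the abstract only states that CFX Maestro was used, and the Cq values are hypothetical:

```python
def fold_change(cq_target_tumor, cq_ref_tumor, cq_target_normal, cq_ref_normal):
    """Relative expression by the Livak (2^-ddCq) method.

    Each argument is a quantification cycle (Cq): the target gene (CSPG4)
    and the reference gene (RPS19), in tumor and paired normal tissue.
    Assumes ~100% amplification efficiency for both assays.
    """
    d_cq_tumor = cq_target_tumor - cq_ref_tumor
    d_cq_normal = cq_target_normal - cq_ref_normal
    return 2.0 ** -(d_cq_tumor - d_cq_normal)

# Hypothetical Cq values: the target amplifies 2 cycles earlier in tumor
# tissue relative to the reference, i.e. a 4-fold up-regulation.
print(fold_change(24.0, 20.0, 26.0, 20.0))  # 4.0
```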
Procedia PDF Downloads 174
1172 Didacticization of Code Switching as a Tool for Bilingual Education in Mali
Authors: Kadidiatou Toure
Abstract:
Mali started experimenting with teaching the national languages at school through the convergent pedagogy in 1987. The approach became widespread in 1994, with eleven of the thirteen national languages used at primary school. The aim was to improve the Malian educational system, because the use of French as the only medium of instruction was considered a contributing factor to the significant number of student dropouts and the high rate of repetition. The convergent pedagogy highlights the knowledge acquired by children at home, their vision of the world and especially the knowledge they have of their mother tongue. That pedagogy requires the use of a specific medium only during classroom practices, and teachers have been trained in this sense. The specific medium depends on the learning content, which sometimes is French and other times the national language. Research has shown that bilingual learners do not only use the required medium in their learning activities but also code switch; it is part of their learning processes. Currently, many scholars agree on the importance of CS in bilingual classes, and teachers have been told about the necessity of integrating it into their classroom practices. One of the challenges of the Malian bilingual education curriculum is the question of 'effective language management'. Theoretically, depending on the classroom, an average share has been established for each of the languages involved. Following that, teachers make use of CS differently: sometimes it favors the learners; other times it contributes to the development of some linguistic weaknesses. The present research tries to fill that gap through a tentative model of didactization of CS, which simply means the practical management of the languages involved in bilingual classrooms: knowing how to use CS for effective learning. Moreover, the didactization of CS tends to sensitize teachers to the functional role of CS so that they may overcome their own weaknesses. The overall goal of this research is to make code switching a real tool for bilingual education. The specific objectives are: to identify the types of CS used during classroom activities; to present the functional role of CS for the teachers as well as the pupils; and to develop a tentative model of code-switching that will help teachers in transitional classes of bilingual schools to recognize the appropriate moment for making use of code switching in their classrooms. The methodology adopted is qualitative. The study is based on recorded videos of third-year primary school teachers during their classroom activities, and on interviews with the teachers in order to confirm the functional role of CS in bilingual classes. The theoretical framework adopted is the typology of CS proposed by Poplack (1980), used to identify the types of CS. The study reveals that teachers need to be trained on the types of CS, the different functions they assume, and the consequences of inappropriate use of language alternation.
Keywords: bilingual curriculum, code switching, didactization, national languages
Procedia PDF Downloads 71
1171 Prediction of Time to Crack Reinforced Concrete by Chloride Induced Corrosion
Authors: Anuruddha Jayasuriya, Thanakorn Pheeraphan
Abstract:
In this paper, different mathematical models that can be used as prediction tools to assess the time to crack reinforced concrete (RC) due to corrosion are reviewed. This investigation leads to an experimental study to validate a selected prediction model. Most of these mathematical models depend upon the mechanical behaviors, chemical behaviors, electrochemical behaviors or geometric aspects of the RC members during a corrosion process. The experimental program is designed to verify the accuracy of a mathematical model selected through a rigorous literature study. The program covers both one-dimensional chloride diffusion, using square RC slab elements of 500 mm by 500 mm, and two-dimensional chloride diffusion, using square RC column elements of 225 mm by 225 mm by 500 mm. Each set consists of three water-to-cement ratios (w/c) of 0.4, 0.5 and 0.6, and two cover depths of 25 mm and 50 mm. 12 mm bars are used for the column elements and 16 mm bars for the slab elements. All samples are subjected to accelerated chloride corrosion in a bath of 5% (w/w) sodium chloride (NaCl) solution. Based on a pre-screening of different models, the selected mathematical model includes mechanical properties, chemical and electrochemical properties, the nature of corrosion (whether accelerated or natural), and the amount of porous area that rust products can fill before exerting expansive pressure on the surrounding concrete. The experimental results show that the selected model agreed with the experimental output within ±20% for one-dimensional and ±10% for two-dimensional chloride diffusion. Half-cell potential readings are also used to assess the corrosion probability, and the experimental results show that the mass loss is proportional to the negative half-cell potential readings obtained. Additionally, a statistical analysis is carried out to determine the most influential factor affecting the time to corrode the reinforcement in the concrete due to chloride diffusion. The factors considered for this analysis are w/c, bar diameter, and cover depth. The analysis, performed with Minitab statistical software, shows that cover depth has a more significant effect on the time to crack the concrete from chloride-induced corrosion than the other factors considered. Thus, time predictions can be made with the selected mathematical model, as it covers a wide range of factors affecting the corrosion process, and it can be used to assess in advance the durability of RC structures that are vulnerable to chloride exposure. It is further concluded that cover thickness plays a vital role in durability with respect to chloride diffusion.
Keywords: accelerated corrosion, chloride diffusion, corrosion cracks, passivation layer, reinforcement corrosion
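The specific model validated in the study is not named in the abstract; as background, the corrosion initiation stage is commonly estimated from the error-function solution of Fick's second law, C(x, t) = C_s (1 - erf(x / (2 sqrt(D t)))), solved for t at the cover depth. The sketch below follows that textbook approach; the diffusion coefficient, surface content and critical content are illustrative assumptions:

```python
from scipy.special import erfinv

def initiation_time_years(cover_mm, d_mm2_per_year, c_surface, c_critical):
    """Time for chloride content at the rebar to reach the critical value.

    Solves C(cover, t) = c_critical with C(x, t) from Fick's second law,
    giving t = cover^2 / (4 * D * erfinv(1 - c_critical/c_surface)^2).
    """
    z = erfinv(1.0 - c_critical / c_surface)
    return (cover_mm / (2.0 * z)) ** 2 / d_mm2_per_year

# Illustrative values: D = 30 mm^2/year, C_s = 0.5%, C_cr = 0.05% by mass
# of binder; initiation time roughly quadruples when the cover doubles.
for cover in (25.0, 50.0):
    print(cover, round(initiation_time_years(cover, 30.0, 0.5, 0.05), 1))
```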
Procedia PDF Downloads 218
1170 Through Additive Manufacturing: A New Perspective for the Mass Production of Made in Italy Products
Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola
Abstract:
The recent evolutions in innovation processes and in the intrinsic tendencies of the product development process lead to new considerations on the design flow. The instability and complexity that describe contemporary life define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of Additive Manufacturing, but also of IoT and AI technologies, continuously puts us in front of new paradigms regarding design as a social activity. From the point of view of application, these technologies raise a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of a provisional set of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrary design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new man-algorithm relationship. This contribution investigates the man-algorithm relationship starting from the state of the art of the Made in Italy model; the best-known fields of application are described, followed by a focus on specific cases in which the mutual relationship between man and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could absorb many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values. The trajectory described therefore becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact carries a signature that defines all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The envisaged result indicates a new vision of digital technologies, no longer understood only as custodians of vast quantities of information, but also as a valid integrated tool in close relationship with design culture.
Keywords: decision making, design heuristics, product design, product design process, design paradigms
Procedia PDF Downloads 119
1169 Extrudable Foamed Concrete: General Benefits in Prefabrication and Comparison in Terms of Fresh Properties and Compressive Strength with Classic Foamed Concrete
Authors: D. Falliano, G. Ricciardi, E. Gugliandolo
Abstract:
Foamed concrete belongs to the category of lightweight concrete. It is characterized by a density generally ranging from 200 to 2000 kg/m³ and typically comprises cement, water, preformed foam, fine sand and possibly fine particles such as fly ash or silica fume. The foam component mixed with the cement paste gives rise to a system of air voids in the cementitious matrix. The peculiar characteristics of foamed concrete elements can be summarized in the following aspects: 1) lightness, which allows reducing the dimensions of the resisting frame structure and is advantageous in refurbishment or seismic retrofitting in seismically vulnerable areas; 2) thermal insulating properties, especially in the case of low densities; 3) good resistance against fire compared to ordinary concrete; 4) improved workability; 5) cost-effectiveness due to the use of rather simple constituents that are easily available locally. Classic foamed concrete cannot be extruded, as it lacks dimensional stability in the green state, and this severely limits the possibility of industrializing it through a simple and cost-effective process characterized by flexibility and high production capacity. In fact, the viscosity-enhancing agents (VEA) used to extrude traditional concrete cause the air bubbles in foamed concrete to collapse, so that it is impossible to extrude a lightweight product. These requirements have suggested the study of a particular additive that modifies the rheology of the fresh foamed concrete paste by increasing cohesion and viscosity and, at the same time, stabilizes the bubbles in the cementitious matrix, in order to allow dimensional stability in the green state and, consequently, the extrusion of a lightweight product. There are plans to submit the additive’s formulation for a patent. In addition to the general benefits of the extrusion process, extrudable foamed concrete allows other limits to be exceeded: the elimination of formworks, and an expanded application spectrum owing to the possibility of extrusion at densities between 200 and 2000 kg/m³, which allows the prefabrication of both structural and non-structural construction elements. Besides, this contribution presents the significant differences between extrudable and classic foamed concrete in terms of fresh properties, namely slump. Plastic air content, plastic density, hardened density and compressive strength have also been evaluated. The outcomes show that there are no substantial differences between the compressive strengths of extrudable and classic foamed concrete.
Keywords: compressive strength, extrusion, foamed concrete, fresh properties, plastic air content, slump
Procedia PDF Downloads 174
1168 The Sea Striker: The Relevance of Small Assets Using an Integrated Conception with Operational Performance Computations
Authors: Gaëtan Calvar, Christophe Bouvier, Alexis Blasselle
Abstract:
This paper presents the Sea Striker, a compact hydrofoil designed to address some of the issues raised by the recent evolution of naval missions, threats and operation theatres in modern warfare. Able to perform a wide range of operations, the Sea Striker is a 40-meter stealth surface combatant equipped with a gas turbine and aft and forward foils to reach high speeds. The Sea Striker's stealthiness is enabled by the combination of a composite structure, exterior design, and the advanced integration of sensors. The ship is fitted with a powerful and adaptable combat system, ensuring a versatile and efficient response to modern threats. Lightly manned with a core crew of 10, the hydrofoil is highly automated and can be remotely piloted for special forces operations or transit. Such a ship is not new: it has been used in the past by different navies, for example by the US Navy with the USS Pegasus. Nevertheless, recent evolutions in science and technology on the one hand, and the emergence of new missions, threats and operation theatres on the other, put forward its concept as an answer to today's operational challenges. Indeed, even if multiple opinions and analyses can be given regarding modern warfare and naval surface operations, general observations and tendencies can be drawn, such as the major increase in sensor and weapon types, ranges and, more generally, capabilities; the emergence of new versatile and evolving threats and enemies, such as asymmetric groups, drone swarms or hypersonic missiles; and the growing number of operation theatres located in coastal and shallow waters. This research comprised a complete study of the ship, followed by several operational performance computations, in order to justify the relevance of using ships like the Sea Striker in naval surface operations. For the selected scenarios, the conception process enabled the performance, namely a 'Measure of Efficiency' in the NATO framework, to be assessed for two different kinds of models: a centralized, classic model using large and powerful ships, and a distributed model relying on several Sea Strikers. After this stage, a comparative analysis of the two models was performed. Lethal, agile, stealthy, compact and fitted with a complete set of sensors, the Sea Striker is a new major player in modern warfare and a very attractive option between the conventional naval unit and the combat helicopter, enabling high operational performance at a reduced cost.
Keywords: surface combatant, compact, hydrofoil, stealth, velocity, lethal
Procedia PDF Downloads 117
1167 Design, Simulation and Fabrication of Electro-Magnetic Pulse Welding Coil and Initial Experimentation
Authors: Bharatkumar Doshi
Abstract:
Electro-Magnetic Pulse Welding (EMPW) is a solid-state welding process carried out at almost room temperature, in which joining is enabled by high-impact-velocity deformation. In this process, the energy stored in a high-voltage capacitor is discharged into an EM coil, resulting in a damped sinusoidal current with an amplitude of several hundred kiloamperes. This generates transient magnetic fields of a few tens of tesla near the coil. As the conductive part (tube) is positioned in this area, an opposing eddy current is induced in the part. Consequently, high Lorentz forces act on the part, accelerating it away from the coil. In the case of a tube, it is compressed at forming velocities of more than 300 meters per second. After passing the joining gap, it collides with the second metallic joining rod, leading to the formation of a jet under appropriate collision conditions. Due to the prevailing high pressure, metallurgical bonding takes place. A characteristic feature is the wavy interface resulting from the heavy plastic deformation. In this process, the formation of intermetallic compounds, which might deteriorate the weld strength, can be avoided, even for metals with dissimilar thermal properties. In order to optimize the process, parameters like current, voltage, inductance, coil dimensions, workpiece dimensions, air gap, impact velocity, effective plastic strain, and the shear stress acting in the welding/impact zone are critical and must be established. These process parameters can be determined by simulation using the Finite Element Method (FEM), in which a coupled electromagnetic-structural field analysis is performed. The feasibility of welding can thus be investigated by varying the parameters in the simulation using COMSOL. The simulation results shall be applied in performing preliminary experiments on welding different alloy steel tubes and/or alloy steel to other materials. A single-turn coil (SS 304) with a field shaper (copper) has been designed and manufactured. The preliminary experiments were performed using the existing EMPW facility available at the Institute for Plasma Research, Gandhinagar, India. The experiments were performed at 22 kV charged into a 64 µF capacitor bank, with the energy discharged into the single-turn EM coil. Welding of axisymmetric components such as an aluminum tube and rod has been proven experimentally using the EMPW technique. In this paper, the EM coil design and manufacturing, the electromagnetic-structural FEM simulation of magnetic pulse welding, and the preliminary experimental results are reported.
Keywords: COMSOL, EMPW, FEM, Lorentz force
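For orientation, the stored energy follows directly from the bank parameters given above (22 kV, 64 µF), while the discharge current is the classic underdamped RLC response; the circuit inductance and resistance in this sketch are assumed values, not figures from the paper:

```python
import math

V0 = 22e3    # charging voltage (V), from the abstract
C = 64e-6    # capacitor bank (F), from the abstract
L = 100e-9   # total circuit inductance (H) -- assumed
R = 5e-3     # total circuit resistance (ohm) -- assumed

energy = 0.5 * C * V0**2                              # ~15.5 kJ stored
omega = math.sqrt(1.0 / (L * C) - (R / (2 * L))**2)   # damped angular freq.

def current(t):
    """Underdamped RLC discharge: i(t) = V0/(omega L) exp(-R t / 2L) sin(omega t)."""
    return V0 / (omega * L) * math.exp(-R * t / (2 * L)) * math.sin(omega * t)

print(f"stored energy: {energy / 1e3:.1f} kJ")
print(f"ringing frequency: {omega / (2 * math.pi) / 1e3:.0f} kHz")
# Peak of the first half-cycle lands in the several-hundred-kA range,
# consistent with the amplitudes quoted in the abstract.
print(f"first peak: {current(math.pi / (2 * omega)) / 1e3:.0f} kA")
```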
Procedia PDF Downloads 184
1166 Sequential and Combinatorial Pre-Treatment Strategy of Lignocellulose for the Enhanced Enzymatic Hydrolysis of Spent Coffee Waste
Authors: Rajeev Ravindran, Amit K. Jaiswal
Abstract:
Waste from the food-processing industry is produced in large amounts and contains high levels of lignocellulose. Its continuous accumulation throughout the year in large quantities creates a major environmental problem worldwide. The chemical composition of these wastes (up to 75% polysaccharide) makes them an inexpensive raw material for the production of value-added products such as biofuels, bio-solvents, nanocrystalline cellulose and enzymes. In order to use lignocellulose as a raw material for microbial fermentation, the substrate is subjected to enzymatic treatment, which leads to the release of reducing sugars such as glucose and xylose. However, the inherent properties of lignocellulose, such as the presence of lignin, pectin, acetyl groups and crystalline cellulose, contribute to recalcitrance. This leads to poor sugar yields upon enzymatic hydrolysis of lignocellulose. A pre-treatment method is generally applied before enzymatic treatment; it essentially removes the recalcitrant components of biomass through structural breakdown. The present study was carried out to find the best pre-treatment method for the maximum liberation of reducing sugars from spent coffee waste (SPW). SPW was subjected to a range of physical, chemical and physico-chemical pre-treatments, followed by a sequential, combinatorial pre-treatment strategy combining two or more pre-treatments to attain maximum sugar yield. All pre-treated samples were analysed for total reducing sugars, followed by identification and quantification of individual sugars by HPLC coupled with an RI detector. Besides, the generation of inhibitory compounds such as furfural and hydroxymethylfurfural (HMF), which can hinder microbial growth and enzyme activity, was also monitored. Results showed that ultrasound treatment (31.06 mg/L) was the best pre-treatment method based on total reducing sugar content, followed by dilute acid hydrolysis (10.03 mg/L), while galactose was found to be the major monosaccharide in the pre-treated SPW. Finally, the results obtained from the study were used to design a sequential lignocellulose pre-treatment protocol to decrease the formation of enzyme inhibitors and increase the sugar yield on enzymatic hydrolysis with a cellulase-hemicellulase consortium. The sequential, combinatorial treatment was found to be better in terms of total reducing sugar yield and low formation of inhibitory compounds, which could be because this mode of pre-treatment combines several mild treatment methods rather than a single harsh one. It eliminates the need for a detoxification step and has potential application in the valorisation of lignocellulosic food waste.
Keywords: lignocellulose, enzymatic hydrolysis, pre-treatment, ultrasound
Procedia PDF Downloads 366
1165 No-Par Shares Working in European LLCs
Authors: Agnieszka P. Regiec
Abstract:
Capital companies are based on monetary capital. In the traditional model, the capital is the sum of the nominal values of all shares issued. For some years, limited liability company (LLC) regulations in European countries have been leaning towards liberalization of the capital structure in order to provide a higher degree of autonomy in intra-corporate governance. The reforms were based primarily on the legal system of the USA, where the tradition of no-par shares is well established; the American legal system is therefore chosen as the point of reference. The regulations of Germany, Great Britain, France, the Netherlands, Finland, Poland and the USA are taken into consideration. The analysis of share capital is important for the development of legal science not only because the capital structure of the corporation has a significant impact on shareholders' rights, but also because it reflects on the relationship between the creditors of the company and the company itself. A multi-level comparative approach to the problem allows a wide range of possible outcomes stemming from the novelization to be presented. The dogmatic method was applied: the analysis was based on statutes, secondary sources and judicial awards, and both the substantive and the procedural aspects of the capital structure were considered. In Germany, as a result of the regulatory competition typical for the EU, the structure of LLCs was reshaped. A new LLC, the Unternehmergesellschaft, which does not require a minimum share capital, was introduced, and the minimum share capital for the Gesellschaft mit beschränkter Haftung was lowered from 25,000 to 10,000 euro. In France, the capital structure of corporations was also altered. In 2003, the minimum share capital of the société à responsabilité limitée (S.A.R.L.) was repealed. In 2009, the minimum share capital of the société par actions simplifiée was also abolished: the company still has to indicate a share capital, but the legislator imposes no minimum value on it. In the Netherlands, the reform of the Besloten Vennootschap met beperkte aansprakelijkheid (B.V.) was planned with the following change: repeal of the minimum share capital as the answer to the need for a higher degree of autonomy for shareholders; it preserved, however, shares with nominal value. In Finland, the novelization of the yksityinen osakeyhtiö took place in 2006, and as a result no-par shares were introduced; despite allowing shares without face value, the statute still requires a minimum share capital of 2,500 euro. In Poland, a proposal for restructuring the capital structure of the LLC has been introduced, providing, among other things, for the reduction of the minimum capital to 1 PLN or the complete abolition of the minimum share capital, allowing no-par shares to be issued. In conclusion: American solutions, in particular the balance sheet test and the solvency test, provide better protection for creditors; European no-par shares are not the same as American ones; and the existence of share capital in Poland remains crucial.
Keywords: balance sheet test, limited liability company, nominal value of shares, no-par shares, share capital, solvency test
Procedia PDF Downloads 183
1164 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution
Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino
Abstract:
This paper presents a design methodology in which stakeholders are assisted in exploring a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder's perceived utility. The outcome is fewer design iterations needed for design convergence, together with higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken at this stage imply delayed costs. Hence, a clear definition of the problem under analysis is necessary, especially in the initial definition, and this can be obtained through a robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals who take decisions affecting one another; effective coordination among these decision-makers is critical, and finding a mutually agreed solution reduces the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the process of raising the mission's concept maturity level. This speed-up is obtained thanks to a guided exploration of the negotiation space, which involves autonomous exploration and optimization of trade opportunities among stakeholders via Artificial Intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, infused with game theory and multi-attribute utility theory. In particular, game theory models the negotiation process so as to reach equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process in efficiently and rapidly searching for the Pareto equilibria among stakeholders. Finally, the concept of utility constitutes the mechanism to bridge the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs while guaranteeing the effectiveness of the selected mission concept thanks to its robustness to change. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization
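A minimal sketch of the Pareto screening step implied above, assuming each design alternative has already been scored with one utility value per stakeholder; the utility elicitation, the game-theoretic step and the evolutionary search are outside this fragment:

```python
def pareto_front(alternatives):
    """Return the non-dominated alternatives.

    alternatives: list of utility tuples, one entry per stakeholder,
    where higher is better for every stakeholder.
    """
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    return [a for a in alternatives
            if not any(dominates(b, a) for b in alternatives if b is not a)]

# Toy example with three stakeholders' utilities per design alternative;
# the third design is dominated by the second and is screened out.
designs = [(0.9, 0.4, 0.6), (0.7, 0.7, 0.7), (0.6, 0.3, 0.5)]
print(pareto_front(designs))
```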
Procedia PDF Downloads 136
1163 Preschoolers’ Selective Trust in Moral Promises
Authors: Yuanxia Zheng, Min Zhong, Cong Xin, Guoxiong Liu, Liqi Zhu
Abstract:
Trust is a critical foundation of social interaction and development, playing a significant role in the physical and mental well-being of children, as well as in their social participation. Previous research has demonstrated that young children do not blindly trust others but make selective trust judgments based on available information. The characteristics of speakers can influence children's trust judgments. According to Mayer et al.'s model of trust, these characteristics, including ability, benevolence, and integrity, can influence children's trust judgments. While previous research has focused primarily on the effects of ability and benevolence, relatively little attention has been paid to integrity, which refers to individuals' adherence to promises, fairness, and justice. This study focuses specifically on how keeping or breaking promises affects young children's trust judgments. The paradigm of selective trust was employed in two experiments. A sample size of 100 children was required for an effect size of w = 0.30, α = 0.05, 1-β = 0.85, using G*Power 3.1. The study employed a 2×2 within-subjects design to investigate the effects of the moral valence of promises (moral vs. immoral promises) and the fulfilment of promises (kept vs. broken promises) on children's trust judgments (divided into declarative and promising contexts). Experiment 1 adapted a binary-choice paradigm, presenting 118 preschoolers (62 girls, mean age = 4.99 years, SD = 0.78) with four conflict scenarios involving the keeping or breaking of moral/immoral promises, in order to investigate children's trust judgments. Experiment 2 utilized a single-choice paradigm, in which 112 preschoolers (57 girls, mean age = 4.94 years, SD = 0.80) were presented with four stories to examine their level of trust. The results of Experiment 1 showed that preschoolers selectively trusted both promisors who kept moral promises and those who broke immoral promises, as well as their assertions and new promises. Additionally, the 5.5- to 6.5-year-old children were more likely than the 3.5- to 4.5-year-old children to trust both promisors who kept moral promises and those who broke immoral promises. Moreover, preschoolers were more likely to make accurate trust judgments towards promisors who kept moral promises than towards those who broke immoral promises. The results of Experiment 2 showed significant differences in preschoolers' degree of trust: kept moral promise > broke immoral promise > broke moral promise ≈ kept immoral promise. This study is the first to investigate the development of trust judgment in moral promises among preschoolers aged 3.5-6.5. The results show that preschoolers can consider both the valence and the fulfilment of promises when making trust judgments. Furthermore, as preschoolers mature, they become more inclined to trust promisors who keep moral promises and those who break immoral promises. Additionally, the study reveals that preschoolers have the highest level of trust in promisors who kept moral promises, followed by those who broke immoral promises; promisors who broke moral promises and those who kept immoral promises are trusted the least. These findings contribute valuable insights to our understanding of moral promises and trust judgment.
Keywords: promise, trust, moral judgement, preschoolers
Procedia PDF Downloads 54
1162 The Social Aspects of Code-Switching in Online Interaction: The Case of Saudi Bilinguals
Authors: Shirin Alabdulqader
Abstract:
This research aims to investigate code-switching (CS) between English and Arabic and the CS practices of Saudi online users through a translanguaging (TL) lens, for a more inclusive view of the nature of the data in the study. It employs Digitally Mediated Communication (DMC), specifically the WhatsApp and Twitter platforms, in order to understand how users employ online resources to communicate with others on a daily basis. The project looks beyond language and considers the multimodal affordances (visual and audio means) that interlocutors utilise in their online communicative practices to shape their online social existence. This exploratory study is based on a data-driven interpretivist epistemology, as it aims to understand how meaning (reality) is created by individuals within different contexts. The project used a mixed-method approach combining qualitative and quantitative strands: in the former, data were collected from online chats and interview responses, while in the latter a questionnaire was employed to understand the frequency of, and relations between, the participants' linguistic and non-linguistic practices and their social behaviours. The participants were eight bilingual Saudi nationals (both men and women, aged between 20 and 50 years old) who interacted with others online. These participants provided their online interactions, participated in an interview and responded to a questionnaire. The study data were gathered from 194 WhatsApp chats and 122 tweets. These data were analysed and interpreted at three levels: conversational turn-taking and CS; the linguistic description of the data; and CS and persona. The project contributes to the emerging field of systematically analysing online Arabic data, and to the fields of multimodality and bilingual sociolinguistics. The findings are reported for each of the three levels. For conversational turn-taking, the CS analysis revealed that CS was used to accomplish negotiation and develop meaning in the conversation. With regard to the linguistic practices in the CS data, the majority of the code-switched words were content morphemes. The third level of data interpretation concerns CS and its relationship with identity; two types of identity were indexed: absolute identity and contextual identity. This study contributes to the DMC literature and bridges some of the existing gaps. The findings support the notion of TL that multiliteracy is one's ability to decode multimodal communication and that this multimodality contributes to meaning. Whether the online affordances are used by monolinguals or multilinguals, and whether they are perceived by specific generations or by any online multiliterate, the study provides the linguistic features of CS utilised by Saudi bilinguals and determines the relationship between these features and the contexts in which they appear.
Keywords: social media, code-switching, translanguaging, online interaction, saudi bilinguals
Procedia PDF Downloads 131
1161 Argos System: Improvements and Future of the Constellation
Authors: Sophie Baudel, Aline Duplaa, Jean Muller, Stephan Lauriol, Yann Bernard
Abstract:
Argos is the main satellite telemetry system used by the wildlife research community since its creation in 1978, for animal tracking and scientific data collection all around the world, to analyse and understand animal migrations and behaviour. Marine mammal biology is one of the major disciplines which has benefited from Argos telemetry, and conversely, the marine mammal biologists' community has contributed a lot to the growth and development of Argos use cases. The Argos constellation, with 6 satellites in orbit in 2017 (Argos 2 payload on NOAA 15 and NOAA 18; Argos 3 payload on NOAA 19, SARAL, METOP A and METOP B), is being extended in the following years with an Argos 3 payload on METOP C (launch in October 2018), and Argos 4 payloads on Oceansat 3 (launch in 2019), CDARS in December 2021 (to be confirmed), METOP SG B1 in December 2022, and METOP-SG-B2 in 2029. Argos 4 will allow more frequency bands (600 kHz for Argos4NG, instead of 110 kHz for Argos 3), a new modulation dedicated to animal (sea turtle) tracking allowing very low-power transmitters (50 to 100 mW) with very low data rates (124 bps), and enhancement of high data rates (1200-4800 bps) and downlink performance, all contributing to enhancing the system capacity (50,000 active beacons per month instead of 20,000 today). In parallel with this 'institutional Argos' constellation, in the context of a miniaturization trend in the space industry aimed at reducing costs and multiplying satellites to serve more and more societal needs, the French Space Agency CNES, which designs the Argos payloads, is innovating and launching the Argos ANGELS project (Argos NEO Generic Economic Light Satellites). ANGELS will lead to a nanosatellite prototype with an Argos NEO instrument (30 cm x 30 cm x 20 cm) that will be launched in 2019. In the meantime, the design of the renewal of the Argos constellation, called Argos For Next Generations (Argos4NG), is on track and will be operational in 2022. Based on Argos 4 and benefitting from the feedback of the ANGELS project, this constellation will allow a revisit time of fewer than 20 minutes on average between two satellite passes and will also bring more frequency bands to improve the overall capacity of the system. The presentation will give an overview of the Argos system, present and future, and the new capacities coming with it. On top of that, use cases of two Argos hardware modules will be presented: the goniometer pathfinder, allowing recovery of Argos beacons at sea or on the ground within a 100 km radius horizon-free circle around the beacon location, and the new Argos 4 chipset called 'Artic', already available and tested by several manufacturers.
Keywords: Argos satellite telemetry, marine protected areas, oceanography, maritime services
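To make the quoted data rates tangible, here is a back-of-envelope sketch (not from the paper) of message airtime at the Argos 4 rates; the 248-bit payload is an invented illustrative value.

```python
# Time on air for one beacon message at the data rates cited above.
# The payload size is hypothetical; real Argos message formats are not modeled.
def airtime_seconds(payload_bits: int, rate_bps: float) -> float:
    return payload_bits / rate_bps

payload = 248  # bits (assumed)
for rate in (124, 1200, 4800):  # bps, rates quoted in the abstract
    print(f"{rate:>4} bps -> {airtime_seconds(payload, rate):6.3f} s on air")
# Very low rates suit very low-power transmitters (50-100 mW) at the cost of airtime.
```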
Procedia PDF Downloads 181
1160 Blood Microbiome in Different Metabolic Types of Obesity
Authors: Irina M. Kolesnikova, Andrey M. Gaponov, Sergey A. Roumiantsev, Tatiana V. Grigoryeva, Dilyara R. Khusnutdinova, Dilyara R. Kamaldinova, Alexander V. Shestopalov
Abstract:
Background. Obese patients have unequal risks of metabolic disorders. It is accepted to distinguish between metabolically healthy obesity (MHO) and metabolically unhealthy obesity (MUHO). MUHO patients have a high risk of metabolic disorders, insulin resistance, and diabetes mellitus. Among other things, the gut microbiota also contributes to the development of metabolic disorders in obesity: obesity is accompanied by significant changes in the gut microbial community, and, in turn, bacterial translocation from the intestine is the basis for the formation of the blood microbiome. The aim was to study the features of the blood microbiome in patients with various metabolic types of obesity. Patients, materials, methods. The study included 116 healthy donors and 101 obese patients. Depending on the metabolic type of obesity, the obese patients were divided into subgroups with MHO (n=36) and MUHO (n=53). Quantitative and qualitative assessment of the blood microbiome was based on metagenomic analysis: blood samples were used to isolate DNA and sequence the variable V3-V4 region of the 16S rRNA gene. Alpha-diversity indices (Simpson index, Shannon index, Chao1 index, phylogenetic diversity, and the number of observed operational taxonomic units) were calculated. Moreover, we compared taxa (phyla, classes, orders, and families) in terms of isolation frequency and the taxon's share in the total bacterial DNA pool between the patient groups. Results. In patients with MHO, the characteristics of the alpha-diversity of the blood microbiome were like those of healthy donors. However, MUHO was associated with an increase in all diversity indices. The main phyla of the blood microbiome were Bacteroidetes, Firmicutes, Proteobacteria, and Actinobacteria. Cyanobacteria, TM7, Thermi, Verrucomicrobia, Chloroflexi, Acidobacteria, Planctomycetes, Gemmatimonadetes, and Tenericutes were found to be less significant phyla of the blood microbiome. The phyla Acidobacteria, TM7, and Verrucomicrobia were more often isolated in blood samples of patients with MUHO compared with healthy donors. Obese patients had a decrease in some taxa (Bacilli, Caulobacteraceae, Barnesiellaceae, Rikenellaceae, Williamsiaceae). These changes appear to be related to the increased diversity of the blood microbiome observed in obesity. An increase in Lachnospiraceae, Succinivibrionaceae, Prevotellaceae, and S24-7 was noted for MUHO patients, which is apparently explained by an increase in intestinal permeability. Conclusion. The blood microbiome differs between obese patients and healthy donors at the class, order, and family levels. Moreover, the nature of the changes is determined by the metabolic type of obesity: MUHO is linked to increased diversity of the blood microbiome. This appears to be due to increased microbial translocation from the intestine and non-intestinal sources.
Keywords: blood microbiome, blood bacterial DNA, obesity, metabolically healthy obesity, metabolically unhealthy obesity
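For readers unfamiliar with the alpha-diversity indices listed above, the sketch below computes Shannon, Simpson, and Chao1 from a vector of OTU counts. The counts are synthetic, and the exact index variants used in the study are not stated; the Gini-Simpson form and the bias-corrected Chao1 estimator are assumed here.

```python
# Minimal alpha-diversity sketch for one sample's OTU count vector (synthetic).
import numpy as np

counts = np.array([120, 80, 40, 10, 5, 1, 1, 2])  # reads per OTU (synthetic)

p = counts / counts.sum()
shannon = -np.sum(p * np.log(p))                # Shannon index H'
simpson = 1.0 - np.sum(p ** 2)                  # Gini-Simpson index (assumed variant)
s_obs = np.count_nonzero(counts)                # observed OTUs
f1 = np.sum(counts == 1)                        # singletons
f2 = np.sum(counts == 2)                        # doubletons
chao1 = s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))  # bias-corrected Chao1

print(f"H' = {shannon:.3f}, 1-D = {simpson:.3f}, Chao1 = {chao1:.1f}")
```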
Procedia PDF Downloads 164
1159 Provisional Settlements and Urban Resilience: The Transformation of Refugee Camps into Cities
Authors: Hind Alshoubaki
Abstract:
The world is now confronting a widespread urban phenomenon: refugee camps, which have mostly been established in 'rushing mode', intended to afford temporary settlements that provide refugees with minimum levels of safety, security, and protection from harsh weather conditions within a very short time period. In fact, those emergency settlements are transforming into permanent ones, since time is a decisive factor: construction and a camp's age play an essential role in transforming its temporary character into a permanent one, generating deep modifications to the city's territorial structure, shaping a new identity, and creating a contentious change in the city's form and history. To achieve a better understanding of the transformation of refugee camps, this study is based on a mixed-methods approach: the qualitative approach explores different refugee camps and analyzes their transformation process in terms of population density and the changes to the city's territorial structure and urban features, while the quantitative approach employs a statistical regression analysis to predict refugees' satisfaction within the Zaatari camp and, from that, its future transformation. Evidently, refugees' perceptions of their current conditions affect their satisfaction, which plays an essential role in transforming emergency settlements into permanent cities over time. The test discusses five main themes: the access and readiness of schools, the dispersion of clinics and shopping centers, the camp infrastructure, the construction materials, and the street networks. The statistical analysis showed that Syrian refugees were not satisfied with their current conditions inside the Zaatari refugee camp and that they had started implementing changes according to their needs, desires, and aspirations, because they are conscious of the fact of their prolonged stay in this settlement. The case study analyses also showed that neglecting the fact that construction takes time leads to settlements being created with below-minimum standards that deteriorate into 'slums', which leads to increased crime rates, suicide, drug use, and disease, and deeply affects cities' urban tissues. For this reason, recognizing the 'temporary-eternal' character of those settlements is the fundamental concept for considering refugee camps from the beginning as definite, permanent cities. This is the key factor in minimizing the trauma of displacement for both refugees and the hosting countries, since providing emergency settlements within a short time period does not have to mean using temporary materials, having a provisional character, or creating 'makeshift cities'.
Keywords: refugee, refugee camp, temporary, Zaatari
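The regression step described above could look like the following minimal sketch; all variable names, scores, and coefficients are hypothetical, since the abstract does not report its model specification.

```python
# Hedged sketch: OLS regression of satisfaction on the five survey themes.
# Data are synthetic stand-ins, not the study's survey responses.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # hypothetical number of respondents
X = rng.uniform(1, 5, size=(n, 5))  # Likert-style scores for the five themes
themes = ["schools", "clinics_shops", "infrastructure", "materials", "streets"]
beta = np.array([0.4, 0.3, 0.5, 0.2, 0.3])  # assumed effects, for illustration
satisfaction = X @ beta + rng.normal(0, 0.5, n)

model = sm.OLS(satisfaction, sm.add_constant(X)).fit()
print(model.summary(xname=["const"] + themes))
```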
Procedia PDF Downloads 133
1158 Pixel Façade: An Idea for Programmable Building Skin
Authors: H. Jamili, S. Shakiba
Abstract:
Today, one of the main concerns of human beings is facing the unpleasant changes of the environment. Buildings are responsible for a significant amount of natural resource consumption and carbon emissions. In such a situation, the thought comes to mind of changing each building into a phenomenon that benefits the environment: a change whereby each building functions as an element that supports the environment, and construction, in addition to answering human needs, is encouraged the way planting a tree is, no longer seen as a threat to living beings and the planet. Prospect: Today, different ideas for developing materials that can function smartly are being realized. For instance, Programmable Materials can respond appropriately to different conditions and offer modification of shape, size, and physical properties, as well as the capacity for restoration and repair. Studies are in progress with the purpose of making these materials easily available, without the need for expensive materials or high technologies. In these cases, the physical attributes of the materials undertake the role of sensors, wires, and actuators, and the materials themselves become robots; in effect, we experience robotics without robots. In recent decades, advances in AI and technology have dramatically improved the performance of materials. These achievements combine software optimization with physical production techniques such as multi-material 3D printing. These capabilities enable us to program materials to change shape, appearance, and physical properties in order to interact with different situations. It is expected that further achievements, such as Memory Materials and Self-learning Materials, will be added to the Smart Materials family, which are affordable, available, and of use for a variety of applications and industries. From the architectural standpoint, the building skin is the focus of this research, given the considerable surface area building skins occupy in urban space. The purpose of this research is to find a way for programmable materials to be used in the building skin with the aim of an effective and positive interaction. A Pixel Façade would be one solution for programming a building skin. The Pixel Façade includes components that contain a series of attributes that help buildings meet their needs according to environmental criteria. A PIXEL combines a series of smart materials and digital controllers. It not only exploits its physical properties, such as controlling the amount of sunlight and heat, but also enhances building performance by providing a list of features depending on situational criteria. The features vary by location and function differently during the daytime and across seasons. The primary role of a PIXEL FAÇADE can be defined as filtering pollution (both inside and outside the building) and providing clean energy, as well as interacting with other PIXEL FAÇADES to estimate better reactions.
Keywords: building skin, environmental crisis, pixel facade, programmable materials, smart materials
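Purely as an illustration of the 'digital controller' idea, a PIXEL's feature selection might be sketched as below; the class, sensors, thresholds, and feature names are all invented for illustration and are not part of the proposal itself.

```python
# Toy sketch: one way a PIXEL could map sensed conditions to active features.
# All fields and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Pixel:
    lux: float     # sensed sunlight level
    temp_c: float  # sensed surface temperature
    pm25: float    # sensed particulate pollution

    def active_features(self) -> list[str]:
        features = []
        if self.lux > 50_000:
            features.append("shade / harvest solar energy")
        if self.temp_c > 30:
            features.append("increase ventilation")
        if self.pm25 > 35:
            features.append("filter air")
        return features

print(Pixel(lux=80_000, temp_c=33, pm25=40).active_features())
```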
Procedia PDF Downloads 88
1157 Personality Based Tailored Learning Paths Using Cluster Analysis Methods: Increasing Students' Satisfaction in Online Courses
Authors: Orit Baruth, Anat Cohen
Abstract:
Online courses have become common in many learning programs and various learning environments, particularly in higher education, and the social distancing enforced in response to the COVID-19 pandemic has increased the demand for them. Yet, despite the frequency of use, online learning is not free of limitations and may not suit all learners. Hence, the growth of online learning alongside learners' diversity raises the question: does online learning, as currently offered, meet the needs of each learner? Fortunately, today's technology allows the production of tailored learning platforms, namely personalization. Personality influences a learner's satisfaction and therefore has a significant impact on learning effectiveness. A better understanding of personality can lead to a greater appreciation of learning needs, as well as assist educators in ensuring that an optimal learning environment is provided. In the context of online learning and personality, research on learning design according to personality traits is lacking. This study explores the relations between personality traits (using the 'Big Five' model) and students' satisfaction with five techno-pedagogical learning solutions (TPLS): discussion groups, digital books, online assignments, surveys/polls, and media, in order to provide an online learning process to students' satisfaction. The satisfaction level and personality traits of 108 students who participated in a fully online course at a large, accredited university were measured. Cluster analysis methods (k-means) were applied to identify learners' clusters according to their personality traits, and correlation analysis was performed to examine the relations between the obtained clusters and satisfaction with the offered TPLS. Findings suggest that learners associated with the 'Neurotic' cluster showed low satisfaction with all TPLS compared to learners associated with the 'Non-neurotics' cluster. Learners associated with the 'Consciences' cluster were satisfied with all TPLS except discussion groups, and those in the 'Open-Extroverts' cluster were satisfied with assignments and media. All clusters except 'Neurotic' were highly satisfied with the online course in general. According to the findings, dividing learners into four clusters based on personality traits may help define tailored learning paths for them, combining various TPLS to increase their satisfaction. As personality comprises a set of traits, several TPLS may be offered in each learning path. For the neurotics, however, an extended selection may suit them better; alternatively, they may be offered the TPLS they dislike least. Study findings clearly indicate that personality plays a significant role in a learner's satisfaction level. Consequently, personality traits should be considered when designing personalized learning activities. The current research seeks to bridge the theoretical gap in this specific research area: establishing the assumption that different personalities need different learning solutions may contribute towards a better design of online courses, leaving no learner behind, whether they like online learning or not.
Keywords: online learning, personality traits, personalization, techno-pedagogical learning solutions
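A minimal sketch of the clustering step is given below: k-means with k = 4 on standardized Big Five scores, followed by per-cluster TPLS satisfaction means. The data are synthetic stand-ins; the study's actual instrument and scores are not reproduced here.

```python
# Hedged sketch: k-means clustering of Big Five traits + cluster-wise satisfaction.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 108  # matches the reported sample size
big5 = rng.normal(size=(n, 5))         # O, C, E, A, N scores (synthetic)
tpls = rng.uniform(1, 5, size=(n, 5))  # satisfaction with the 5 TPLS (synthetic)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(big5)
)
for k in range(4):
    print(f"cluster {k}: n={np.sum(labels == k):3d}, "
          f"mean TPLS satisfaction={tpls[labels == k].mean(axis=0).round(2)}")
```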
Procedia PDF Downloads 103
1156 Structural Analysis of Archaeoseismic Records Linked to the 5 July 408 - 410 AD Utica Strong Earthquake (NE Tunisia)
Authors: Noureddine Ben Ayed, Abdelkader Soumaya, Saïd Maouche, Ali Kadri, Mongi Gueddiche, Hayet Khayati-Ammar, Ahmed Braham
Abstract:
The archaeological monument of Utica, located in north-eastern Tunisia, was founded (8th century BC) by the Phoenicians as a port installed on the trade route connecting Phoenicia and the Straits of Gibraltar in the Mediterranean Sea. The flourishing of this city as an important settlement during the Roman period was followed by a sudden abandonment, disuse, and progressive oblivion in the first half of the fifth century AD. This decline can be attributed to the destructive earthquake of 5 July 408 - 410 AD affecting this historic city, as documented in 1906 by the seismologist Fernand De Montessus De Ballore. The magnitude of the Utica earthquake was estimated at 6.8 by the Tunisian National Institute of Meteorology (INM). In order to highlight the damage caused by this earthquake, a field survey was carried out at the Utica ruins to detect and analyse the earthquake archaeological effects (EAEs) using structural geology methods. This approach allowed us to highlight several types of structural damage, including: (1) folded mortar pavements, (2) cracks affecting the mosaic and walls of a water basin in the 'House of the Grand Oecus', (3) displaced columns, (4) block extrusion in masonry walls, (5) undulations in mosaic pavements, and (6) tilted walls. The structural analysis of these EAEs and the data measurements reveal a seismic cause for all evidence of deformation at the Utica monument. The maximum horizontal stress orientation (SHmax) inferred from the building-oriented damage in Utica shows a NNW-SSE direction under a compressive tectonic regime. For the seismogenic source of this earthquake, we propose the active E-W to NE-SW trending Utique - Ghar El Melh reverse fault, passing through the Utica monument and extending towards the Ghar El Melh Lake, as the causative tectonic structure. The active fault trace is well supported by instrumental seismicity, geophysical data (e.g., gravity, seismic profiles), and geomorphological analyses. In summary, we find that the archaeoseismic records detected at Utica are similar to those observed at many other archaeological sites affected by destructive ancient earthquakes around the world. Furthermore, the calculated orientation of the average maximum horizontal stress (SHmax) closely matches the present-day stress field, as highlighted by earthquake focal mechanisms in this region.
Keywords: Tunisia, Utica, seismogenic fault, archaeological earthquake effects
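Averaging SHmax orientations from several damaged structures requires circular statistics for axial data. The sketch below doubles the azimuths before taking the circular mean, a standard treatment for 0-180° orientation data; the sample azimuths are invented, not the study's measurements.

```python
# Hedged sketch: circular mean of axial SHmax orientations (hypothetical picks).
import numpy as np

azimuths_deg = np.array([150, 160, 155, 165, 148])  # hypothetical SHmax picks

# Double the angles so that 0 and 180 deg are treated as the same axis.
doubled = np.deg2rad(2 * azimuths_deg)
mean_axis = np.rad2deg(np.arctan2(np.sin(doubled).mean(),
                                  np.cos(doubled).mean())) / 2 % 180
print(f"mean SHmax orientation: {mean_axis:.1f} deg (~155 deg is NNW-SSE)")
```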
Procedia PDF Downloads 45
1155 Efficacy of Pooled Sera in Comparison with Commercially Acquired Quality Control Sample for Internal Quality Control at the Nkwen District Hospital Laboratory
Authors: Diom Loreen Ndum, Omarine Njimanted
Abstract:
With increasing automation in clinical laboratories, the requirements for quality control materials have greatly increased in order to monitor daily performance. The constant use of commercial control material is not economically feasible for many developing countries because of non-availability or the high cost of the materials; therefore, the preparation and use of an in-house quality control serum is a very cost-effective measure with respect to laboratory needs. The objective of this study was to determine the efficacy of in-house prepared pooled sera with respect to a commercially acquired control sample for routine internal quality control at the Nkwen District Hospital Laboratory. This was an analytical study. Serum was taken from leftover serum samples of 5 healthy adult blood donors at the blood bank of Nkwen District Hospital, which had been screened negative for human immunodeficiency virus (HIV), hepatitis C virus (HCV), and hepatitis B surface antigen (HBsAg), and pooled together in a sterile container. From the pooled sera, sixty aliquots of 150 µL each were prepared. Forty aliquots of 150 µL each of the commercially acquired sample were prepared after reconstitution, and all aliquots were stored in a deep freezer at -20°C until required for analysis. The study ran from 9th June to 12th August 2022. Every day, alongside the commercial control sample, one aliquot of pooled sera was removed from the deep freezer and allowed to thaw before being analyzed for the following parameters: blood urea, serum creatinine, aspartate aminotransferase (AST), alanine aminotransferase (ALT), potassium, and sodium. After obtaining the first 20 values for each parameter of the pooled sera, the mean, standard deviation, and coefficient of variation were calculated, and a Levey-Jennings (L-J) chart was established; the mean and standard deviation for the commercially acquired control sample were provided by the manufacturer. The following results were observed: the pooled sera had a smaller standard deviation for creatinine, urea, and AST than the commercially acquired control sample. There was a statistically significant difference (p<0.05) between the mean values of creatinine, urea, and AST for the in-house quality control when compared with the commercial control. The coefficients of variation for the parameters of both the commercial and in-house control samples were less than 30%, which is an acceptable difference. The L-J charts revealed shifts and trends (warning signs), so troubleshooting and corrective measures were taken. In conclusion, an in-house quality control sample prepared from pooled serum can be a good control sample for routine internal quality control.
Keywords: internal quality control, Levey-Jennings chart, pooled sera, shifts, trends, Westgard rules
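The L-J limits and the basic Westgard single-value rules referred to above can be derived from the first 20 results, as in this minimal sketch; the urea values are synthetic, since the raw series is not reported in the abstract.

```python
# Hedged sketch: establish L-J limits from 20 baseline results, then flag
# 1-2s (warning) and 1-3s (reject) violations for subsequent daily results.
import numpy as np

baseline = np.random.default_rng(2).normal(5.0, 0.3, 20)  # first 20 urea results, mmol/L (synthetic)
mean, sd = baseline.mean(), baseline.std(ddof=1)
cv = 100 * sd / mean
print(f"mean={mean:.2f}, SD={sd:.2f}, CV={cv:.1f}%")

def westgard_flags(x: float) -> list[str]:
    """Flag single-value Westgard rule violations against the established limits."""
    if abs(x - mean) > 3 * sd:
        return ["1-3s: reject run"]
    if abs(x - mean) > 2 * sd:
        return ["1-2s: warning"]
    return []

for value in (5.1, 5.8, 6.2):  # subsequent daily results (synthetic)
    print(value, westgard_flags(value) or ["in control"])
```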
Procedia PDF Downloads 77
1154 The Reliability and Shape of the Force-Power-Velocity Relationship of Strength-Trained Males Using an Instrumented Leg Press Machine
Authors: Mark Ashton Newman, Richard Blagrove, Jonathan Folland
Abstract:
The force-velocity profile of an individual has been shown to influence success in ballistic movements, independent of the individual's maximal power output; therefore, effective and accurate evaluation of an individual's F-V characteristics, and not solely maximal power output, is important. The relatively narrow range of loads typically utilised during force-velocity profiling protocols, due to the difficulty in obtaining force data at high velocities, may bring into question the accuracy of the F-V slope, along with predictions pertaining to the maximum force that the system can produce at zero velocity (F₀) and the theoretical maximum velocity against no load (V₀). As such, the reliability of the slope of the force-velocity profile, as well as of V₀, has been shown to be relatively poor in comparison to F₀ and maximal power, and it has been recommended to assess velocity at loads closer to both F₀ and V₀. The aim of the present study was to assess the relative and absolute reliability of an instrumented novel leg press machine which enables the assessment of force and velocity data at loads equivalent to ≤ 10% of one repetition maximum (1RM) through to 1RM during a ballistic leg press movement. The reliability of maximal and mean force, velocity, and power, as well as of the respective force-velocity and power-velocity relationships and the linearity of the force-velocity relationship, was evaluated. Sixteen strength-trained males (23.6 ± 4.1 years; 177.1 ± 7.0 cm; 80.0 ± 10.8 kg) attended four sessions; during the initial visit, participants were familiarised with the leg press, modified to include a mounted force plate (Type SP3949, Force Logic, Berkshire, UK) and a Micro-Epsilon WDS-2500-P96 linear positional transducer (LPT) (Micro-Epsilon, Merseyside, UK). Peak isometric force (IsoMax) and a dynamic 1RM, both from a starting position of 81% leg length, were recorded for the dominant leg. Visits two to four saw the participants carry out the leg press movement at loads equivalent to ≤ 10%, 30%, 50%, 70%, and 90% 1RM. IsoMax was recorded during each testing visit prior to the dynamic F-V profiling repetitions. The novel leg press machine used in the present study appears to be a reliable tool for measuring force- and velocity-related variables across a range of loads, including velocities closer to V₀, when compared to some of the findings within the published literature. Both linear and polynomial models demonstrated good to excellent levels of reliability for the F-V slope (SFV) and F₀, respectively, with reliability for V₀ being good using a linear model but poor using a 2nd-order polynomial model. As such, a polynomial regression model may be most appropriate when using a similar unilateral leg press setup to predict maximal force production capabilities, due to only a 5% difference between F₀ and the obtained IsoMax values, with a linear model being best suited to predicting V₀.
Keywords: force-velocity, leg-press, power-velocity, profiling, reliability
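For context, the linear F-V model referred to above is typically fitted as F(v) = SFV·v + F₀, from which V₀ and maximal power follow. The sketch below uses synthetic force/velocity pairs, not the study's data.

```python
# Hedged sketch: linear F-V fit across five loads, deriving F0, V0, SFV, Pmax.
import numpy as np

velocity = np.array([0.25, 0.55, 0.90, 1.30, 1.75])  # m/s at 90% ... <=10% 1RM (synthetic)
force = np.array([2600, 2250, 1800, 1350, 850.0])    # N (synthetic)

sfv, f0 = np.polyfit(velocity, force, 1)  # F(v) = SFV * v + F0
v0 = -f0 / sfv                            # velocity where F(v) = 0
p_max = f0 * v0 / 4                       # apex of the power-velocity parabola

print(f"F0 = {f0:.0f} N, V0 = {v0:.2f} m/s, "
      f"SFV = {sfv:.0f} N*s/m, Pmax = {p_max:.0f} W")
```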
Procedia PDF Downloads 58
1153 Diagnostic Delays and Treatment Dilemmas: A Case of Drug-Resistant HIV and Tuberculosis
Authors: Christi Jackson, Chuka Onaga
Abstract:
Introduction: We report a case of delayed diagnosis of extra-pulmonary INH-mono-resistant tuberculosis (TB) in a South African patient with drug-resistant HIV. Case presentation: A 36-year-old male was initiated on 1st-line (NNRTI-based) anti-retroviral therapy (ART) in September 2009 and switched to 2nd-line (PI-based) ART in 2011, according to local guidelines. He was following up at the outpatient wellness unit of a public hospital, where he was diagnosed with protease-inhibitor-resistant HIV in March 2016. He had an HIV viral load (HIVVL) of 737,000 copies/mL and a CD4 count of 10 cells/µL, and presented with complaints of a productive cough, weight loss, chronic diarrhoea, and a septic buttock wound. Several investigations were done on sputum, stool, and pus samples, but all were negative for TB. The patient was treated with antibiotics, and the cough and the buttock wound improved. He was subsequently started on a 3rd-line ART regimen of Darunavir, Ritonavir, Etravirine, Raltegravir, Tenofovir, and Emtricitabine in May 2016. He continued losing weight, became too weak to stand unsupported, and started complaining of abdominal pain. Further investigations were done in September 2016, including a urine specimen for a Line Probe Assay (LPA), which showed M. tuberculosis sensitive to Rifampicin but resistant to INH. A lymph node biopsy also showed histological confirmation of TB. Management and outcome: He was started on Rifabutin, Pyrazinamide, and Ethambutol in September 2016, and Etravirine was discontinued. After 6 months on ART and 2 months on TB treatment, his HIVVL had dropped to 286 copies/mL, his CD4 count had improved to 179 cells/µL, and he showed clinical improvement. The pharmacy supply of his individualised drugs was unreliable and presented some challenges to continuity of treatment. He successfully completed his treatment in June 2017 while still maintaining virological suppression. Discussion: Several laboratory-related factors delayed the diagnosis of TB, including the unavailability of urine lipoarabinomannan (LAM) and urine GeneXpert (GXP) tests at this facility. Once the diagnosis was made, it presented a treatment dilemma due to the expected drug-drug interactions between his 3rd-line ART regimen and his INH-resistant TB regimen, and specialist input was required. Conclusion: TB is more difficult to diagnose in patients with severe immunosuppression; therefore, additional tests like urine LAM and urine GXP can be helpful in expediting the diagnosis in these cases. Patients on non-standard drug regimens should always be discussed with a specialist in order to avoid potentially harmful drug-drug interactions.
Keywords: drug-resistance, HIV, line probe assay, tuberculosis
Procedia PDF Downloads 169
1152 Keeping Education Non-Confessional While Teaching Children about Religion
Authors: Tünde Puskás, Anita Andersson
Abstract:
This study is part of a research project about whether religion is considered part of Swedish cultural heritage in Swedish preschools. Our aim in this paper is to explore how teachers in a Swedish preschool with a religious profile balance between keeping education non-confessional and, at the same time, teaching children about a particular tradition with religious roots, Easter. The point of departure for the theoretical frame of our study is that practical considerations in pedagogical situations are inherently dilemmatic. The dilemmas that are of interest for our study evolve around formalized, intellectual ideologies, such as multiculturalism and secularism, that have an impact on everyday practice. Educational dilemmas may also arise at the intersections of the formalized ideology of non-confessionalism, prescribed in policy documents, and common-sense understandings of what is included in what is understood as Swedish cultural heritage. In this paper, religion is treated as a human worldview that, similarly to secular ideologies, can be understood as a system of thought. We make use of Ninian Smart's theoretical framework, according to which, in the modern Western world, religious and secular ideologies, as human worldviews, can be studied within the same analytical framework. In order to study the distinctive character of human worldviews, Smart introduced a multi-dimensional model within which the different dimensions interact with each other in various ways and to different degrees. The data for this paper are drawn from fieldwork carried out in 2015-2016 in the form of video ethnography. The empirical material chosen consists of a video recording of a specific activity during which the preschool group took part in an Easter play performed in the local church. The analysis shows that the policy of non-confessionalism, together with the idea that teaching covering religious issues must be purely informational, leads in everyday practice to dilemmas about what is considered religious. At the same time, what the adults actually do with religion fulfills six of the seven dimensions common to religious traditions as outlined by Smart. What we can also conclude from the analysis is that whether it is religion or a cultural tradition that is taught through the performance the children watched in the church depends on how the concept of religion is defined. The analysis shows that the characters in the performance themselves understood religion as the doctrine of Jesus' resurrection from the dead. This narrow understanding of religion enabled them to teach indirectly about the traditions and narratives surrounding Easter while avoiding teaching religion as a belief system.
Keywords: non-confessional education, preschool, religion, tradition
Procedia PDF Downloads 159
1151 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN
Authors: Mohamed Gaafar, Evan Davies
Abstract:
Many municipalities within Canada and abroad use chloramination to disinfect drinking water so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes and their consequent public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced into stormwater sewers from different water uses and thus into freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. The current study was therefore intended to investigate this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here is only the first stage of this study. The 30th Avenue basin in southern Edmonton was chosen as a case study because the well-developed basin has various land-use types, including commercial, industrial, residential, parks, and recreational. The City of Edmonton had already built a MIKE URBAN stormwater model for modelling floods; however, this model was built to the trunk level, meaning that only the main drainage features were represented. Additionally, the model was not calibrated and was known to consistently compute pipe flows higher than the observed values, which is not suitable for studying water quality. The first goal was therefore to complete and update all stormwater network components in the model. Available GIS data were then used to calculate catchment properties such as slope, length, and imperviousness. In order to calibrate and validate this model, data from two temporary pipe-flow monitoring stations, collected during the previous summer, were used along with records from two other permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on model results was investigated. It was found that model results were affected by the ratio of impervious areas. The catchment length, however calculated, was tested because it is an approximate representation of the catchment shape, and surface roughness coefficients were calibrated. Consequently, computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815, where the lower value pertained to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63 respectively, were all within acceptable ranges.
Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN
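The calibration statistics quoted above can be computed as in the sketch below; the exact error definitions used in the study are not stated, so relative peak and volume errors are assumed here, and the flow series are synthetic.

```python
# Hedged sketch: goodness-of-fit metrics between observed and simulated flows.
import numpy as np

observed = np.array([0.2, 0.8, 2.1, 3.4, 2.6, 1.2, 0.5])   # m3/s (synthetic)
simulated = np.array([0.3, 0.9, 2.0, 3.5, 2.4, 1.3, 0.6])  # m3/s (synthetic)

r = np.corrcoef(observed, simulated)[0, 1]  # correlation coefficient
peak_error = 100 * (simulated.max() - observed.max()) / observed.max()
volume_error = 100 * (simulated.sum() - observed.sum()) / observed.sum()

print(f"r = {r:.3f}, peak error = {peak_error:.2f}%, "
      f"volume error = {volume_error:.2f}%")
```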
Procedia PDF Downloads 297