1317 A Study of Bilingual Development of a Mandarin and English Bilingual Preschool Child from China to Australia
Abstract:
This project aims to trace the developmental patterns of a child's Mandarin and English from China to Australia, from age 3;03 to 5;06. In childhood bilingual studies, there is an assumption that age 3 is the dividing line between simultaneous bilinguals and sequential bilinguals. Determining similarities and differences between Bilingual First Language Acquisition, Early Second Language Acquisition, and Second Language Acquisition is of great theoretical significance. Studies on Bilingual First Language Acquisition (hereafter BFLA) in the past three decades have shown that the grammatical development of bilingual children progresses through the same developmental trajectories as that of their monolingual counterparts. Cross-linguistic interaction does not alter basic grammatical knowledge, even in the weaker language. While BFLA studies show consistent results under conditions of adequate input and a meaningful interactional context, research findings on Early Second Language Acquisition (ESLA) have demonstrated that this cohort develops its early English differently from both BFLA and SLA. The different development could be attributed to the age of migration, the input pattern, and the environmental language (Lε). Meanwhile, the dynamic relationship between the two languages is an issue that invites further attention. The present study attempts to fill this gap. The child in this case study started acquiring L1 Mandarin from birth in China, where the environmental language (Lε) coincided with L1 Mandarin. When she migrated to Australia at 3;06, where the environmental language (Lε) was L2 English, her Mandarin exposure was reduced. On the other hand, she had received limited English input starting from 1;02 in China, where the environmental language (Lε) was L1 Mandarin, a non-English environment. When she relocated to Australia at 3;06, where the environmental language (Lε) coincided with L2 English, her English exposure significantly increased.
The child’s linguistic profile provides an opportunity to explore: (1) What does the child’s English developmental route look like? (2) What does the L1 Mandarin developmental pattern look like under different environmental languages? (3) How do input and environmental language interact in shaping the bilingual child’s linguistic repertoire? In order to answer these questions, two linguistic areas are selected as the focus of the investigation, namely subject realization and wh-questions. The chosen areas are contrastive in structure but perform the same semantic functions in the two linguistically distant languages, and can serve as an ideal testing ground for exploring the developmental path in the two languages. The longitudinal case study adopts a combined approach of qualitative and quantitative analysis. Two years' Mandarin and English data are examined, and comparisons are made with age-matched monolinguals in each language in CHILDES. To the best of the author's knowledge, this study is the first of its kind to examine a Mandarin-English bilingual child's development at a critical age, under different input patterns, and under different environmental languages (Lε). It also expands the scope of the theory of Lε, adding empirical evidence on the relationship between input and Lε in bilingual acquisition.
Keywords: bilingual development, age, input, environmental language (Lε)
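The subject-realization analysis mentioned above is, at its core, a tallying exercise over transcribed utterances. The sketch below is purely hypothetical (the utterances, tags, and function are invented for illustration; real CHILDES work would use the CLAN tools on CHAT transcripts):

```python
# Hypothetical illustration only: invented utterances hand-tagged for
# subject realization. Real analyses would parse CHAT transcripts with CLAN.
utterances = [
    ("want cookie", "null"),      # subject omitted
    ("I want cookie", "overt"),
    ("go park", "null"),
    ("she goes home", "overt"),
    ("eat now", "null"),
]

def subject_realization_rate(utts):
    """Proportion of utterances with an overt subject."""
    overt = sum(1 for _, tag in utts if tag == "overt")
    return overt / len(utts)

print(subject_realization_rate(utterances))  # 2 of 5 utterances are overt
```

Tracking how this rate changes over recording sessions, in each language, is one simple way to quantify a developmental trajectory.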
Procedia PDF Downloads 150
1316 Life Cycle Assessment to Study the Acidification and Eutrophication Impacts of Sweet Cherry Production
Authors: G. Bravo, D. Lopez, A. Iriarte
Abstract:
Several organizations and governments have created a demand for information about the environmental impacts of agricultural products. Today, the export-oriented fruit sector in Chile is being challenged to quantify and reduce its environmental impacts. Chile is the largest southern hemisphere producer and exporter of sweet cherry fruit. Chilean sweet cherry production reached a volume of 80,000 tons in 2012. The main destination market for the Chilean cherry in 2012 was Asia (including Hong Kong and China), taking in 69% of exported volume. Another important market was the United States with 16% participation, followed by Latin America (7%) and Europe (6%). Concerning geographical distribution, Chilean conventional cherry production is concentrated in the center-south area, between the regions of Maule and O’Higgins; both regions represent 81% of the planted surface. Life Cycle Assessment (LCA) is widely accepted as one of the major methodologies for assessing the environmental impacts of products or services. The LCA identifies the material, energy, and waste flows of a product or service, and their impact on the environment. There are scant studies that examine the impacts of sweet cherry cultivation, such as acidification and eutrophication. Within this context, the main objective of this study is to evaluate, using LCA, the acidification and eutrophication impacts of sweet cherry production in Chile. An additional objective is to identify the agricultural inputs that contribute significantly to the impacts of this fruit. The system under study included all the life cycle stages from the cradle to the farm gate (harvested sweet cherry). The data on sweet cherry production correspond to nationwide representative practices and are based on technical-economic studies and field information obtained in several face-to-face interviews.
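The characterization step of an LCA, in which inventory flows are weighted by substance-specific factors and summed per impact category, can be sketched in a few lines. The flows and characterization factors below are illustrative placeholders, not data from this study:

```python
# Illustrative inventory: kg of substance emitted per tonne of fruit
# (placeholder values, not the study's data).
inventory = {"NH3": 1.2, "NOx": 0.8, "SO2": 0.5, "PO4": 0.3}

# Illustrative characterization factors: kg SO2-eq (acidification) and
# kg PO4-eq (eutrophication) per kg of substance emitted.
acidification_cf = {"NH3": 1.88, "NOx": 0.70, "SO2": 1.00}
eutrophication_cf = {"NH3": 0.35, "NOx": 0.13, "PO4": 1.00}

def impact_score(inv, cf):
    """Sum of each inventory flow multiplied by its characterization factor."""
    return sum(amount * cf[s] for s, amount in inv.items() if s in cf)

acid = impact_score(inventory, acidification_cf)    # kg SO2-eq per t fruit
eutro = impact_score(inventory, eutrophication_cf)  # kg PO4-eq per t fruit
```

Comparing the per-substance terms of such sums is what identifies hotspots like the mineral fertilizers reported below.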
The study takes into account the following agricultural inputs: fertilizers, pesticides, diesel consumption for agricultural operations, machinery, and electricity for irrigation. The results indicated that mineral fertilizers are the most important contributors to the acidification and eutrophication impacts of sweet cherry cultivation. Improvement options are suggested for this hotspot in order to reduce the environmental impacts. The results allow fruit companies, policymakers, and other stakeholders to plan and promote low-impact procedures. In this context, this study is one of the first assessments of the environmental impacts of sweet cherry production. New field data or the evaluation of other life cycle stages could further improve knowledge of the impacts of this fruit. This study may also contribute environmental information for other countries with similar sweet cherry production.
Keywords: acidification, eutrophication, life cycle assessment, sweet cherry production
1315 Most Recent Lifespan Estimate for the Itaipu Hydroelectric Power Plant Computed by Using Borland and Miller Method and Mass Balance in Brazil, Paraguay
Authors: Anderson Braga Mendes
Abstract:
Itaipu Hydroelectric Power Plant is located on the Paraná River, which forms a natural boundary between Brazil and Paraguay; thus, the facility is shared by both countries. Itaipu is the biggest hydroelectric generator in the world, providing clean and renewable electrical energy that supplies 17% of Brazil's and 76% of Paraguay's demand. The plant started generation in 1984. It has 20 Francis turbines and an installed capacity of 14,000 MW. Its historic generation record occurred in 2016 (103,098,366 MWh), and from the beginning of its operation until the last day of 2016 the plant generated a cumulative total of 2,415,789,823 MWh. The distinct sedimentologic aspects of the drainage area of Itaipu Power Plant, from its stretch upstream (Porto Primavera and Rosana dams) to downstream (the Itaipu dam itself), were taken into account in order to best estimate the increase or decrease in sediment yield, using data from 2001 to 2016. Such data are collected through a network of 14 automatic sedimentometric stations managed by the company itself and operating on an hourly basis, covering an area of around 136,000 km² (92% of the incremental drainage area of the undertaking). Since 1972, a series of lifespan studies for the Itaipu Power Plant have been made, the first assessed by Hans Albert Einstein at the time of the feasibility studies for the enterprise. Over the following 44 years, eight further studies were made, aiming to confer more precision upon the estimates based on updated data sets. The analysis of each monitoring station clearly showed strong increasing tendencies in sediment yield over the last 14 years, mainly in the Iguatemi, Ivaí, São Francisco Falso and Carapá Rivers, the latter situated in Paraguay, whereas the others lie entirely in Brazilian territory.
Five lifespan scenarios considering different sediment yield tendencies were simulated with the aid of the software programs SEDIMENT and DPOSIT, both developed by the author of the present work. Both programs follow the Borland & Miller methodology (the empirical area-reduction method). The soundest of the five scenarios indicated a lifespan estimate of 168 years, with the reservoir only 1.8% silted by the end of 2016, after 32 years of operation. Besides, the mass balance in the reservoir (water inflows minus outflows) between 1986 and 2016 shows that 2% of the whole Itaipu lake is silted nowadays. Owing to the convergence of both results, which were acquired using different methodologies and independent input data, it is worth concluding that the mathematical modeling is satisfactory and calibrated, thus lending credibility to this most recent lifespan estimate.
Keywords: Borland and Miller method, hydroelectricity, Itaipu Power Plant, lifespan, mass balance
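The mass-balance cross-check admits a back-of-envelope sketch: at a constant sediment yield, the silted fraction grows linearly, and the lifespan is the storage capacity divided by the annually trapped volume. All numbers and the trap efficiency below are invented placeholders; the study's actual estimates come from the far more detailed Borland & Miller area-reduction simulations:

```python
def silted_fraction(years, annual_sediment_m3, capacity_m3, trap_efficiency=0.9):
    """Fraction of reservoir volume occupied by sediment after `years`,
    assuming a constant yield and trap efficiency (both invented here)."""
    return years * annual_sediment_m3 * trap_efficiency / capacity_m3

def lifespan_years(annual_sediment_m3, capacity_m3, trap_efficiency=0.9):
    """Years until the reservoir is fully silted under the same assumptions."""
    return capacity_m3 / (annual_sediment_m3 * trap_efficiency)

# Placeholder figures: 1.0e9 m3 of storage, 6.0e6 m3/yr of incoming sediment.
frac_32yr = silted_fraction(32, 6.0e6, 1.0e9)  # silted fraction after 32 years
life = lifespan_years(6.0e6, 1.0e9)            # total lifespan in years
```

The real method distributes the deposits over the reservoir's elevation-area curve rather than assuming a uniform fill, which is why the empirical area-reduction simulations are needed.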
1314 Modeling Atmospheric Correction for Global Navigation Satellite System Signal to Improve Urban Cadastre 3D Positional Accuracy Case of: TANA and ADIS IGS Stations
Authors: Asmamaw Yehun
Abstract:
“TANA” is the name of an International GNSS Service (IGS) Global Positioning System (GPS) station located at the Institute of Land Administration of Bahir Dar University. The station is named after Lake Tana, one of the largest lakes in Africa. The Institute of Land Administration (ILA) is part of Bahir Dar University, located in Bahir Dar, the capital of the Amhara National Regional State. The institute is the first of its kind in East Africa. The station was installed through the cooperation of ILA and the Swedish International Development Cooperation Agency (SIDA), with SIDA funding support. A Continuously Operating Reference Station (CORS) network provides global navigation satellite system data to support three-dimensional positioning, meteorology, space weather, and geophysical applications throughout the globe. TANA has operated as a CORS since 2013; such sites are independently owned and operated by governments, research and education facilities, and others. The data collected by the reference station are downloadable through the Internet for post-processing by interested parties who carry out GNSS measurements and want to achieve higher accuracy. We made a first observation on the TANA monitoring station on May 29th, 2013. We used Leica 1200 receivers and AX1202GG antennas and made observations from 11:30 until 15:20, for about 3 h 50 min. Data processing was done with CSRS-PPP, the automatic post-processing service of Natural Resources Canada (NRCan). Post-processing was done on June 27th, 2013, 30 days after observation, so the precise ephemeris was available. We found Latitude (ITRF08): 11 34 08.6573 (dms) / 0.008 (m), Longitude (ITRF08): 37 19 44.7811 (dms) / 0.018 (m) and Ellipsoidal Height (ITRF08): 1850.958 (m) / 0.037 (m). We compared this result with GAMIT/GLOBK-processed data, and the two agreed closely. TANA has been the second IGS station in Ethiopia since 2015. It provides data to civilian users, researchers, and governmental and non-governmental organizations.
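The quoted coordinates and per-component sigmas can be handled with a couple of small helpers. The sketch below converts the reported latitude to decimal degrees and combines the three 1-sigma values into a single root-sum-square figure; the combination rule is our illustrative choice, not necessarily the accuracy metric used in the study:

```python
import math

def dms_to_decimal(d, m, s):
    """Convert degrees, minutes, seconds to decimal degrees."""
    return d + m / 60 + s / 3600

# ITRF08 values quoted in the abstract
lat = dms_to_decimal(11, 34, 8.6573)
lon = dms_to_decimal(37, 19, 44.7811)

def rss_3d(s1, s2, s3):
    """Root-sum-square of the per-component standard deviations (metres)."""
    return math.sqrt(s1**2 + s2**2 + s3**2)

# latitude, longitude and height sigmas from the CSRS-PPP solution
sigma_3d = rss_3d(0.008, 0.018, 0.037)
```

The root-sum-square of the PPP sigmas gives a formal uncertainty of about 4 cm; the metre-level 3D accuracies discussed below reflect atmospheric delay effects rather than the formal solution precision.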
TANA is equipped with a very advanced choke-ring antenna and a Leica GR25 receiver, and the site has very good satellite visibility. In order to test the effect of hydrostatic and wet zenith delays on positional data quality, we used GAMIT/GLOBK and found that TANA is the most accurate IGS station in East Africa. Due to lower tropospheric zenith and ionospheric delays, the TANA and ADIS IGS stations have 3D positional accuracies of 2 and 1.9 meters, respectively.
Keywords: atmosphere, GNSS, neutral atmosphere, precipitable water vapour
1313 Men of Congress in Today’s Brazil: Ethnographic Notes on Neoliberal Masculinities in Support of Bolsonaro
Authors: Joao Vicente Pereira Fernandez
Abstract:
In the context of a democratic crisis, a new wave of authoritarianism propels domineering male figures into leadership posts worldwide. Although the gendered aspect of this phenomenon has been reasonably documented, recent studies have focused on high-level commanding posts, such as those of president and prime minister, leaving other positions of political power with limited attention. This natural focus of investigation, however powerful, seems to have restricted our understanding of the phenomenon by precluding a more thorough inquiry into its gendered aspects and its consequences for political representation as a whole. Trying to fill this gap, in recent research we examined the election results of Jair Bolsonaro’s party for the Legislative Branch in 2018. We found that the party's proportion of non-male representatives was about average, showing that it provided reasonable access for women to the legislature in comparative perspective. However, and perhaps more intuitively, we also found that the elected members of Bolsonaro’s party performed very gendered roles, which allowed us to draw the first lines of the representative profiles gathered around the new right in Brazil. These results unveiled new horizons for further research, addressing topics that range from the role of women for the new right in Brazilian institutional politics to the relations between these profiles of representatives, their agendas, and their political and electoral strategies. This article aims to deepen the understanding of some of these profiles in order to lay the groundwork for the second research agenda mentioned above. More specifically, it focuses on two of the three profiles that were grasped predominantly, if not entirely, from masculine subjects in our last research, with the objective of portraying the masculinity standards they mobilize and promote.
These profiles, the entrepreneur and the army man, were chosen to be developed due to their proximity to both liberal and authoritarian views and, moreover, because they can represent two facets of the new right that were integrated in a certain way around Bolsonaro in 2018 but that can be reworked in the future. After a brief introduction to the literature on masculinity and politics in times of democratic crisis, we succinctly present the relevant results of our previous research and then describe these two profiles and their masculinities in detail. We adopt a combination of ethnography and discourse analysis, methods that allow us to make sense of the data we collected in our previous research as well as of the data gathered for this article: social media posts and interactions between the elected members who inspired these profiles and their supporters. Finally, we discuss our results, presenting our main argument on how these descriptions provide a further understanding of the gendered aspect of liberal authoritarianism, from which to better apprehend its political implications in Brazil.
Keywords: Brazilian politics, gendered politics, masculinities, new-right
1312 Quantitative Analysis of the High-Value Bioactive Components of Pre-Germinated and Germinated Pigmented Rice (Oryza sativa L. Cv. Superjami and Superhongmi)
Authors: Lara Marie Pangan Lo, Soo Im Chung, Yao Cheng Zhang, Xingyue Jin, Mi Young Kang
Abstract:
As the world's most consumed grain crop, rice (Oryza sativa L.) has seen increasing demand, which has prompted the development of new rice cultivars with higher bio-functional properties than the commonly used white rice. Ordinary rice varieties are already known to be potential sources of a number of nutritional and bioactive compounds. Germination further enhances rice's nutritive value, in addition to making it tastier and more palatable when cooked. Pigmented rice, on the other hand, has become increasingly popular in recent years for its greater antioxidant potential and other nutraceutical properties, which can help counter the increasing incidence of metabolic diseases. Combining these two parameters, this research study seeks to quantitatively determine the pre-germination and post-germination quantities of the major bioactive compounds of South Korea's newly developed purple pigmented rice cultivar Superjami (SJ) and red pigmented rice cultivar Superhongmi (SH), and to compare them against the non-pigmented normal brown (NB) rice variety. Powdered rice grain cultivars were subjected to a 72-hour germination period, and the quantities of GABA, γ-oryzanol, ferulic acid, and tocopherol and tocotrienol homologues were compared against their pre-germinated condition using γ-aminobutyric acid (GABA) analysis and High-Performance Liquid Chromatography (HPLC). Results revealed the effectiveness of germination in enhancing the bioactive components in all rice samples. GABA content in the germinated rice cultivars increased by more than 10-fold, following the order SJ > SH > NB. In addition, the purple rice variety (SJ) had the higher total γ-oryzanol and ferulic acid contents, which increased by more than 2-fold after germination, followed by the red cultivar SH and then the control, NB. Germinated varieties also possessed higher total tocotrienol content than in their pre-germinated state.
As for total tocopherol content, SJ had the higher quantity, but the red-pigmented SH (0.16 mg/kg) showed a lower total tocopherol content than the control rice NB (0.86 mg/kg). However, all tocopherol and tocotrienol homologues were present only in small amounts (< 3.0 mg/kg) in all pre-germinated and germinated samples. In general, all of the analyzed pigmented rice cultivars were found to possess more bioactive compounds than the control NB rice variety. Also, regardless of strain, germinated rice samples had more bioactive compounds than their pre-germinated counterparts, demonstrating the effectiveness of germination in enhancing bioactive constituents. Overall, these results suggest the potential of pigmented rice varieties as a natural source of nutraceuticals in bio-functional food development.
Keywords: bioactive compounds, germinated rice, superhongmi, superjami
1311 Modelling and Assessment of an Off-Grid Biogas Powered Mini-Scale Trigeneration Plant with Prioritized Loads Supported by Photovoltaic and Thermal Panels
Authors: Lorenzo Petrucci
Abstract:
This paper is intended to give insight into the potential use of small-scale off-grid trigeneration systems powered by biogas generated on a dairy farm. The off-grid plant under analysis comprises a dual-fuel genset as well as electrical and thermal storage equipment and an adsorption machine. The loads are the different apparatus used on the dairy farm, a household where the workers live, and a small electric vehicle whose batteries can also be used as a power source in case of emergency. The insertion of an adsorption machine into the plant is mainly justified by the abundance of thermal energy and the simultaneous high cooling demand associated with the milk-chilling process. In the evaluated operational scenario, our research highlights the importance of prioritizing specific small loads which cannot sustain an interrupted supply of power over time. As a consequence, a photovoltaic-thermal (PVT) panel is included in the plant and is tasked with providing energy independently of potentially disruptive events such as engine malfunctioning or scarce and unstable supplies of fuel. To manage the plant efficiently, an energy dispatch strategy is created to control the flow of energy between the power sources and the thermal and electric storages. In this article we elaborate on models of the equipment, and from these models we extract parameters useful for building load-dependent profiles of the prime movers and storage efficiencies. We show that under reasonable assumptions the analysis provides a sensible estimate of the generated energy. The simulations indicate that a diesel generator sized 25% above the total electrical peak demand operates below the minimum acceptable load threshold 65% of the time. To circumvent such a critical operating mode, dump loads are added through the activation and deactivation of small resistors. In this way, the excess electric energy generated can be transformed into useful heat.
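The dump-load mechanism can be illustrated with a toy dispatch rule: whenever the genset would run below its minimum load fraction, resistor banks are switched in until the deficit is covered, so the surplus output becomes useful heat. The rated power, threshold, and resistor sizes below are invented for illustration and are not the plant's actual parameters:

```python
# Invented plant parameters for illustration only.
GENSET_RATED_KW = 50.0
MIN_LOAD_FRACTION = 0.4            # below this the engine runs inefficiently
DUMP_STEPS_KW = [2.0, 5.0, 10.0]   # switchable resistor banks

def dispatch(electric_demand_kw):
    """Return (genset output, dump load) so output stays at or above the
    minimum acceptable load threshold."""
    min_output = GENSET_RATED_KW * MIN_LOAD_FRACTION
    if electric_demand_kw >= min_output:
        return electric_demand_kw, 0.0
    deficit = min_output - electric_demand_kw
    dump = 0.0
    for step in DUMP_STEPS_KW:     # engage banks until the deficit is covered
        if dump < deficit:
            dump += step
    return electric_demand_kw + dump, dump
```

With a 12 kW demand this rule engages all three banks (17 kW of dump load), lifting the genset to 29 kW, above the 20 kW threshold; a real controller would pick the smallest sufficient combination of banks.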
The combination of PVT and electrical storage to support the prioritized loads in an emergency scenario is evaluated on two different days of the year, those with the lowest and highest irradiation values, respectively. The results show that the renewable energy component of the plant can successfully sustain the prioritized loads, and only on a day with very low irradiation levels does it also need the support of the EVs’ battery. Finally, we show that the adsorption machine can reduce the ice builder and air conditioning energy consumption by 40%.
Keywords: hybrid power plants, mathematical modeling, off-grid plants, renewable energy, trigeneration
1310 Improving Student Learning in a Math Bridge Course through Computer Algebra Systems
Authors: Alejandro Adorjan
Abstract:
Universities are motivated to understand the factors contributing to the low retention of engineering undergraduates. While the number of precollege students interested in engineering increases, the number of engineering graduates continues to decrease and attrition rates for engineering undergraduates remain high. Calculus 1 (C1) is the entry point of most undergraduate Engineering Science programs and often a prerequisite for Computing Curricula courses. Mathematics continues to be a major hurdle for engineering students, and many students who drop out of engineering cite Calculus specifically as one of the most influential factors in that decision. In this context, creating course activities that increase retention and motivate students to obtain better final results is a challenge. In order to develop several competencies in the students of our Software Engineering courses, Calculus 1 at Universidad ORT Uruguay focuses on developing competencies such as the capacity for synthesis, abstraction, and problem solving (based on the ACM/AIS/IEEE guidelines). Every semester we try to reflect on our practice and to answer the following research question: what kind of teaching approach in Calculus 1 can we design to retain students and obtain better results? Since 2010, Universidad ORT Uruguay has offered a six-week, non-compulsory summer bridge course of preparatory math (to bridge the math gap between high school and university). Last semester was the first time the Department of Mathematics offered the course while students were enrolled in C1. Traditional lectures in this bridge course led students to simply transcribe notes from the blackboard. Last semester we proposed a hands-on lab course using GeoGebra (interactive geometry and Computer Algebra System (CAS) software) as a math-driven development tool. Students worked in a computer laboratory class and developed most of the tasks and topics in GeoGebra. As a result of this approach, several pros and cons were found.
The weekly hours of mathematics were excessive for students and, as the course was non-compulsory, attendance decreased over time. Nevertheless, the activity succeeded in improving final test results, and most students expressed pleasure at working with this methodology. This technology-oriented teaching approach strengthens the math competencies needed for Calculus 1 and improves student performance, engagement, and self-confidence. It is important for us as teachers to reflect on our practice, including innovative proposals with the objective of engaging students, increasing retention, and obtaining better results. The high degree of motivation and engagement of participants with this methodology exceeded our initial expectations, so we plan to experiment with more groups during the summer in order to validate these preliminary results.
Keywords: calculus, engineering education, PreCalculus, Summer Program
1309 Effects of Oxytocin on Neural Response to Facial Emotion Recognition in Schizophrenia
Authors: Avyarthana Dey, Naren P. Rao, Arpitha Jacob, Chaitra V. Hiremath, Shivarama Varambally, Ganesan Venkatasubramanian, Rose Dawn Bharath, Bangalore N. Gangadhar
Abstract:
Objective: Impaired facial emotion recognition is widely reported in schizophrenia. The neuropeptide oxytocin is known to modulate brain regions involved in facial emotion recognition, namely the amygdala, in healthy volunteers. However, its effect on the facial emotion recognition deficits seen in schizophrenia is not well explored. In this study, we examined the effect of intranasal oxytocin (OXT) on the processing of facial emotions and its neural correlates in patients with schizophrenia. Method: 12 male patients (age = 31.08±7.61 years, education = 14.50±2.20 years) participated in this single-blind, counterbalanced functional magnetic resonance imaging (fMRI) study. All participants underwent three fMRI scans: one at baseline, and one each after a single dose of 24 IU intranasal OXT and intranasal placebo. The order of administration of OXT and placebo was counterbalanced, and the subjects were blind to the drug administered. Participants performed a facial emotion recognition task presented in a block design with six alternating blocks of faces and shapes. The faces depicted happy, angry, or fearful emotions. The images were preprocessed and analyzed using SPM 12. First-level contrasts comparing recognition of emotions and shapes were modelled at the individual subject level. A group-level analysis was performed using the contrasts generated at the first level to compare the effects of intranasal OXT and placebo. The results were thresholded at uncorrected p < 0.001 with a cluster size of 6 voxels. Results: Compared to placebo, intranasal OXT attenuated activity in the inferior temporal, fusiform, and parahippocampal gyri (BA 20), premotor cortex (BA 6), middle frontal gyrus (BA 10), and anterior cingulate gyrus (BA 24), and enhanced activity in the middle occipital gyrus (BA 18), inferior occipital gyrus (BA 19), and superior temporal gyrus (BA 22).
There were no significant differences between conditions in emotion recognition accuracy scores: baseline (77.3±18.38), oxytocin (82.63±10.92), placebo (76.62±22.67). Conclusion: Our results provide further evidence of the modulatory effect of oxytocin in patients with schizophrenia. A single dose of oxytocin resulted in significant changes in the activity of brain regions involved in emotion processing. Future studies need to examine the effectiveness of long-term treatment with OXT for emotion recognition deficits in patients with schizophrenia.
Keywords: recognition, functional connectivity, oxytocin, schizophrenia, social cognition
1308 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies to realize low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as a high temporal resolution, which can achieve 1 Mframe/s, and a high dynamic range (120 dB). However, the feature that contributes most to low energy consumption is its sparsity: the sensor only captures pixels whose intensity changes. In other words, there is no signal in areas without intensity change. This makes the sensor more energy efficient than conventional sensors such as RGB cameras, because redundant data are removed. On the other hand, the data are difficult to handle because the format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1), and a timestamp; it does not include intensity such as RGB values. Therefore, as existing algorithms cannot be used straightforwardly, a new processing algorithm must be designed to cope with DVS data. To overcome the difference in data format, most prior art builds frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition. However, even with frame data it is difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of RGB pixel values, polarity information is clearly not rich enough. In this context, we propose using the timestamp information as the data representation fed to deep learning.
Concretely, we first divide the event stream into frames over a fixed time period, then assign each pixel an intensity value according to the timestamps within the frame; for example, a high value is given to a recent signal. We expected this representation to capture features, especially of moving objects, because the timestamps encode movement direction and speed. Using the proposed method, we built our own dataset with a DVS fixed on a parked car, in order to develop a surveillance application that detects persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes, because it can run for a long time with low energy consumption in a static scene. For comparison, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
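The timestamp-based representation described above can be sketched in a few lines: events falling in a time window are rendered into a frame whose pixel values grow with recency, so newer events appear brighter. The event list, frame size, and the "keep the most recent event" rule below are our illustrative choices, not necessarily the exact encoding used in the study:

```python
# Each event: (x, y, polarity, timestamp in microseconds). Invented data.
events = [(0, 0, 1, 100), (1, 2, -1, 400), (3, 3, 1, 900), (1, 2, 1, 1000)]
H, W = 4, 4

def events_to_timestamp_frame(events, h, w, t_start, t_end):
    """Build an h x w frame where each pixel holds the normalized timestamp
    of its most recent event (0 = window start, 1 = window end)."""
    frame = [[0.0] * w for _ in range(h)]
    span = t_end - t_start
    for x, y, _pol, t in events:
        value = (t - t_start) / span
        frame[y][x] = max(frame[y][x], value)  # keep the most recent event
    return frame

frame = events_to_timestamp_frame(events, H, W, t_start=0, t_end=1000)
# pixel (x=1, y=2) saw events at t=400 and t=1000; only the newer survives
```

A moving edge leaves a gradient of such values along its trajectory, which is how the representation encodes direction and speed for the downstream CNN.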
1307 CSPG4 Molecular Target in Canine Melanoma, Osteosarcoma and Mammary Tumors for Novel Therapeutic Strategies
Authors: Paola Modesto, Floriana Fruscione, Isabella Martini, Simona Perga, Federica Riccardo, Mariateresa Camerino, Davide Giacobino, Cecilia Gola, Luca Licenziato, Elisabetta Razzuoli, Katia Varello, Lorella Maniscalco, Elena Bozzetta, Angelo Ferrari
Abstract:
Canine and human melanoma, osteosarcoma (OSA), and mammary carcinomas are aggressive tumors with common characteristics, making dogs a good model for comparative oncology. Novel therapeutic strategies against these tumors could be useful to both species. In humans, chondroitin sulphate proteoglycan 4 (CSPG4) is a marker involved in tumor progression and could be a candidate target for immunotherapy. Anti-CSPG4 DNA electrovaccination has been shown to be an effective approach for canine malignant melanoma (CMM) [1]. An immunohistochemistry evaluation of CSPG4 expression in tumour tissue is generally performed prior to electrovaccination. To assess the possibility of a rapid molecular evaluation, and in order to validate these spontaneous canine tumors as models for human studies, we investigated CSPG4 gene expression by RT-qPCR in CMM, OSA, and canine mammary tumors (CMT). Total RNA was extracted from RNAlater-stored tissue samples (CMM n=16; OSA n=13; CMT n=6; five paired normal tissues for CMM, five for OSA, and one for CMT), reverse-transcribed, and then analyzed by duplex RT-qPCR using two different TaqMan assays for the target gene CSPG4 and the internal reference gene (RG) Ribosomal Protein S19 (RPS19). RPS19 was selected from a panel of 9 candidate RGs according to NormFinder analysis, following the protocol already described [2]. Relative expression was analyzed with CFX Maestro™ software. Student's t-test and ANOVA were performed (significance set at P<0.05). Results showed that CSPG4 gene expression in OSA tissues is significantly increased, by 3-4 fold, compared to controls. In CMT, expression of the target was increased 1.5- to 19.9-fold. In melanoma, although an increasing trend was observed, no significant differences between the two groups were found.
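Fold changes like those reported above come from relative-expression analysis of the Ct values against the reference gene. As a hedged sketch of the standard 2^-ΔΔCt calculation, one common way such fold changes are computed (the study used CFX Maestro™; all Ct values below are invented):

```python
def fold_change(ct_target_tumor, ct_ref_tumor, ct_target_normal, ct_ref_normal):
    """Relative expression of a target gene (e.g. CSPG4) in tumor vs. paired
    normal tissue, normalized to a reference gene (e.g. RPS19), using the
    2^-ddCt method. All Ct values used here are invented examples."""
    delta_tumor = ct_target_tumor - ct_ref_tumor
    delta_normal = ct_target_normal - ct_ref_normal
    return 2 ** -(delta_tumor - delta_normal)

# Target amplifies two cycles earlier (relative to RPS19) in tumor tissue:
fc = fold_change(24.0, 18.0, 26.0, 18.0)  # 4-fold increase
```

Because expression is normalized within each sample before comparing tumor to normal tissue, differences in RNA input between samples largely cancel out.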
Immunohistochemistry analysis of the two cancer types showed that the expression of CSPG4 within CMM is concentrated in islands of cells, whereas in OSA the distribution of positive cells is homogeneous. This could explain the differences in the gene expression results. CSPG4 immunohistochemistry evaluation in mammary carcinoma is in progress. The evidence of CSPG4 expression in different types of canine tumors opens the way to extending CSPG4 immunotherapy from CMM to OSA and CMT, and may have an impact on translating this strategy to human oncology.
Keywords: canine melanoma, canine mammary carcinomas, canine osteosarcoma, CSPG4, gene expression, immunotherapy
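Relative expression of this kind is commonly computed with the 2^-ΔΔCt method. The abstract does not state the exact algorithm applied by CFX Maestro™, so the following is only a minimal sketch of the standard calculation, assuming ~100% PCR efficiency; all Ct values in the usage note are hypothetical:

```python
def fold_change(ct_target_tumor, ct_ref_tumor, ct_target_normal, ct_ref_normal):
    """Relative expression of a target gene vs a reference gene (2^-ddCt method)."""
    # Delta Ct: target normalized to the reference gene within each sample
    d_ct_tumor = ct_target_tumor - ct_ref_tumor
    d_ct_normal = ct_target_normal - ct_ref_normal
    # Delta-delta Ct: tumor relative to paired normal tissue
    dd_ct = d_ct_tumor - d_ct_normal
    # Assumes a doubling of product per cycle (~100% efficiency)
    return 2 ** (-dd_ct)
```

For example, a target detected 4 cycles after the reference in tumor but 6 cycles after it in normal tissue (`fold_change(24, 20, 26, 20)`) gives a 4-fold overexpression, on the order of the OSA result reported above.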
Procedia PDF Downloads 174
1306 Didacticization of Code Switching as a Tool for Bilingual Education in Mali
Authors: Kadidiatou Toure
Abstract:
Mali started experimenting with teaching the national languages at school through convergent pedagogy in 1987. The approach became widespread in 1994, with eleven of the thirteen former national languages used at primary school. The aim was to improve the Malian educational system, because the use of French as the only medium of instruction was considered a contributing factor to the significant number of student dropouts and the high rate of repetition. Convergent pedagogy highlights the knowledge acquired by children at home, their vision of the world, and especially the knowledge they have of their mother tongue. That pedagogy requires the use of a specific medium only during classroom practices, and teachers have been trained accordingly. The specific medium depends on the learning content: sometimes it is French, other times it is the national language. Research has shown that bilingual learners do not use only the required medium in their learning activities; they also code-switch. It is part of their learning processes. Currently, many scholars agree on the importance of code switching (CS) in bilingual classes, and teachers have been told about the necessity of integrating it into their classroom practices. One of the challenges of the Malian bilingual education curriculum is the question of 'effective language management'. Theoretically, depending on the classroom, an average time allocation has been established for each of the languages involved. In practice, teachers make use of CS differently: sometimes it favors the learners, other times it contributes to the development of linguistic weaknesses. The present research tries to fill that gap through a tentative model of didactization of CS, which simply means the practical management of the languages involved in the bilingual classroom: knowing how to use CS for effective learning.
Moreover, the didactization of CS tends to sensitize teachers to the functional role of CS so that they may overcome their own weaknesses. The overall goal of this research is to make code switching a real tool for bilingual education. The specific objectives are: to identify the types of CS used during classroom activities; to present the functional role of CS to the teachers as well as the pupils; and to develop a tentative model of code-switching that will help teachers in transitional classes of bilingual schools recognize the appropriate moment for making use of code switching in their classrooms. The methodology adopted is qualitative. The study is based on recorded videos of teachers of the 3rd year of primary school during their classroom activities, and on interviews with the teachers in order to confirm the functional role of CS in bilingual classes. The theoretical framework adopted is the typology of CS proposed by Poplack (1980), used to identify the types of CS observed. The study reveals that teachers need to be trained on the types of CS, the different functions they assume, and the consequences of inappropriate use of language alternation.
Keywords: bilingual curriculum, code switching, didactization, national languages
Procedia PDF Downloads 71
1305 Operation Cycle Model of ASz62IR Radial Aircraft Engine
Authors: M. Duk, L. Grabowski, P. Magryta
Abstract:
Environmental impact is a very important issue in today's air transport. Although there are currently no emissions standards for the turbine and piston engines used in air transport, the environmental effect of exhaust gases from aircraft engines should be as small as possible. For this purpose, R&D centers often use special software to simulate and estimate the negative effects of an engine's working process. Within the cooperation between the Lublin University of Technology and the Polish aviation company WSK "PZL-KALISZ" S.A., aimed at more effective operation of the ASz62IR engine, one such tool has been used. The AVL Boost software allows 1D simulation of the combustion process of piston engines. The ASz62IR is a nine-cylinder aircraft engine in a radial configuration. In order to analyze the impact of its working process on the environment, a mathematical model was built in AVL Boost. This model contains, among others, a model of the operating cycle of the cylinders, based on the change in combustion chamber volume produced by the reciprocating movement of a piston. As a simplification, it was assumed that all pistons move identically. The changes in cylinder volume during an operating cycle were specified; they are essential for determining the energy balance of a cylinder in an internal combustion engine, which is fundamental to a model of the operating cycle. The calculations of the cylinder thermodynamic state were based on the first law of thermodynamics. The change of mass in the cylinder was calculated from the sum of inflowing and outflowing masses, the balance taking into account cylinder internal energy, heat from the fuel, heat losses, mass in the cylinder, cylinder pressure and volume, blowdown enthalpy, evaporation heat, etc. The model assumed that the amount of heat released in the combustion process was calculated from the pace of combustion, using the Vibe model.
For gas exchange, it was also important to consider heat transfer in the inlet and outlet channels, where values are much higher than for flow in a straight pipe. This results from the high heat exchange coefficients and temperature coefficients near the valves and valve seats; a modified Zapf model of heat exchange was used. To use the model with flight scenarios, the impact of flight altitude on engine performance has been analyzed. It was assumed that the pressure and temperature at the inlet and outlet correspond to the values resulting from the International Standard Atmosphere (ISA) model. Combining this operation cycle model with the other submodels of the ASz62IR engine, a full analysis of the performance of the engine under ISA conditions can be made. This work has been financed by the Polish National Centre for Research and Development under the INNOLOT programme.
Keywords: aviation propulsion, AVL Boost, engine model, operation cycle, aircraft engine
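Two of the submodels named above, cylinder volume as a function of crank angle (slider-crank kinematics) and the Vibe heat-release law, can be sketched as follows. This is an illustrative implementation of the standard formulas, not the AVL Boost code; the geometry figures and Vibe parameters (a, m) used in any example call are hypothetical placeholders, not ASz62IR data:

```python
import math

def cylinder_volume(theta_deg, bore, stroke, conrod, compression_ratio):
    """Instantaneous cylinder volume (m^3) from crank angle via slider-crank kinematics."""
    r = stroke / 2.0                        # crank radius
    area = math.pi * bore ** 2 / 4.0        # piston area
    v_disp = area * stroke                  # displaced volume
    v_clearance = v_disp / (compression_ratio - 1.0)
    th = math.radians(theta_deg)
    # Piston distance from top dead center (TDC)
    x = r * (1 - math.cos(th)) + conrod - math.sqrt(conrod ** 2 - (r * math.sin(th)) ** 2)
    return v_clearance + area * x

def vibe_burn_fraction(theta, theta_start, duration, a=6.9, m=2.0):
    """Vibe (Wiebe) cumulative mass fraction burned; theta in crank-angle degrees."""
    if theta < theta_start:
        return 0.0
    y = min((theta - theta_start) / duration, 1.0)
    return 1.0 - math.exp(-a * y ** (m + 1))
```

At 0° (TDC) the volume reduces to the clearance volume and at 180° (BDC) to clearance plus displacement, which is the consistency check used when building such a cycle model; the Vibe curve rises monotonically from 0 to roughly 1 - e^-a over the combustion duration.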
Procedia PDF Downloads 292
1304 Prediction of Time to Crack Reinforced Concrete by Chloride Induced Corrosion
Authors: Anuruddha Jayasuriya, Thanakorn Pheeraphan
Abstract:
In this paper, different mathematical models that can be used to predict the time to cracking of reinforced concrete (RC) due to corrosion are reviewed. The review leads to an experimental study to validate a selected prediction model. Most of these mathematical models depend upon the mechanical behaviors, chemical behaviors, electrochemical behaviors, or geometric aspects of the RC members during a corrosion process. The experimental program is designed to verify the accuracy of a model selected through a rigorous literature study. The program covers both one-dimensional chloride diffusion, using square RC slab elements of 500 mm by 500 mm, and two-dimensional chloride diffusion, using square RC column elements of 225 mm by 225 mm by 500 mm. Each set consists of three water-to-cement ratios (w/c), 0.4, 0.5, and 0.6, and two cover depths, 25 mm and 50 mm. 12 mm bars are used for the column elements and 16 mm bars for the slab elements. All samples are subjected to accelerated chloride corrosion in a bath of 5% (w/w) sodium chloride (NaCl) solution. Based on a pre-screening of different models, the selected mathematical model accounts for mechanical properties, chemical and electrochemical properties, the nature of the corrosion (accelerated or natural), and the amount of porous area that rust products can fill before exerting expansive pressure on the surrounding concrete. The experimental results have shown that the selected model had ±20% accuracy for one-dimensional and ±10% accuracy for two-dimensional chloride diffusion compared to the experimental output. Half-cell potential readings are also used to assess corrosion probability, and the experimental results have shown that mass loss is proportional to the negative half-cell potential readings obtained.
Additionally, a statistical analysis is carried out to determine the most influential factor affecting the time to corrosion of the reinforcement due to chloride diffusion. The factors considered are w/c, bar diameter, and cover depth. The analysis, performed with Minitab statistical software, showed that cover depth has a more significant effect on the time to cracking from chloride-induced corrosion than the other factors considered. Thus, time predictions can be made through the selected mathematical model, as it covers a wide range of factors affecting the corrosion process, and it can be used to assess in advance the durability of RC structures that are vulnerable to chloride exposure. It is further concluded that cover thickness plays a vital role in durability with respect to chloride diffusion.
Keywords: accelerated corrosion, chloride diffusion, corrosion cracks, passivation layer, reinforcement corrosion
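The chloride-ingress side of such models is commonly based on the error-function solution of Fick's second law for a semi-infinite medium. The abstract does not name the specific model validated in the paper, so the sketch below shows only that standard diffusion step, solving numerically for the corrosion-initiation time at the bar surface; the diffusion coefficient and chloride thresholds are hypothetical inputs, not the study's data:

```python
import math

def chloride_content(depth, t, d_coeff, c_surface):
    """Fick's 2nd law, semi-infinite solid: C(x,t) = Cs * erfc(x / (2*sqrt(D*t)))."""
    return c_surface * math.erfc(depth / (2.0 * math.sqrt(d_coeff * t)))

def initiation_time(cover, d_coeff, c_surface, c_critical, t_hi=1e4):
    """Bisect on time t until C(cover, t) reaches the critical chloride content."""
    t_lo = 1e-6
    for _ in range(200):
        t_mid = 0.5 * (t_lo + t_hi)
        if chloride_content(cover, t_mid, d_coeff, c_surface) < c_critical:
            t_lo = t_mid   # not enough chloride at the bar yet: need more time
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)
```

With an assumed D of about 3.16e-5 m²/yr, a surface content of 0.6% and a critical content of 0.06% by weight, a 50 mm cover initiates roughly twice as deep into the structure's life as a 25 mm cover, consistent with the statistical finding above that cover depth dominates.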
Procedia PDF Downloads 218
1303 Through Additive Manufacturing. A New Perspective for the Mass Production of Made in Italy Products
Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola
Abstract:
Recent evolutions in innovation processes and in the intrinsic tendencies of product development lead to new considerations on the design flow. The instability and complexity that describe contemporary life define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of Additive Manufacturing, and also of IoT and AI technologies, continuously confronts us with new paradigms regarding design as a social activity. From the point of view of application, these technologies raise a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of a provisional set of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrary design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new man-algorithm relationship. The contribution investigates the man-algorithm relationship starting from the state of the art of the Made in Italy model: the best-known fields of application are described, and the focus then shifts to specific cases in which the mutual relationship between man and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could absorb many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values.
The trajectory described therefore becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact carries a signature defining it in all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The result envisaged indicates a new vision of digital technologies, no longer understood only as custodians of vast quantities of information, but also as a valid integrated tool in close relationship with the design culture.
Keywords: decision making, design heuristics, product design, product design process, design paradigms
Procedia PDF Downloads 119
1302 Extrudable Foamed Concrete: General Benefits in Prefabrication and Comparison in Terms of Fresh Properties and Compressive Strength with Classic Foamed Concrete
Authors: D. Falliano, G. Ricciardi, E. Gugliandolo
Abstract:
Foamed concrete belongs to the category of lightweight concrete. It is characterized by a density generally ranging from 200 to 2000 kg/m³ and typically comprises cement, water, preformed foam, fine sand, and possibly fine particles such as fly ash or silica fume. The foam component, mixed with the cement paste, gives rise to a system of air voids in the cementitious matrix. The peculiar characteristics of foamed concrete elements can be summarized in the following aspects: 1) lightness, which allows reducing the dimensions of the resisting frame structure and is advantageous in refurbishment or seismic retrofitting in seismically vulnerable areas; 2) thermal insulating properties, especially at low densities; 3) good fire resistance compared to ordinary concrete; 4) improved workability; 5) cost-effectiveness, due to the use of rather simple constituents that are easily available locally. Classic foamed concrete cannot be extruded, as it lacks dimensional stability in the green state, and this severely limits the possibility of industrializing it through a simple and cost-effective process characterized by flexibility and high production capacity. In fact, the viscosity-enhancing agents (VEA) used to extrude traditional concrete cause, in the case of foamed concrete, the collapse of the air bubbles, so that it is impossible to extrude a lightweight product. These requirements have suggested the study of a particular additive that modifies the rheology of the fresh foamed concrete paste by increasing cohesion and viscosity and, at the same time, stabilizes the bubbles in the cementitious matrix, in order to allow dimensional stability in the green state and, consequently, the extrusion of a lightweight product. There are plans to submit the additive's formulation to patent.
In addition to the general benefits of the extrusion process, extrudable foamed concrete allows other limits to be overcome: elimination of formworks, and an expanded application spectrum, since extrusion over a density range of 200 to 2000 kg/m³ allows the prefabrication of both structural and non-structural constructive elements. This contribution also presents the significant differences between extrudable and classic foamed concrete fresh properties in terms of slump. Plastic air content, plastic density, hardened density, and compressive strength have also been evaluated. The outcomes show that there are no substantial differences between the compressive strengths of extrudable and classic foamed concrete.
Keywords: compressive strength, extrusion, foamed concrete, fresh properties, plastic air content, slump
Procedia PDF Downloads 174
1301 The Sea Striker: The Relevance of Small Assets Using an Integrated Conception with Operational Performance Computations
Authors: Gaëtan Calvar, Christophe Bouvier, Alexis Blasselle
Abstract:
This paper presents the Sea Striker, a compact hydrofoil designed to address some of the issues raised by the recent evolutions of naval missions, threats, and operation theatres in modern warfare. Able to perform a wide range of operations, the Sea Striker is a 40-meter stealth surface combatant equipped with a gas turbine and aft and forward foils to reach high speeds. The Sea Striker's stealthiness is enabled by the combination of a composite structure, the exterior design, and the advanced integration of sensors. The ship is fitted with a powerful and adaptable combat system, ensuring a versatile and efficient response to modern threats. Lightly manned with a core crew of 10, this hydrofoil is highly automated and can be remotely piloted for special forces operations or transit. Such a ship is not new: it has been used in the past by different navies, for example by the US Navy with the USS Pegasus. Nevertheless, the recent evolutions in science and technology on the one hand, and the emergence of new missions, threats, and operation theatres on the other, put forward its concept as an answer to today's operational challenges. Indeed, even if multiple opinions and analyses can be given regarding modern warfare and naval surface operations, general observations and tendencies can be drawn, such as the major increase in sensor and weapon types, ranges, and, more generally, capabilities; the emergence of new versatile and evolving threats and enemies, such as asymmetric groups, drone swarms, or hypersonic missiles; and the growing number of operation theatres located in coastal and shallow waters. This research comprised a complete study of the ship, together with several operational performance computations, in order to justify the relevance of using ships like the Sea Striker in naval surface operations.
For the selected scenarios, the conception process enabled the measurement of performance, namely a 'Measure of Efficiency' in the NATO framework, for two different kinds of models: a centralized, classic model using large and powerful ships, and a distributed model relying on several Sea Strikers. After this stage, a comparison of the two models was performed. Lethal, agile, stealthy, compact, and fitted with a complete set of sensors, the Sea Striker is a new major player in modern warfare and constitutes a very attractive option between the naval unit and the combat helicopter, enabling high operational performance at a reduced cost.
Keywords: surface combatant, compact, hydrofoil, stealth, velocity, lethal
Procedia PDF Downloads 118
1300 Design, Simulation and Fabrication of Electro-Magnetic Pulse Welding Coil and Initial Experimentation
Authors: Bharatkumar Doshi
Abstract:
Electro-Magnetic Pulse Welding (EMPW) is a solid-state welding process carried out at almost room temperature, in which joining is enabled by high-velocity impact deformation. In this process, the energy stored in a high-voltage capacitor is discharged into an EM coil, resulting in a damped sinusoidal current with an amplitude of several hundred kiloamperes. This generates transient magnetic fields of a few tens of tesla near the coil. As the conductive part (the tube) is positioned in this area, an opposing eddy current is induced in it. Consequently, high Lorentz forces act on the part, accelerating it away from the coil. In the case of a tube, it is compressed at forming velocities of more than 300 meters per second. After passing the joining gap, it collides with the second metallic joining rod, leading to the formation of a jet under appropriate collision conditions. Due to the prevailing high pressure, metallurgical bonding takes place. A characteristic feature is the wavy interface resulting from the heavy plastic deformation. In the process, the formation of intermetallic compounds, which might deteriorate the weld strength, can be avoided, even for metals with dissimilar thermal properties. Process parameters such as current, voltage, inductance, coil dimensions, workpiece dimensions, air gap, impact velocity, effective plastic strain, and the shear stress acting in the welding/impact zone are critical and important to establish. These process parameters can be determined by simulation using the Finite Element Method (FEM), in which a coupled electromagnetic-structural field analysis is performed. The feasibility of welding can thus be investigated by varying the parameters in a COMSOL simulation. Simulation results shall be applied in performing preliminary experiments on welding different alloy steel tubes and/or alloy steel to other materials.
A single-turn coil (SS 304) with a copper field shaper has been designed and manufactured. The preliminary experiments are performed using the existing EMPW facility available at the Institute for Plasma Research, Gandhinagar, India. In the experiments, a 64 µF capacitor bank is charged to 22 kV and the energy is discharged into the single-turn EM coil. Welding of axisymmetric components such as an aluminum tube and rod has been proven experimentally using EMPW techniques. In this paper, the EM coil design and manufacturing, the electromagnetic-structural FEM simulation of magnetic pulse welding, and preliminary experimental results are reported.
Keywords: COMSOL, EMPW, FEM, Lorentz force
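The damped sinusoidal discharge current described above follows from treating the bank, cabling, and coil as a series RLC circuit. As a minimal sketch: the 22 kV and 64 µF figures come from the abstract, but the circuit inductance and resistance below are hypothetical placeholders (the actual coil parameters are not given), so the resulting amplitudes are only order-of-magnitude illustrations:

```python
import math

def discharge_current(t, v0, c, l, r):
    """Underdamped series RLC capacitor discharge: i(t) = V0/(w*L) * e^(-a*t) * sin(w*t)."""
    a = r / (2.0 * l)                      # damping coefficient (1/s)
    w = math.sqrt(1.0 / (l * c) - a * a)   # ringing angular frequency; assumes underdamped
    return v0 / (w * l) * math.exp(-a * t) * math.sin(w * t)
```

With an assumed total inductance of 1 µH and resistance of 5 mΩ, `discharge_current(t, 22e3, 64e-6, 1e-6, 5e-3)` peaks at roughly 170 kA a few microseconds into the pulse, consistent with the "several hundred kiloamperes" scale cited for such facilities.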
Procedia PDF Downloads 184
1299 Sequential and Combinatorial Pre-Treatment Strategy of Lignocellulose for the Enhanced Enzymatic Hydrolysis of Spent Coffee Waste
Authors: Rajeev Ravindran, Amit K. Jaiswal
Abstract:
Waste from the food-processing industry is produced in large amounts and contains high levels of lignocellulose. Its continuous accumulation in large quantities throughout the year creates a major environmental problem worldwide. The chemical composition of these wastes (up to 75% polysaccharide) makes them an inexpensive raw material for the production of value-added products such as biofuels, bio-solvents, nanocrystalline cellulose, and enzymes. In order to use lignocellulose as the raw material for microbial fermentation, the substrate is subjected to enzymatic treatment, which leads to the release of reducing sugars such as glucose and xylose. However, inherent properties of lignocellulose, such as the presence of lignin, pectin, acetyl groups, and crystalline cellulose, contribute to recalcitrance, leading to poor sugar yields upon enzymatic hydrolysis. A pre-treatment method is therefore generally applied before enzymatic treatment; it removes the recalcitrant components in biomass through structural breakdown. The present study was carried out to find the best pre-treatment method for the maximum liberation of reducing sugars from spent coffee waste (SPW). SPW was subjected to a range of physical, chemical, and physico-chemical pre-treatments; in addition, a sequential, combinatorial pre-treatment strategy, combining two or more pre-treatments, was applied to attain maximum sugar yield. All pre-treated samples were analysed for total reducing sugars, followed by identification and quantification of individual sugars by HPLC coupled with an RI detector. Furthermore, the generation of inhibitory compounds such as furfural and hydroxymethylfurfural (HMF), which can hinder microbial growth and enzyme activity, was monitored.
Results showed that ultrasound treatment (31.06 mg/L) proved to be the best pre-treatment method in terms of total reducing sugar content, followed by dilute acid hydrolysis (10.03 mg/L), while galactose was found to be the major monosaccharide present in the pre-treated SPW. Finally, the results obtained from the study were used to design a sequential lignocellulose pre-treatment protocol to decrease the formation of enzyme inhibitors and increase the sugar yield on enzymatic hydrolysis employing a cellulase-hemicellulase consortium. The sequential, combinatorial treatment was found to be better in terms of total reducing sugar yield and lower formation of inhibitory compounds, likely because this mode of pre-treatment combines several mild treatments rather than a single harsh one. It eliminates the need for a detoxification step and has potential application in the valorisation of lignocellulosic food waste.
Keywords: lignocellulose, enzymatic hydrolysis, pre-treatment, ultrasound
Procedia PDF Downloads 366
1298 Analysis of Distance Travelled by Plastic Consumables Used in the First 24 Hours of an Intensive Care Admission: Impacts and Methods of Mitigation
Authors: Aidan N. Smallwood, Celestine R. Weegenaar, Jack N. Evans
Abstract:
The intensive care unit (ICU) is a particularly resource-heavy environment in terms of the staff, drugs, and equipment required. Whilst many areas of the hospital are attempting to cut down on plastic use and minimise their impact on the environment, this has proven challenging within the confines of intensive care. Concurrently, as globalization has progressed over recent decades, there has been a tendency towards centralised manufacturing with international distribution networks for products, often covering large distances. In this study, we have modelled the standard consumption of plastic single-use items over the first 24 hours of an average individual patient's stay in a 12-bed ICU in the United Kingdom (UK). We have identified the country of manufacture and calculated the minimum possible distance travelled by each item from factory to patient, assuming direct transport via the shortest possible straight line from country of origin to the UK and not accounting for transport within either country. Assuming an intubated patient with invasive haemodynamic monitoring and central venous access, a total of 52 distinct, largely plastic, disposable products would reasonably be required in the first 24 hours after admission. Each product type has been counted only once, to account for multiple items being shipped as one package. Travel distances from origin were summed to give the combined total for all 52 products. The minimum possible total distance travelled from country of origin to the UK for all product types was 273,353 km, equivalent to 6.82 circumnavigations of the globe, or 71% of the way to the moon. The mean distance travelled was 5,256 km, approximately the distance from London to Mecca. With individual packaging for each item, the total weight of consumed products was 4.121 kg.
The CO2 produced shipping these items by air freight would equate to 30.1 kg, whereas doing the same by sea would produce 0.2 kg of CO2. Extrapolating these results to the 211,932 annual UK ICU admissions (2018-2019), even with the underestimates of distance and weight in our assumptions, air freight would account for 6,586 tons of CO2 emitted annually, approximately 130 times that of sea freight. Given the drive towards cost saving within the UK health service and the decline of the local manufacturing industry, buying from intercontinental manufacturers is inevitable. However, transporting all consumables by sea where feasible would be environmentally beneficial, as well as less costly than air freight. At present, the NHS supply chain purchases from medical device companies, and there is no freely available information as to the transport mode used to deliver the product to the UK. This must be made available to purchasers in order to give a fuller picture of life-cycle impact and allow for informed decision-making in this regard.
Keywords: CO2, intensive care, plastic, transport
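The comparison above can be approximated with standard freight emission factors (kg CO2 per tonne-km). The abstract does not state which factors the authors used, so the factors below are illustrative assumptions of the right order of magnitude for air and sea freight; only the weight, mean distance, and admission count are taken from the study:

```python
# Assumed emission factors, kg CO2 per tonne-km (illustrative, not the study's values)
AIR_FACTOR = 1.4
SEA_FACTOR = 0.012

def freight_co2(weight_kg, distance_km, factor):
    """CO2 in kg for moving a load over a distance at a given tonne-km factor."""
    return (weight_kg / 1000.0) * distance_km * factor

weight = 4.121           # kg of consumables per admission (from the study)
mean_distance = 5256.0   # km, mean factory-to-UK distance (from the study)
admissions = 211932      # UK annual ICU admissions, 2018-2019 (from the study)

per_admission_air = freight_co2(weight, mean_distance, AIR_FACTOR)
per_admission_sea = freight_co2(weight, mean_distance, SEA_FACTOR)
annual_air_tonnes = per_admission_air * admissions / 1000.0
```

With these assumed factors the per-admission air figure comes out around 30 kg and the annual air total in the thousands of tonnes, the same order as the abstract's 30.1 kg and 6,586-ton figures; the exact numbers depend entirely on the factors chosen.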
Procedia PDF Downloads 178
1297 No-Par Shares Working in European LLCs
Authors: Agnieszka P. Regiec
Abstract:
Capital companies are based on monetary capital. In the traditional model, the capital is the sum of the nominal values of all shares issued. For some years now, the European countries' limited liability company (LLC) regulations have been leaning towards liberalization of the capital structure, in order to provide a higher degree of autonomy in intra-corporate governance. The reforms were based primarily on the legal system of the USA, where the tradition of no-par shares is well established; thus, the American legal system is chosen as the point of reference. The regulations of Germany, Great Britain, France, the Netherlands, Finland, Poland, and the USA are taken into consideration. The analysis of share capital is important for the development of science not only because the capital structure of the corporation has a significant impact on shareholders' rights, but also because it reflects on the relationship between the creditors of the company and the company itself. A multi-level comparative approach to the problem allows the presentation of a wide range of possible outcomes stemming from the novelization. The dogmatic method was applied: the analysis was based on statutes, secondary sources, and judicial awards, and both the substantive and the procedural aspects of the capital structure were considered. In Germany, as a result of the regulatory competition typical for the EU, the structure of LLCs was reshaped: a new LLC, the Unternehmergesellschaft, which does not require a minimum share capital, was introduced, and the minimum share capital for the Gesellschaft mit beschränkter Haftung was lowered from 25,000 to 10,000 euro. In France, the capital structure of corporations was also altered. In 2003, the minimum share capital of the société à responsabilité limitée (S.A.R.L.) was repealed. In 2009, the minimum share capital of the société par actions simplifiée – the 'simple' version of the S.A.R.L. –
was also changed: there is no minimum share capital required by statute. The company has, however, to indicate a share capital, without the legislator imposing a minimum value for said capital. In the Netherlands, the reform of the Besloten Vennootschap met beperkte aansprakelijkheid (B.V.) was planned with the following change: repeal of the minimum share capital, as the answer to the need for a higher degree of autonomy for shareholders. It, however, preserved shares with nominal value. In Finland, the novelization of the yksityinen osakeyhtiö took place in 2006, and as a result no-par shares were introduced. Despite the fact that the statute allows shares without face value, it still requires a minimum share capital of 2,500 euro. In Poland, a proposal for the restructuring of the capital structure of the LLC has been introduced. The proposal provides, among other things, for devaluation of the minimum capital to 1 PLN or complete liquidation of the minimum share capital, allowing no-par shares to be issued. In conclusion: the American solutions, in particular the balance sheet test and the solvency test, provide better protection for creditors; European no-par shares are not the same as American ones; and the existence of share capital in Poland remains crucial.
Keywords: balance sheet test, limited liability company, nominal value of shares, no-par shares, share capital, solvency test
Procedia PDF Downloads 184
1296 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution
Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino
Abstract:
This paper presents a design methodology in which stakeholders are assisted in the exploration of a so-called negotiation space, aiming at the maximization of both the group’s social welfare and each stakeholder’s perceived utility. The outcome is fewer design iterations needed for design convergence, together with higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken at this stage carry delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in the initial definition. This can be obtained through robust generation and exploration of design alternatives. The process must account for the fact that design usually involves various individuals who take decisions affecting one another, so effective coordination among these decision-makers is critical: finding a mutually agreed solution reduces the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the process of raising the mission concept’s maturity level. This push is obtained through a guided exploration of the negotiation space, which involves the autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, infused with game theory and multi-attribute utility theory. In particular, game theory models the negotiation process so as to reach equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process in efficiently and rapidly searching for the Pareto equilibria among stakeholders.
Finally, the concept of utility constitutes the mechanism that bridges the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders’ needs while guaranteeing the effectiveness of the selected mission concept thanks to its robustness to change. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization
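The evolutionary search for Pareto-optimal trade-offs described above can be sketched in miniature. The utility functions, the design variables and the mutation scheme below are invented for illustration; the actual framework couples multidisciplinary models with game-theoretic equilibrium search:

```python
import random

# Hypothetical utilities for two stakeholders over a design (x, y):
# stakeholder A prefers small x, B prefers large x; both dislike the
# invented shared penalty y (say, mass or cost).
def utilities(design):
    x, y = design
    return (1.0 - x - y, x - y)

def dominates(u, v):
    # u dominates v if it is at least as good on both objectives and differs
    return u != v and u[0] >= v[0] and u[1] >= v[1]

def pareto_front(points):
    # Keep only the non-dominated utility vectors
    return [p for p in points if not any(dominates(q, p) for q in points)]

def evolve(generations=30, size=40, seed=1):
    rng = random.Random(seed)
    pop = [(rng.random(), rng.random()) for _ in range(size)]
    for _ in range(generations):
        # Mutate every design, then keep only non-dominated survivors
        children = [(min(1, max(0, x + rng.gauss(0, 0.05))),
                     min(1, max(0, y + rng.gauss(0, 0.05)))) for x, y in pop]
        pool = pop + children
        front = set(pareto_front([utilities(d) for d in pool]))
        pop = [d for d in pool if utilities(d) in front][:size]
    return pop

final = evolve()
print(min(y for _, y in final))  # the shared penalty tends towards 0
```

The surviving population approximates the Pareto front along which the two stakeholders trade x against each other, while the jointly disliked penalty y is driven down.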
Procedia PDF Downloads 136
1295 Preschoolers’ Selective Trust in Moral Promises
Authors: Yuanxia Zheng, Min Zhong, Cong Xin, Guoxiong Liu, Liqi Zhu
Abstract:
Trust is a critical foundation of social interaction and development, playing a significant role in the physical and mental well-being of children, as well as in their social participation. Previous research has demonstrated that young children do not blindly trust others but make selective trust judgments based on available information. According to Mayer et al.’s model of trust, the characteristics of speakers (ability, benevolence, and integrity) can influence children’s trust judgments. While previous research has focused primarily on the effects of ability and benevolence, relatively little attention has been paid to integrity, which refers to individuals’ adherence to promises, fairness, and justice. This study focuses specifically on how keeping or breaking promises affects young children’s trust judgments. The paradigm of selective trust was employed in two experiments. A sample size of 100 children was required for an effect size of w = 0.30, α = 0.05, and 1 − β = 0.85, using G*Power 3.1. The study employed a 2×2 within-subjects design to investigate the effects of the moral valence of promises (moral vs. immoral promises) and the fulfilment of promises (kept vs. broken promises) on children’s trust judgments (divided into declarative and promising contexts). Experiment 1 adapted binary-choice paradigms, presenting 118 preschoolers (62 girls, mean age = 4.99 years, SD = 0.78) with four conflict scenarios involving the keeping or breaking of moral/immoral promises, in order to investigate children’s trust judgments. Experiment 2 utilized single-choice paradigms, in which 112 preschoolers (57 girls, mean age = 4.94 years, SD = 0.80) were presented with four stories to examine their level of trust.
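The reported G*Power sample-size figure can be reproduced in a few lines. A sketch, assuming a df = 1 chi-square test (the df is an assumption here; G*Power takes it as an input), for which the noncentral distribution reduces to a squared shifted normal:

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF
    return 0.5 * (1 + erf(z / sqrt(2)))

def chi2_df1_power(n, w, z_crit=1.959964):
    # Power of a df = 1 chi-square test: a noncentral chi-square with one
    # degree of freedom is the square of a normal shifted by sqrt(n) * w,
    # so the power has a closed form via the normal CDF.
    lam = sqrt(n) * w
    return phi(lam - z_crit) + phi(-lam - z_crit)

# Smallest n reaching 85% power for w = 0.30 at alpha = .05:
n = 1
while chi2_df1_power(n, 0.30) < 0.85:
    n += 1
print(n)  # 100, matching the reported required sample size
```

With these inputs the loop stops at n = 100, which is consistent with the a priori computation reported in the abstract.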
The results of Experiment 1 showed that preschoolers selectively trusted both promisors who kept moral promises and those who broke immoral promises, as well as their assertions and new promises. Additionally, the 5.5-6.5-year-old children were more likely than the 3.5-4.5-year-old children to trust promisors who kept moral promises and those who broke immoral promises. Moreover, preschoolers were more likely to make accurate trust judgments towards promisors who kept moral promises than towards those who broke immoral promises. The results of Experiment 2 showed significant differences in preschoolers’ degree of trust: kept moral promise > broke immoral promise > broke moral promise ≈ kept immoral promise. This study is the first to investigate the development of trust judgment based on moral promises among preschoolers aged 3.5-6.5. The results show that preschoolers can consider both the valence and the fulfilment of promises when making trust judgments. Furthermore, as preschoolers mature, they become more inclined to trust promisors who keep moral promises and those who break immoral promises. Additionally, the study reveals that preschoolers have the highest level of trust in promisors who kept moral promises, followed by those who broke immoral promises; promisors who broke moral promises and those who kept immoral promises are trusted the least. These findings contribute valuable insights to our understanding of moral promises and trust judgment.
Keywords: promise, trust, moral judgement, preschoolers
Procedia PDF Downloads 54
1294 The Social Aspects of Code-Switching in Online Interaction: The Case of Saudi Bilinguals
Authors: Shirin Alabdulqader
Abstract:
This research aims to investigate the concept of code-switching (CS) between English and Arabic and the CS practices of Saudi online users through a translanguaging (TL) lens, for a more inclusive view of the nature of the data. It draws on digitally mediated communication (DMC), specifically the WhatsApp and Twitter platforms, in order to understand how users employ online resources to communicate with others on a daily basis. The project looks beyond language and considers the multimodal affordances (visual and audio means) that interlocutors utilise in their online communicative practices to shape their online social existence. This exploratory study is based on a data-driven interpretivist epistemology, as it aims to understand how meaning (reality) is created by individuals within different contexts. It used a mixed-methods design combining qualitative and quantitative approaches: in the former, data were collected from online chats and interview responses, while in the latter a questionnaire was employed to understand the frequency of, and relations between, the participants’ linguistic and non-linguistic practices and their social behaviours. The participants were eight bilingual Saudi nationals (both men and women, aged between 20 and 50 years old) who interacted with others online. These participants provided their online interactions, participated in an interview and responded to a questionnaire. The study data were gathered from 194 WhatsApp chats and 122 tweets. These data were analysed and interpreted at three levels: conversational turn-taking and CS; the linguistic description of the data; and CS and persona. The project contributes to the emerging field of systematically analysing online Arabic data and to the fields of multimodality and bilingual sociolinguistics. The findings are reported for each of the three levels.
For conversational turn-taking, the CS analysis revealed that CS was used to accomplish negotiation and develop meaning in the conversation. With regard to the linguistic practices in the CS data, the majority of the code-switched words were content morphemes. The third level of interpretation concerned CS and its relationship with identity; two types of identity were indexed: absolute identity and contextual identity. This study contributes to the DMC literature and bridges some of the existing gaps. Most of the findings, if not all, support the TL notion that multiliteracy is one’s ability to decode multimodal communication and that this multimodality contributes to meaning. Whether this applies to the online affordances used by monolinguals or by multilinguals, and whether they are perceived not only by specific generations but by any online multiliterate, the study provides the linguistic features of CS utilised by Saudi bilinguals and determines the relationship between these features and the contexts in which they appear.
Keywords: social media, code-switching, translanguaging, online interaction, Saudi bilinguals
Procedia PDF Downloads 131
1293 Argos System: Improvements and Future of the Constellation
Authors: Sophie Baudel, Aline Duplaa, Jean Muller, Stephan Lauriol, Yann Bernard
Abstract:
Argos has been the main satellite telemetry system used by the wildlife research community since its creation in 1978, for animal tracking and scientific data collection all around the world, to analyze and understand animal migrations and behavior. Marine mammal biology is one of the major disciplines to have benefited from Argos telemetry, and conversely, the marine mammal biologists’ community has contributed greatly to the growth and development of Argos use cases. The Argos constellation, with 6 satellites in orbit in 2017 (Argos 2 payloads on NOAA 15 and NOAA 18, Argos 3 payloads on NOAA 19, SARAL, METOP A and METOP B), is being extended in the following years with an Argos 3 payload on METOP C (launch in October 2018) and Argos 4 payloads on Oceansat 3 (launch in 2019), CDARS in December 2021 (to be confirmed), METOP-SG-B1 in December 2022, and METOP-SG-B2 in 2029. Argos 4 will allow more frequency bands (600 kHz for Argos4NG, instead of 110 kHz for Argos 3), a new modulation dedicated to animal (sea turtle) tracking allowing very low transmission power transmitters (50 to 100 mW) with very low data rates (124 bps), and enhancement of high data rates (1200-4800 bps) and downlink performance, all contributing to enhancing the system capacity (50,000 active beacons per month instead of 20,000 today). In parallel with this ‘institutional Argos’ constellation, in the context of a miniaturization trend in the space industry aimed at reducing costs and multiplying satellites to serve more and more societal needs, the French Space Agency CNES, which designs the Argos payloads, is innovating and launching the Argos ANGELS project (Argos NEO Generic Economic Light Satellites). ANGELS will lead to a nanosatellite prototype with an Argos NEO instrument (30 cm x 30 cm x 20 cm) that will be launched in 2019. In the meantime, the design of the renewal of the Argos constellation, called Argos For Next Generations (Argos4NG), is on track and will be operational in 2022.
Based on Argos 4 and benefiting from the feedback of the ANGELS project, this constellation will allow a revisit time of fewer than 20 minutes on average between two satellite passes and will also bring more frequency bands to improve the overall capacity of the system. The presentation will give an overview of the Argos system, present and future, and the new capacities coming with it. On top of that, use cases of two Argos hardware modules will be presented: the goniometer pathfinder, allowing the recovery of Argos beacons at sea or on the ground within a 100 km radius horizon-free circle around the beacon location, and the new Argos 4 chipset called ‘Artic’, already available and tested by several manufacturers.
Keywords: Argos satellite telemetry, marine protected areas, oceanography, maritime services
Procedia PDF Downloads 182
1292 Blood Microbiome in Different Metabolic Types of Obesity
Authors: Irina M. Kolesnikova, Andrey M. Gaponov, Sergey A. Roumiantsev, Tatiana V. Grigoryeva, Dilyara R. Khusnutdinova, Dilyara R. Kamaldinova, Alexander V. Shestopalov
Abstract:
Background. Obese patients have unequal risks of metabolic disorders. It is customary to distinguish between metabolically healthy obesity (MHO) and metabolically unhealthy obesity (MUHO). MUHO patients have a high risk of metabolic disorders, insulin resistance, and diabetes mellitus. Among other things, the gut microbiota also contributes to the development of metabolic disorders in obesity: obesity is accompanied by significant changes in the gut microbial community, and, in turn, bacterial translocation from the intestine is the basis for the formation of the blood microbiome. The aim was to study the features of the blood microbiome in patients with various metabolic types of obesity. Patients, materials, methods. The study included 116 healthy donors and 101 obese patients. Depending on the metabolic type of obesity, the obese patients were divided into subgroups with MHO (n=36) and MUHO (n=53). The quantitative and qualitative assessment of the blood microbiome was based on metagenomic analysis: blood samples were used to isolate DNA and sequence the variable v3-v4 region of the 16S rRNA gene. Alpha-diversity indices (Simpson index, Shannon index, Chao1 index, phylogenetic diversity, and the number of observed operational taxonomic units) were calculated. Moreover, we compared taxa (phyla, classes, orders, and families) between the patient groups in terms of isolation frequency and each taxon’s share in the total pool of bacterial DNA. Results. In patients with MHO, the alpha-diversity characteristics of the blood microbiome were similar to those of healthy donors. However, MUHO was associated with an increase in all diversity indices. The main phyla of the blood microbiome were Bacteroidetes, Firmicutes, Proteobacteria, and Actinobacteria. Cyanobacteria, TM7, Thermi, Verrucomicrobia, Chloroflexi, Acidobacteria, Planctomycetes, Gemmatimonadetes, and Tenericutes were found to be less significant phyla of the blood microbiome.
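Most of the alpha-diversity indices listed above can be computed directly from a vector of OTU read counts. A minimal sketch with invented counts (phylogenetic diversity is omitted, as it additionally requires a phylogenetic tree):

```python
from math import log

def alpha_diversity(otu_counts):
    # Common alpha-diversity indices from a vector of OTU read counts.
    counts = [c for c in otu_counts if c > 0]
    n = sum(counts)
    props = [c / n for c in counts]
    shannon = -sum(p * log(p) for p in props)       # Shannon entropy
    simpson = 1 - sum(p * p for p in props)         # Gini-Simpson form
    observed = len(counts)                          # observed OTUs
    f1 = sum(1 for c in counts if c == 1)           # singletons
    f2 = sum(1 for c in counts if c == 2)           # doubletons
    chao1 = observed + (f1 * (f1 - 1)) / (2 * (f2 + 1))  # bias-corrected Chao1
    return {"shannon": shannon, "simpson": simpson,
            "observed": observed, "chao1": chao1}

# Invented count vector: three abundant OTUs, two singletons, one doubleton
print(alpha_diversity([10, 10, 10, 1, 1, 2]))
```

Higher values of each index correspond to the greater diversity reported for the MUHO group.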
The phyla Acidobacteria, TM7, and Verrucomicrobia were more often isolated in blood samples of patients with MUHO than in those of healthy donors. Obese patients showed a decrease in some taxonomic ranks (Bacilli, Caulobacteraceae, Barnesiellaceae, Rikenellaceae, Williamsiaceae). These changes appear to be related to the increased diversity of the blood microbiome observed in obesity. An increase in Lachnospiraceae, Succinivibrionaceae, Prevotellaceae, and S24-7 was noted for MUHO patients, which is apparently explained by increased intestinal permeability. Conclusion. The blood microbiome differs between obese patients and healthy donors at the class, order, and family levels. Moreover, the nature of the changes is determined by the metabolic type of obesity: MUHO is linked to an increased diversity of the blood microbiome. This appears to be due to increased microbial translocation from the intestine and from non-intestinal sources.
Keywords: blood microbiome, blood bacterial DNA, obesity, metabolically healthy obesity, metabolically unhealthy obesity
Procedia PDF Downloads 164
1291 Provisional Settlements and Urban Resilience: The Transformation of Refugee Camps into Cities
Authors: Hind Alshoubaki
Abstract:
The world is now confronting a widespread urban phenomenon: refugee camps, which have mostly been established in ‘rushing mode’, aiming to afford refugees temporary settlements that provide minimum levels of safety, security and protection from harsh weather conditions within a very short time. In fact, these emergency settlements are transforming into permanent ones, since time is a decisive factor in construction and in a camp’s age. This plays an essential role in transforming their temporary character into a permanent one that generates deep modifications to the city’s territorial structure, shaping a new identity and creating a contentious change in the city’s form and history. To achieve a better understanding of the transformation of refugee camps, this study is based on a mixed-methods approach: the qualitative part explores different refugee camps and analyzes their transformation process in terms of population density and changes to the city’s territorial structure and urban features, while the quantitative part employs a statistical regression analysis of refugees’ satisfaction within the Zaatari camp in order to predict its future transformation. Refugees’ perceptions of their current conditions affect their satisfaction, which plays an essential role in transforming emergency settlements into permanent cities over time. The analysis covers five main themes: the access and readiness of schools; the dispersion of clinics and shopping centers; the camp infrastructure; the construction materials; and the street networks. The statistical analysis showed that Syrian refugees were not satisfied with their current conditions inside the Zaatari refugee camp and that they had started implementing changes according to their needs, desires, and aspirations, because they are conscious of the fact that their stay in this settlement will be prolonged.
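The regression idea can be illustrated with a minimal ordinary-least-squares fit of satisfaction on a single theme score. The scores below are invented; the study’s model considers the five themes jointly:

```python
def fit_simple_ols(xs, ys):
    # Ordinary least squares for y = alpha + beta * x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    alpha = my - beta * mx
    return alpha, beta

# Invented 1-5 ratings: camp infrastructure score vs. overall satisfaction
infrastructure = [1, 2, 3, 4, 5]
satisfaction = [1.2, 1.9, 3.1, 3.8, 5.0]
a, b = fit_simple_ols(infrastructure, satisfaction)
print(round(b, 2))  # 0.95: higher infrastructure ratings predict higher satisfaction
```

The fitted slope is what such a model uses to predict how satisfaction, and hence the camp’s transformation, responds to improvements in each theme.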
Also, the case study analyses showed that neglecting the fact that construction takes time leads to settlements being created with below-minimum standards that deteriorate into ‘slums’, with increased crime rates, suicide, drug use and disease, deeply affecting cities’ urban tissue. For this reason, recognizing the ‘temporary-eternal’ character of those settlements is fundamental: refugee camps should be considered from the beginning as permanent cities. This is the key factor in minimizing the trauma of displacement for both the refugees and the hosting countries, since providing emergency settlements within a short time period does not have to mean using temporary materials, having a provisional character or creating ‘makeshift cities’.
Keywords: refugee, refugee camp, temporary, Zaatari
Procedia PDF Downloads 133
1290 Pixel Façade: An Idea for Programmable Building Skin
Authors: H. Jamili, S. Shakiba
Abstract:
Today, one of the main concerns of human beings is facing the unpleasant changes in the environment. Buildings are responsible for a significant amount of natural resource consumption and carbon emissions. In such a situation, the thought arises of changing each building into a phenomenon that benefits the environment: a change whereby each building functions as an element that supports the environment, so that construction, in addition to answering human needs, is encouraged the way planting a tree is, and is no longer seen as a threat to living beings and the planet. Prospect: today, different ideas for developing materials that can function smartly are being realized. For instance, programmable materials can respond appropriately to different conditions, with the capacity to modify their shape, size and physical properties and to restore and repair themselves. Studies are progressing with the aim of designing these materials so that they are easily available, without the need for expensive materials and high technologies. In such cases, the physical attributes of the materials take on the role of sensors, wires and actuators, and the materials themselves become robots; in effect, we experience robotics without robots. In recent decades, advances in AI and technology have dramatically improved the performance of materials. These achievements are a combination of software optimization and physical production techniques such as multi-material 3D printing. These capabilities enable us to program materials to change shape, appearance, and physical properties and to interact with different situations. It is expected that further achievements, such as memory materials and self-learning materials, will be added to the smart-materials family and will be affordable, available, and of use for a variety of applications and industries.
From the architectural standpoint, the building skin is the focus of this research, given the noticeable surface area that building skins occupy in urban space. The purpose of this research is to find a way for programmable materials to be used in the building skin with the aim of an effective and positive interaction. A Pixel Façade would be a solution for programming a building skin. The Pixel Façade includes components with a series of attributes that help buildings meet their needs according to environmental criteria. A PIXEL combines a series of smart materials and digital controllers. It not only benefits from its physical properties, such as controlling the amount of sunlight and heat, but also enhances building performance by providing a list of features depending on situational criteria. The features vary by location and function differently during the daytime and in different seasons. The primary role of a PIXEL FAÇADE can be defined as filtering pollution (both inside and outside the buildings) and providing clean energy, as well as interacting with other PIXEL FAÇADES to estimate better reactions.
Keywords: building skin, environmental crisis, pixel facade, programmable materials, smart materials
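The PIXEL concept of a skin element choosing its behaviour from environmental inputs can be sketched as a toy model. All thresholds, input names and behaviour modes below are invented for illustration:

```python
class Pixel:
    # One facade element; react() picks behaviours from environmental inputs
    def __init__(self, row, col):
        self.row, self.col = row, col

    def react(self, sunlight, temperature, pollution):
        # Inputs: sunlight and pollution on a 0-1 scale, temperature in Celsius
        actions = []
        if sunlight > 0.7:
            actions.append("shade")      # limit solar gain
        elif sunlight > 0.2:
            actions.append("harvest")    # collect solar energy
        if temperature < 5:
            actions.append("insulate")
        if pollution > 0.5:
            actions.append("filter")     # filter pollutants
        return actions or ["idle"]

# A facade is simply a grid of pixels, each reacting to local conditions
facade = [[Pixel(r, c) for c in range(4)] for r in range(4)]
print(facade[0][0].react(sunlight=0.9, temperature=3, pollution=0.6))
```

A real implementation would replace the hard-coded thresholds with learned or location-dependent rules, and would let neighbouring pixels exchange state, as the abstract suggests.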
Procedia PDF Downloads 89
1289 Personality Based Tailored Learning Paths Using Cluster Analysis Methods: Increasing Students' Satisfaction in Online Courses
Authors: Orit Baruth, Anat Cohen
Abstract:
Online courses have become common in many learning programs and various learning environments, particularly in higher education. The social distancing enforced in response to the COVID-19 pandemic has increased the demand for these courses. Yet, despite the frequency of use, online learning is not free of limitations and may not suit all learners. Hence, the growth of online learning alongside learners’ diversity raises the question: does online learning, as currently offered, meet the needs of each learner? Fortunately, today’s technology allows for tailored learning platforms, namely personalization. Personality influences a learner’s satisfaction and therefore has a significant impact on learning effectiveness. A better understanding of personality can lead to a greater appreciation of learning needs and can assist educators in ensuring that an optimal learning environment is provided. In the context of online learning and personality, research on learning design according to personality traits is lacking. This study explores the relations between personality traits (using the ‘Big Five’ model) and students’ satisfaction with five techno-pedagogical learning solutions (TPLS): discussion groups, digital books, online assignments, surveys/polls, and media, in order to provide an online learning process to students’ satisfaction. The satisfaction level and personality of 108 students who participated in a fully online course at a large, accredited university were measured. Cluster analysis methods (k-means) were applied to identify learner clusters according to their personality traits, and correlation analysis was performed to examine the relations between the obtained clusters and satisfaction with the offered TPLS. Findings suggest that learners associated with the ‘Neurotic’ cluster showed low satisfaction with all TPLS compared to learners associated with the ‘Non-neurotic’ cluster.
Learners associated with the ‘Conscientious’ cluster were satisfied with all TPLS except discussion groups, and those in the ‘Open-Extrovert’ cluster were satisfied with assignments and media. All clusters except the ‘Neurotic’ one were highly satisfied with the online course in general. According to the findings, dividing learners into four clusters based on personality traits may help define tailored learning paths for them, combining various TPLS to increase their satisfaction. As personality comprises a set of traits, several TPLS may be offered in each learning path. For the neurotics, however, an extended selection may be more suitable, or alternatively they may be offered the TPLS they dislike least. The study findings clearly indicate that personality plays a significant role in a learner’s satisfaction level. Consequently, personality traits should be considered when designing personalized learning activities. The current research seeks to bridge the theoretical gap in this specific research area: establishing the assumption that different personalities need different learning solutions may contribute towards a better design of online courses, leaving no learner behind, whether or not they like online learning.
Keywords: online learning, personality traits, personalization, techno-pedagogical learning solutions
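The clustering step can be sketched with a plain k-means implementation over Big-Five trait vectors. The vectors below are invented; the study clustered the measured trait profiles of 108 students:

```python
import random

def dist2(a, b):
    # Squared Euclidean distance between two trait vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(points):
    n = len(points)
    return tuple(sum(xs) / n for xs in zip(*points))

def kmeans(points, k, iters=20, seed=0):
    # Plain k-means: assign to nearest center, recompute centers, repeat
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[nearest].append(p)
        centers = [centroid(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Invented 0-1 trait vectors:
# (openness, conscientiousness, extraversion, agreeableness, neuroticism)
students = [(0.8, 0.4, 0.9, 0.6, 0.2), (0.7, 0.5, 0.8, 0.5, 0.1),
            (0.2, 0.9, 0.3, 0.7, 0.2), (0.3, 0.8, 0.2, 0.6, 0.3),
            (0.4, 0.3, 0.4, 0.4, 0.9), (0.5, 0.2, 0.3, 0.5, 0.8)]
centers, clusters = kmeans(students, k=3)
print(sorted(len(c) for c in clusters))
```

Each resulting cluster (e.g. a high-neuroticism group) can then be correlated with satisfaction scores for the individual TPLS, as in the study.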
Procedia PDF Downloads 103
1288 Structural Analysis of Archaeoseismic Records Linked to the 5 July 408 - 410 AD Utica Strong Earthquake (NE Tunisia)
Authors: Noureddine Ben Ayed, Abdelkader Soumaya, Saïd Maouche, Ali Kadri, Mongi Gueddiche, Hayet Khayati-Ammar, Ahmed Braham
Abstract:
The archaeological site of Utica, located in north-eastern Tunisia, was founded (8th century BC) by the Phoenicians as a port installed on the trade route connecting Phoenicia and the Straits of Gibraltar in the Mediterranean Sea. The flourishing of this city as an important settlement during the Roman period was followed by sudden abandonment, disuse and progressive oblivion in the first half of the fifth century AD. This decline can be attributed to the destructive earthquake of 5 July 408 - 410 AD that affected this historic city, as documented in 1906 by the seismologist Fernand de Montessus de Ballore. The magnitude of the Utica earthquake was estimated at 6.8 by the Tunisian National Institute of Meteorology (INM). In order to highlight the damage caused by this earthquake, a field survey was carried out at the Utica ruins to detect and analyse the earthquake archaeological effects (EAEs) using structural geology methods. This approach allowed us to highlight several types of structural damage, including: (1) folded mortar pavements, (2) cracks affecting the mosaic and walls of a water basin in the "House of the Grand Oecus", (3) displaced columns, (4) block extrusion in masonry walls, (5) undulations in mosaic pavements, and (6) tilted walls. The structural analysis of these EAEs and the data measurements reveal a seismic cause for all evidence of deformation in the Utica monument. The maximum horizontal strain of the ground (SHmax) inferred from the building-oriented damage in Utica shows a NNW-SSE direction under a compressive tectonic regime. For the seismogenic source of this earthquake, we propose the active E-W to NE-SW trending Utique - Ghar El Melh reverse fault, passing through the Utica monument and extending towards the Ghar El Melh Lake, as the causative tectonic structure. The active fault trace is well supported by instrumental seismicity, geophysical data (e.g., gravity, seismic profiles) and geomorphological analyses.
In summary, we find that the archaeoseismic records detected at Utica are similar to those observed at many other archaeological sites affected by destructive ancient earthquakes around the world. Furthermore, the calculated orientation of the average maximum horizontal stress (SHmax) closely matches the present-day stress field, as highlighted by earthquake focal mechanisms in this region.
Keywords: Tunisia, Utica, seismogenic fault, archaeological earthquake effects
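The average SHmax orientation discussed above is an axial (0-180°) quantity, so individual damage-inferred directions cannot simply be averaged arithmetically. A sketch of the standard angle-doubling approach, with invented strike measurements:

```python
from math import sin, cos, atan2, radians, degrees

def mean_axial_direction(angles_deg):
    # Axial data wrap at 180°: double the angles, average the unit
    # vectors, take the resultant direction, then halve it.
    s = sum(sin(radians(2 * a)) for a in angles_deg)
    c = sum(cos(radians(2 * a)) for a in angles_deg)
    return (degrees(atan2(s, c)) / 2) % 180

# Invented NNW-SSE oriented measurements straddling the 0/180 wrap:
print(round(mean_axial_direction([160, 170, 175, 5]), 1))
# 172.5 (NNW-SSE), whereas the naive arithmetic mean would give a misleading 127.5
```

Without the doubling trick, measurements on either side of north (e.g. 175° and 5°) would average to an east-west value instead of the near-north axis they actually share.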
Procedia PDF Downloads 45