Search results for: Radial Basis Functions (RBF) neural networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9213

693 Cross-Sectional Analysis of the Health Product E-Commerce Market in Singapore

Authors: Andrew Green, Jiaming Liu, Kellathur Srinivasan, Raymond Chua

Abstract:

Introduction: The size of Singapore’s online health product (HP) market (e-commerce) is largely unknown. However, it is recognized that a large majority of these products come from overseas and are thus unregulated. As buying HP from unauthorized sources significantly compromises public health safety, understanding e-commerce users’ demographics and their perceptions of online HP purchasing becomes a pivotal first step in forming a basis for recommendations in Singapore’s pharmacovigilance efforts. Objective: To assess the prevalence of online HP purchasing behaviour among Singaporean e-commerce users. Methodology: This is a cross-sectional study targeting Singaporean e-commerce users recruited from various local websites and online forums. Participants were not randomized into study arms but were instead recruited by stratified random sampling based on age. A self-administered anonymous questionnaire was used to explore participants' demographics, online HP purchasing behaviour, knowledge and attitude. The association of different variables with online HP purchasing behaviour was analysed using logistic regression. Main outcome measures: Prevalence of HP e-commerce users in Singapore (%) and variables that contribute to the prevalence (adjusted prevalence ratio). Results: The study recruited 372 complete and valid responses. The prevalence of online HP consumers among e-commerce users in Singapore is estimated to be 55.9% (1.7 million consumers). Online purchasing of complementary HP (46.9%) was the most prevalent, followed by medical devices (21.6%) and Western medicine (20.5%). Multivariate analysis showed that age is an independent variable that correlates with the likelihood of buying HP online. The prevalence of HP e-commerce users is highest in the 35-44 age group (64.1%) and lowest in the 16-24 age group (36.4%). The HP most commonly bought through the internet are vitamins and minerals (21.5%), non-herbal (15.9%), herbal (13.9%), weight loss (8.7%) and sports (8.4%) supplements. While the top 3 products are distributed equally between the genders, there is a skew towards female respondents (12.4% in females vs. 4.9% in males) for weight loss supplements and towards males (13.2% in males vs. 3.7% in females) for sports supplements. Even though online consumers fall mostly in the younger age brackets, our study found that up to 72.0% of HP bought online are bought for others (the buyer’s family and/or friends). Multivariate analysis showed a statistically significant association between purchasing HP through online means and the perceptions that 'the internet is safe' (adjusted Prevalence Ratio=1.15, CI 1.03-1.28), 'buying HP online is time saving' (PR=1.17, CI 1.01-1.36), and 'recognition of HP brand' (PR=1.21, CI 1.06-1.40). Conclusions: This study has provided prevalence data for the online HP market in Singapore and has allowed the country’s regulatory body to formulate a targeted pharmacovigilance approach to this growing problem.
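A note on reproducing this kind of estimate: the sketch below is not the authors' code, but it illustrates one common way of obtaining adjusted prevalence ratios for a binary outcome, namely a Poisson (log-link) model with robust standard errors fitted to the survey data; the file name and all column names are hypothetical.

```python
# Minimal sketch (not the study's code): adjusted prevalence ratios for online
# HP purchasing via a modified Poisson model with robust (sandwich) errors.
# "survey_responses.csv" and all column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # 372 complete and valid responses

model = smf.glm(
    "bought_hp_online ~ C(age_group) + internet_safe + time_saving + brand_recognition",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC1")                     # robust errors keep CIs valid for a 0/1 outcome

summary = pd.concat(
    [np.exp(model.params).rename("adjusted PR"),
     np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1,
)
print(summary)
```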

Keywords: e-commerce, pharmaceuticals, pharmacovigilance, Singapore

Procedia PDF Downloads 340
692 How to “Eat” without Actually Eating: Marking Metaphor with Spanish Se and Italian Si

Authors: Cinzia Russi, Chiyo Nishida

Abstract:

Using data from online corpora (Spanish CREA, Italian CORIS), this paper examines the relatively understudied use of Spanish se and Italian si exemplified in (1) and (2), respectively. (1) El rojo es … el que se come a los demás. ‘The red (bottle) is the one that outshines/*eats the rest.’ (2) … ebbe anche la saggezza di mangiarsi tutto il suo patrimonio. ‘… he even had the wisdom to squander/*eat all his estate.’ In these sentences, se/si accompanies the consumption verb comer/mangiare ‘to eat’, without which the sentences would not be interpreted appropriately. This se/si cannot readily be attributed to any of the multiple functions so far identified in the literature: reflexive, ergative, middle/passive, inherent, benefactive, and complete consumptive. In particular, this paper argues against the feasibility of a recent construction-based analysis of sentences like (1) and (2), which situates se/si within a prototype-based network of meanings all deriving from the central meaning of 'COMPLETE CONSUMPTION' (e.g., Alice se comió toda la torta / Alice si è mangiata tutta la torta ‘Alice ate the whole cake’). Clearly, the empirical adequacy of such an account is undermined by the fact that the events depicted in the se/si-sentences at issue do not always entail complete consumption because they may lack an INCREMENTAL THEME, the distinguishing property of complete consumption. Alternatively, it is proposed that the sentences under analysis represent instances of verbal METAPHORICAL EXTENSION: se/si represents an explicit marker of this cognitive process, which has independently developed from the complete consumptive se/si, and the meaning extension is captured by the general tenets of Conceptual Metaphor Theory (CMT). Two conceptual domains, source (DS) and target (DT), are related by similarity, assigning an appropriate metaphorical interpretation to DT. The domains paired here are comer/mangiare (DS) and comerse/mangiarsi (DT). The eating event (DS) involves (a) the physical process of xEATER grinding yFOOD-STUFF into pieces and swallowing it; and (b) the aspect of xEATER savoring yFOOD-STUFF and being nurtured by it. In the physical act of eating, xEATER has dominance and exercises its force over yFOOD-STUFF. This general sense of dominance and force is mapped onto DT and is manifested in the ways exemplified in (1) and (2), and many others. According to CMT, two other properties are observed in each pair of DS and DT. First, DS tends to be more physical and concrete and DT more abstract, and systematic mappings are established between constituent elements in DS and those in DT: xEATER corresponds to the element that destroys and yFOOD-STUFF to the element that is destroyed in DT, as exemplified in (1) and (2). Though the metaphorical extension marker se/si appears by far most frequently with comer/mangiare in the corpora, similar systematic mappings are observed in several other verb pairs, for example, jugar/giocare ‘to play (games)’ and jugarse/giocarsi ‘to jeopardize/risk (life, reputation, etc.)’, perder/perdere ‘to lose (an object)’ and perderse/perdersi ‘to miss out on (an event)’, etc. Thus, this study provides evidence that languages may indeed formally mark metaphor using means available to them.

Keywords: complete consumption value, conceptual metaphor, Italian si/Spanish se, metaphorical extension.

Procedia PDF Downloads 30
691 Allylation of Active Methylene Compounds with Cyclic Baylis-Hillman Alcohols: Why Is It Direct and Not Conjugate?

Authors: Karim Hrrath, Khaled Essalah, Christophe Morell, Henry Chermette, Salima Boughdiri

Abstract:

Among the carbon-carbon bond formation types, allylation of active methylene compounds with cyclic Baylis-Hillman (BH) alcohols is a reliable and widely used method. This reaction is a very attractive tool in the organic synthesis of biological and biodiesel compounds. Thus, in view of an insistent and peremptory request for an efficient and straightforward method for synthesizing the desired product, a thorough analysis of various aspects of the reaction processes is an important task. The product afforded by the reaction of active methylene with BH alcohols depends largely on the experimental conditions, notably on the catalyst properties. All experiments reported that catalysis is needed for this reaction type because of the poor ability of the alcohol hydroxyl group to act as a suitable leaving group. Among the catalysts, several transition-metal-based systems, such as palladium in the presence of acid or base, have been used and considered reliable methods. Furthermore, acid catalysts such as BF3.OEt2, BiX3 (X= Cl, Br, I, (OTf)3), InCl3, Yb(OTf)3, FeCl3, p-TsOH and H-montmorillonite have been employed to activate the C-C bond formation through the alkylation of active methylene compounds. Interestingly, a report of a smooth process demonstrating the ability of 4-dimethylaminopyridine (DMAP) to catalyze the allylation reaction of active methylene compounds with a cyclic Baylis-Hillman (BH) alcohol appeared recently. However, the reaction mechanism remains ambiguous, since the C-allylation process leads to an unexpected product (noted P1), corresponding to a direct allylation instead of a conjugate allylation, which would involve the most electrophilic center according to the effect of the electron-withdrawing CO group. The main objective of the present theoretical study is to better understand the role of the DMAP catalytic activity as well as the process leading to the end-product (P1) for the catalytic reaction of a cyclic BH alcohol with active methylene compounds. For that purpose, we have carried out computations on a set of active methylene compounds varying by R1 and R2 toward the same alcohol, and we have attempted to rationalize the mechanisms thanks to the acid–base approach and conceptual DFT tools such as the chemical potential, hardness, Fukui functions, electrophilicity index and dual descriptor, as these approaches have shown good prediction of reaction products. The present work is then organized as follows: in a first part, some computational details are given, introducing the reactivity indexes used in the present work; then Section 3 is dedicated to the discussion of the prediction of the selectivity and regioselectivity. The paper ends with some concluding remarks. In this work, we have shown, through the DFT method at the B3LYP/6-311++G(d,p) level of theory, that the allylation of active methylene compounds with a cyclic BH alcohol is governed by orbital control. Hence the end-product denoted P1 is generated by direct allylation.
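For orientation, the conceptual-DFT descriptors named above have standard finite-difference working expressions. The summary below lists one common convention (definitions with an extra factor of 1/2 for the hardness also appear in the literature) and is included for reference only, not as output of the present calculations.

```latex
% Conceptual DFT reactivity indices (one common finite-difference convention)
\mu \approx -\tfrac{1}{2}\,(IP + EA)                                % electronic chemical potential
\eta \approx IP - EA                                                % chemical hardness
\omega = \mu^{2} / (2\eta)                                          % global electrophilicity index
f^{+}(\mathbf{r}) = \rho_{N+1}(\mathbf{r}) - \rho_{N}(\mathbf{r})   % reactivity toward nucleophilic attack
f^{-}(\mathbf{r}) = \rho_{N}(\mathbf{r}) - \rho_{N-1}(\mathbf{r})   % reactivity toward electrophilic attack
\Delta f(\mathbf{r}) = f^{+}(\mathbf{r}) - f^{-}(\mathbf{r})        % dual descriptor
```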

Keywords: DFT calculation, gas phase pKa, theoretical mechanism, orbital control, charge control, Fukui function, transition state

Procedia PDF Downloads 284
690 A Study on Accident Result Contribution of Individual Major Variables Using Multi-Body System of Accident Reconstruction Program

Authors: Donghun Jeong, Somyoung Shin, Yeoil Yun

Abstract:

A large-scale traffic accident refers to an accident in which more than three people die or more than thirty people are killed or injured. In order to prevent a large-scale traffic accident from causing a great loss of life, and to establish effective improvement measures, it is important to analyze accident situations in depth and understand the effects of major accident variables on an accident. This study aims to analyze the contribution of individual accident variables to accident results, based on the accurate reconstruction of traffic accidents using the Multi-Body (MB) system of PC-Crash, an accident reconstruction program, and the simulation of each scenario. The MB system of the PC-Crash accident reconstruction program is used for multi-body accident reconstruction that shows motions in diverse directions that could not be approached previously: it designs and reproduces a body form with realistic motions by linking several bodies. Targeting the 'freight truck cargo drop accident around the Changwon Tunnel' that happened in November 2017, this study conducted a simulation of the freight truck cargo drop accident and analyzed the contribution of the individual major accident variables. On the basis of the driving speed, cargo load, and stacking method, six scenarios were devised. The simulation analysis showed that the freight truck was driven at a speed of 118 km/h (speed limit: 70 km/h) right before the accident, carried 196 oil containers with a weight of 7,880 kg (maximum load: 4,600 kg), and was not fully equipped with anchoring equipment that could prevent a drop of cargo. The vehicle speed, cargo load, and cargo anchoring equipment were the major accident variables, and the contribution analysis results for the individual variables are as follows. When the freight truck obeyed only the speed limit, the scattering distance of the oil containers decreased by 15%, and the number of dropped oil containers decreased by 39%. When the freight truck obeyed only the cargo load limit, the scattering distance of the oil containers decreased by 5%, and the number of dropped oil containers decreased by 34%. When the freight truck obeyed both the speed limit and the cargo load limit, the scattering distance of the oil containers fell by 38%, and the number of dropped oil containers fell by 64%. The analysis of each scenario revealed that the overspeed and excessive cargo load of the freight truck contributed to the dispersion of accident damage; even a truck whose cargo was secured against falling would have produced a different type of accident when driven too fast with an excessive cargo load, and when the freight truck obeyed both the speed limit and the cargo load limit, the possibility of causing an accident was lowest.
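To make the contribution analysis concrete, the sketch below shows how the percentage reductions quoted above can be computed against the reconstructed baseline accident. It is not PC-Crash output; the raw outcome values are hypothetical placeholders chosen only so that the printed reductions match the figures reported in the abstract.

```python
# Illustrative sketch (not the study's code): contribution of each accident variable
# as a percentage reduction relative to the reconstructed baseline accident.
# The simulated outcome values below are hypothetical placeholders, not PC-Crash output.
from dataclasses import dataclass

@dataclass
class ScenarioResult:
    name: str
    scatter_distance_m: float     # scattering distance of oil containers
    dropped_containers: int       # number of dropped oil containers

def reduction(baseline: float, scenario: float) -> float:
    """Percentage reduction of an outcome relative to the baseline accident."""
    return 100.0 * (baseline - scenario) / baseline

baseline = ScenarioResult("as occurred (118 km/h, 7,880 kg)", 100.0, 196)
scenarios = [
    ScenarioResult("speed limit only (70 km/h)", 85.0, 120),
    ScenarioResult("cargo load limit only (4,600 kg)", 95.0, 129),
    ScenarioResult("speed and load limits", 62.0, 71),
]

for s in scenarios:
    print(f"{s.name}: scatter -{reduction(baseline.scatter_distance_m, s.scatter_distance_m):.0f}%, "
          f"dropped -{reduction(baseline.dropped_containers, s.dropped_containers):.0f}%")
```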

Keywords: accident reconstruction, large-scale traffic accident, PC-Crash, MB system

Procedia PDF Downloads 183
689 Environmental Impact of a New-Build Educational Building in England: Life-Cycle Assessment as a Method to Calculate Whole Life Carbon Emissions

Authors: Monkiz Khasreen

Abstract:

In the context of the global trend towards reducing new buildings' carbon footprint, the design team is required to make early decisions that have a major influence on embodied and operational carbon. Sustainability strategies should be clear during the early stages of the building design process, as changes made later can be extremely costly. Life-Cycle Assessment (LCA) could be used as the vehicle to carry other tools and processes towards achieving the requested improvement. Although LCA is the 'gold standard' for evaluating buildings from 'cradle to grave', the lack of detail available at the concept design stage makes LCA very difficult, if not impossible, to use as an estimation tool at early stages. Issues related to the transparency and accessibility of information in the building industry are affecting the credibility of LCA studies. A verified database derived from LCA case studies needs to be accessible to researchers, design professionals, and decision makers in order to offer guidance on specific areas of significant impact. This database could be built up from data from multiple sources within a pool of research held in this context. One of the most important factors affecting the reliability of such data is the temporal factor, as building materials, components, and systems are changing rapidly with the advancement of technology, making production more efficient and less environmentally harmful. Recent LCA studies on different building functions, types, and structures are therefore always needed to update databases derived from research and to form case bases for comparison studies. There is also a need to make these studies transparent and accessible to designers. The work in this paper sets out to address this need. The paper presents a life-cycle case study of a new-build educational building in England. The building utilised current construction methods and technologies and is rated BREEAM Excellent. Carbon emissions of different life-cycle stages and of different building materials and components were modelled. Scenario and sensitivity analyses were used to estimate the future of new educational buildings in England. The study attempts to provide an indicator for the early design stages of similar buildings. The carbon dioxide emissions of this case-study building, when normalised by floor area, lie towards the lower end of the range of worldwide data reported in the literature. Sensitivity analysis shows that life-cycle assessment results are highly sensitive to assumptions made at the design stage about the future, such as changes in the electricity generation mix over time, refurbishment processes and recycling. The analyses also show that large savings in carbon dioxide emissions can result from very small changes at the design stage.
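As a rough illustration of the normalisation and sensitivity steps described above, the sketch below computes a whole-life carbon intensity per square metre under different assumed rates of grid decarbonisation. It is not the paper's model, and every number in it is a hypothetical placeholder rather than a result for the case-study building.

```python
# Illustrative sketch (not the paper's model): whole-life carbon normalised by floor
# area, with a simple sensitivity test on the decarbonisation of grid electricity.
# All numbers are hypothetical placeholders, not case-study results.
embodied_kgco2e = 2_500_000         # cradle-to-completion emissions (hypothetical)
annual_operational_kgco2e = 90_000  # yearly in-use emissions (hypothetical)
study_period_years = 60
gross_floor_area_m2 = 5_000

def whole_life_per_m2(grid_decarbonisation_rate: float) -> float:
    """Whole-life carbon intensity (kgCO2e/m2) with operational emissions shrinking
    by a fixed fraction per year as the electricity mix decarbonises."""
    operational = sum(
        annual_operational_kgco2e * (1.0 - grid_decarbonisation_rate) ** year
        for year in range(study_period_years)
    )
    return (embodied_kgco2e + operational) / gross_floor_area_m2

for rate in (0.0, 0.01, 0.03):
    print(f"grid decarbonisation {rate:.0%}/yr -> {whole_life_per_m2(rate):.0f} kgCO2e/m2")
```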

Keywords: architecture, building, carbon dioxide, construction, educational buildings, England, environmental impact, life-cycle assessment

Procedia PDF Downloads 99
688 Towards a More Inclusive Society: A Study on the Assimilation and Integration of the Migrant Children in Kerala

Authors: Arun Perumbilavil Anand

Abstract:

For the past few years, the state of Kerala has been witnessing a large inflow of migrant workers from other states of the country, which emerged as a result of demographic transition and Gulf emigration. The in-migration patterns in Kerala have changed over time, with migrants who have a longer residence history bringing their families to the state, thereby making the process more complicated and divergent in its approach. These developments have led to an increase in the young migrant population, at least in some parts of the state, which has raised doubts and questions about their future in the host society. At this juncture, the study examines the factors that are associated with the assimilation and wellbeing of migrant children in the society of Kerala. As one of its objectives, the study also analyzed the influence and role played by educational institutions (both public and private) in meeting the needs and aspirations of both the children and their parents. The study gains significance as it tries to identify various impediments that hinder the cognitive skill formation and behaviour patterns of migrant children in the host society. Data and Methodology: The study is based on primary data collected through a series of interviews and interactions held with parents, children, and teachers of different educational institutions, both public and private. The primary survey also made use of research techniques such as observation, in-depth interviews, and the case study method. The study was conducted in schools in the Kanjikode area of the Palakkad district in Kerala. The findings of the study are based on a survey conducted in four schools covering 40 migrant children. Findings: The study found that the majority of the children have fully integrated and assimilated into the host society. The influence of the peer group was quite visible in giving stimulus to the assimilation process. Most of the children do not have any emotional or cultural sentiments attached to their state of origin, and they consider Kerala as their ‘home state’ and the local language (Malayalam) as their ‘mother tongue'. The study also found that the existing education system in the host society fails to meet the needs and aspirations of migrants as well as those of their children. On a comparative scale, private schools have to some extent succeeded in fulfilling the special requirements of migrant children. An interesting point the study identified is that the children of migrants show better health conditions and wellbeing than the natives, which is usually described as an epidemiologic paradox. As a concluding remark, the study recommends incorporating the concept of inclusive education into the education system of the state, giving due emphasis to those who are at higher risk of being excluded or marginalized, along with fostering increased interaction between diverse groups.

Keywords: assimilation, Kerala, migrant children, well-being

Procedia PDF Downloads 153
687 Marketization of Higher Education in the UK and Its Impacts on Teaching Practitioners

Authors: Hossein Rezaie

Abstract:

Academic institutions, especially universities, have been known as cradles of learning that teach great thinkers while creating the type of knowledge that is supposed to be bereft of utilitarian motives. Nonetheless, it seems that such intellectual centers have entered into a competition with each other for attracting the attention of potential clients. The traditional values of (higher) education, such as nurturing criticality and fostering intellectuality in students, have been replaced with strategic planning, quality assurance, performance assessment, and academic audits. Not being immune from the whims and wishes of marketization, the system of higher education in the UK has been recalibrated by policy makers to address the demand and supply of student education, academic research and other university activities on the basis of monetary factors. As an immediate example in this vein, the Russell Group in the UK, which is comprised of 24 leading UK research universities, has explicitly expressed its policy on its official website as follows: ‘Russell Group universities are global businesses competing for staff, students and funding with the best in the world’. Furthermore, certain attempts have been made to corporatize the system of HE, manifested in the remodeling of university governing bodies on corporate lines and the development of measurement scales for indicating the performance of teaching practitioners. Nevertheless, it seems that such structural changes in policies toward the system of HE have a bearing on the practices of practitioners and educators as well as on the identity of students, who are the customers of educational services. The effects of marketization have been examined mainly in terms of students’ perceptions and motivation, institutional policies and university management. However, the teaching practitioner side seems to be an under-studied area with regard to any changes in its expectations, satisfaction and perception of professional identity in the aftermath of introducing market-driven values into HE in the UK. As a result, this research aims to investigate the possible outcomes of market-driven values on the practitioner side of HE in the UK and finally seeks to address the following research questions: 1- How is the change in the mission of HE in the UK reflected in institutional documents? 1-A- How is the change of mission represented in job adverts? 1-B- How is the change of mission represented in university prospectuses? 2- How are teaching practitioners represented regarding their roles and obligations in the prospectuses and job adverts published by UK HE institutions? In order to address these questions, the researcher will analyze 30 prospectuses and job adverts published by Russell Group universities, taking Critical Discourse Analysis as the point of departure and using the analytical methods of genre analysis and Systemic Functional Linguistics to probe into the generic features and the representation of participants, in this case teaching practitioners, in the selected corpus.

Keywords: higher education, job advertisements, marketization of higher education, prospectuses

Procedia PDF Downloads 228
686 Pickering Dry Emulsion System for Dissolution Enhancement of Poorly Water Soluble Drug (Fenofibrate)

Authors: Nitin Jadhav, Pradeep R. Vavia

Abstract:

Poorly water-soluble drugs are difficult to develop for oral drug delivery as they demonstrate poor and variable bioavailability because of their poor solubility and dissolution in GIT fluid. Nowadays, lipid-based formulations, especially the self-microemulsifying drug delivery system (SMEDDS), are found to be the most effective technique. With all its impressive advantages, the need for a high amount of surfactant (50%-80%) is the major drawback of SMEDDS. A high concentration of synthetic surfactant is known to cause irritation in the GIT and also to interfere with the function of intestinal transporters, causing changes in drug absorption. Surfactant may also reduce drug activity and subsequently bioavailability due to the enhanced entrapment of the drug in micelles. In chronic treatment these issues are very conspicuous due to the long exposure. In addition, liquid self-microemulsifying systems also suffer from stability issues. Recently, a novel approach of solid-stabilized micro- and nanoemulsions (Pickering emulsions) has shown very admirable properties such as high stability, the absence of or a very low concentration of surfactant, and easy conversion into a dry form. Here we explore a Pickering dry emulsion system for dissolution enhancement of an anti-lipemic, extremely poorly water-soluble drug (fenofibrate). The oil for emulsion preparation was selected mainly on the basis of the drug's solubility: Captex 300 showed the highest solubility for fenofibrate and was hence selected as the oil phase. Span 20 was selected to improve the wetting property of silica (the solid stabilizer). The emulsion formed with silica and Span 20 as stabilizers at a 2.5:1 ratio (silica:Span 20) was found to be very stable, with a particle size of 410 nm. The prepared emulsion was then spray dried, and the resulting microcapsules were evaluated by an in-vitro dissolution study and an in-vivo pharmacodynamic study and characterized by DSC, XRD, FTIR, SEM, optical microscopy, etc. The in-vitro study exhibits significant dissolution enhancement of the formulation (85% in 45 minutes) compared to the plain drug (14% in 45 minutes). The in-vivo study (Triton-based hyperlipidaemia model) exhibits a significant reduction in triglycerides and cholesterol with the formulation compared to the plain drug, indicating an increase in fenofibrate bioavailability. The DSC and XRD studies exhibit a loss of crystallinity of the drug in microcapsule form. The FTIR study exhibits chemical stability of fenofibrate. The SEM and optical microscopy studies exhibit spherical globules coated with solid particles.

Keywords: Captex 300, fenofibrate, Pickering dry emulsion, silica, Span 20, stability, surfactant

Procedia PDF Downloads 485
685 Improving Binding Selectivity in Molecularly Imprinted Polymers from Templates of Higher Biomolecular Weight: An Application in Cancer Targeting and Drug Delivery

Authors: Ben Otange, Wolfgang Parak, Florian Schulz, Michael Alexander Rubhausen

Abstract:

The feasibility of extending the use of the molecular imprinting technique to complex biomolecules is demonstrated in this research. This technique is promising in diverse applications in areas such as drug delivery, diagnosis of diseases, catalysis, and impurity detection, as well as the treatment of various complications. While molecularly imprinted polymers (MIPs) remain robust for the synthesis of molecules with remarkable binding sites that have high affinities to specific molecules of interest, extending their use to complex biomolecules has so far proved difficult. This work reports on the successful synthesis of MIPs from complex proteins: BSA, transferrin, and MUC1. We show in this research that, despite the heterogeneous binding sites and higher conformational flexibility of the chosen proteins, relying on their respective epitopes and motifs rather than the whole template produces highly sensitive and selective MIPs for specific molecular binding. Introduction: Proteins are vital in most biological processes, ranging from cell structure and structural integrity to complex functions such as transport and immunity in biological systems. Unlike other imprinting templates, proteins have heterogeneous binding sites in their complex long-chain structure, which makes their imprinting marred by challenges. In addressing this challenge, our attention is directed toward targeted delivery, which uses molecular imprinting on the particle surface so that these particles may recognize overexpressed proteins on the target cells. Our goal is thus to make surfaces of nanoparticles that specifically bind to the target cells. Results and Discussion: Using epitopes of the BSA and MUC1 proteins and motifs with conserved receptors of transferrin as the respective templates for MIPs, a significant improvement in MIP sensitivity to the binding of complex protein templates was noted. Through Fluorescence Correlation Spectroscopy (FCS) measurements of the size of the protein corona after incubation of the synthesized nanoparticles with proteins, we noted a high affinity of the MIPs for binding their respective complex proteins. In addition, quantitative analysis of the hard corona using SDS-PAGE showed that only the specific protein was strongly bound on the respective MIPs when incubated with similar concentrations of the protein mixture. Conclusion: Our findings have shown that the merits of MIPs can be extended to complex molecules of higher biomolecular mass. As such, the unique merits of the technique, including high sensitivity and selectivity, relative ease of synthesis, production of materials with higher physical robustness, and higher stability, can be extended to more templates that were previously not suitable candidates despite their abundance and use within the body.

Keywords: molecularly imprinted polymers, specific binding, drug delivery, high biomolecular mass-templates

Procedia PDF Downloads 33
684 Gender Policies and Political Culture: An Examination of the Canadian Context

Authors: Chantal Maille

Abstract:

This paper is about gender-based analysis plus (GBA+), an intersectional gender policy used in Canada to assess the impact of policies and programs for men and women from different origins. It looks at Canada’s political culture to explain the nature of its gender policies. GBA+ is defined as an analysis method that makes it possible to assess the eventual effects of policies, programs, services, and other initiatives on women and men of different backgrounds because it takes account of gender and other identity factors. The ‘plus’ in the name serves to emphasize that GBA+ goes beyond gender to include an examination of a wide range of other related identity factors, such as age, education, language, geography, culture, and income. The point of departure for GBA+ is that women and men are not homogeneous populations and gender is never the only factor in defining a person’s identity; rather, it interacts with factors such as ethnic origin, age, disabilities, where the person lives, and other aspects of individual and social identity. GBA+ takes account of these factors and thus challenges notions of similarity or homogeneity within populations of women and men. Comparative analysis based on sex and gender may serve as a gateway to studying a given question, but women, men, girls, and boys do not form homogeneous populations. In the 1990s, intersectionality emerged as a new feminist framework. The popularity of the notion of intersectionality corresponds to a time when, in hindsight, the damage done to minoritized groups by state disengagement policies in concert with the global intensification of neoliberalism, and vice versa, can be measured. Although GBA+ constitutes a form of intersectionalization of GBA, it must be understood that the two frameworks do not spring from a similar logic. Intersectionality first emerged as a dynamic analysis of differences between women that was oriented toward change and social justice, whereas GBA is a technique developed by state feminists in a context of analyzing governmental policies and aiming to promote equality between men and women. It can nevertheless be assumed that there might be interest in such a policy and program analysis grid that is decentred from gender and offers enough flexibility to take account of a set of inequalities. In terms of methodology, the research is supported by a qualitative analysis of governmental documents about GBA+ in Canada. Research findings identify links between Canadian gender policies and the country's political culture. In Canada, diversity has been taken into account as an element at the basis of gendered analysis of public policies since 1995. The GBA+ adopted by the government of Canada conveys an opening to intersectionality and a sensitivity to multiculturalism. The Canadian Multiculturalism Act, adopted in 1988, proposes to recognize the fact that multiculturalism is a fundamental characteristic of the Canadian identity and heritage and constitutes an invaluable resource for the future of the country. In conclusion, Canada’s distinct political culture can be associated with the specific nature of its gender policies.

Keywords: Canada, gender-based analysis, gender policies, political culture

Procedia PDF Downloads 206
683 Development of Social Competence in the Preparation and Continuing Training of Adult Educators

Authors: Genute Gedviliene, Vidmantas Tutlys

Abstract:

The aim of this paper is to reveal the deployment and development of social competence in higher education programmes of adult education and in the continuing training and competence development of andragogues. The paper compares how the issues of cooperation and communication in the learning and teaching processes are treated in the study programmes and in the courses of continuing training of andragogues. Theoretical and empirical research methods were combined for the analysis. The following methods were applied: 1) Literature and document analysis helped to highlight communication and cooperation as fundamental phenomena of social competence and its importance for adult education in the context of digitalization and globalization. Research studies on the development of social competence in the field of andragogy were also analyzed, as well as studies on the place and weight of social competence in the overall competence profile of the andragogue. 2) The empirical study is based on a questionnaire survey method. The survey population consists of 240 students of bachelor and master degree studies of andragogy in Lithuania and 320 representatives of the different bodies and institutions involved in the continuing training and professional development of adult educators in Lithuania. The themes of the survey questionnaire were defined on the basis of the findings of the literature review and included the following: 1) the respondents' opinions on the role and place of social competence in the work of the andragogue; 2) the respondents' opinions on the role and place of the development of social competence in the curricula of higher education studies and continuing training courses; 3) judgements on the implications of the higher education studies and courses of continuing training for the development of social competence and its deployment in the work of the andragogue. Data analysis disclosed a wide range of ways and modalities of the deployment and development of social competence in the preparation and continuing training of adult educators. Social competence is important for the students and adult education providers not only as an auxiliary capability for communication and the transfer of information, but also as the outcome of collective learning leading to the development of new capabilities applied by the learners in the learning process, their professional field of adult education and their social life. Equally, social competence is necessary for effective adult education activities not only as an auxiliary capacity applied in the teaching process, but also as a potential for the improvement, development and sustainability of didactic competence and know-how in this field. The students of higher education programmes in the field of adult education treat social competence as an important generic capacity for the work of the adult educator, whereas adult education providers discern the concrete issues of applying social competence in the different processes of adult education, starting from curriculum design and ending with the assessment of learning outcomes.

Keywords: adult education, andragogues, social competence, curriculum

Procedia PDF Downloads 126
682 Advanced Palliative Aquatics Care Multi-Device AuBento for Symptom and Pain Management by Sensorial Integration and Electromagnetic Fields: A Preliminary Design Study

Authors: J. F. Pollo Gaspary, F. Peron Gaspary, E. M. Simão, R. Concatto Beltrame, G. Orengo de Oliveira, M. S. Ristow Ferreira, J.C. Mairesse Siluk, I. F. Minello, F. dos Santos de Oliveira

Abstract:

Background: Although palliative care policies and services have been developed, research in this area continues to lag. An integrated model of palliative care is suggested, which includes complementary and alternative services aimed at improving the well-being of patients and their families. The palliative aquatics care multi-device (AuBento) uses several electromagnetic techniques to decrease pain and promote well-being through relaxation and interaction among patients, specialists, and family members. Aim: The scope of this paper is to present a preliminary design study of a device capable of exploring the various existing theories on the biomedical application of magnetic fields. This will be achieved by standardizing clinical data collection with sensory integration and adding new therapeutic options to develop advanced palliative aquatics care, innovating in symptom and pain management. Methods: The research methodology was based on the Work Package methodology for the development of projects, separating the activities into seven different Work Packages. The theoretical basis was established through an integrative literature review according to the specific objectives of each Work Package and provided a broad analysis, which, together with the multiplicity of proposals and the interdisciplinarity of the research team involved, generated consistent and understandable concepts in the biomedical application of magnetic fields for palliative care. Results: The AuBento ambience was conceived with restricted electromagnetic exposure (avoiding data collection bias) and sensory integration (allowing relaxation associated with hydrotherapy, music therapy, and chromotherapy, or a flotation-tank-like experience). The device has a multipurpose configuration enabling classic or exploratory options for the biomedical application of magnetic fields at the researcher's discretion. Conclusions: Several patients in diverse therapeutic contexts may benefit from the use of magnetic fields or fluids, thus validating the stimulus for clinical research in this area. A device in controlled and multipurpose environments may contribute to standardizing research and exploring new theories. Future research may demonstrate the possible benefits of the aquatics care multi-device AuBento in improving well-being and symptom control in palliative care patients and their families.

Keywords: advanced palliative aquatics care, magnetic field therapy, medical device, research design

Procedia PDF Downloads 113
681 Most Recent Lifespan Estimate for the Itaipu Hydroelectric Power Plant Computed by Using Borland and Miller Method and Mass Balance in Brazil, Paraguay

Authors: Anderson Braga Mendes

Abstract:

The Itaipu Hydroelectric Power Plant is located on the Paraná River, which forms a natural boundary between Brazil and Paraguay; thus, the facility is shared by both countries. Itaipu is the biggest hydroelectric generator in the world and provides a clean and renewable electrical energy supply for 17% of Brazil and 76% of Paraguay. The plant started generation in 1984. It has 20 Francis turbines and an installed capacity of 14,000 MW. Its historic generation record occurred in 2016 (103,098,366 MWh), and from the beginning of its operation until the last day of 2016 the plant generated a total of 2,415,789,823 MWh. The distinct sedimentologic aspects of the drainage area of the Itaipu Power Plant, from its upstream stretch (Porto Primavera and Rosana dams) to downstream (the Itaipu dam itself), were taken into account in order to best estimate the increase/decrease in sediment yield, using data from 2001 to 2016. Such data are collected through a network of 14 automatic sedimentometric stations managed by the company itself and operating on an hourly basis, covering an area of around 136,000 km² (92% of the incremental drainage area of the undertaking). Since 1972, a series of lifespan studies for the Itaipu Power Plant have been made, the first assessed by Hans Albert Einstein at the time of the feasibility studies for the enterprise. From that date onwards, eight further studies were made over the last 44 years, aiming to confer more precision upon the estimates based on more updated data sets. From the analysis of each monitoring station, strong increasing tendencies in sediment yield over the last 14 years were clearly noticed, mainly in the Iguatemi, Ivaí, São Francisco Falso and Carapá Rivers, the latter situated in Paraguay, whereas the others are entirely in Brazilian territory. Five lifespan scenarios considering different sediment yield tendencies were simulated with the aid of the software packages SEDIMENT and DPOSIT, both developed by the author of the present work. Both packages thoroughly follow the Borland & Miller methodology (the empirical area-reduction method). The soundest of the five scenarios under analysis indicated a lifespan forecast of 168 years, with the reservoir only 1.8% silted by the end of 2016, after 32 years of operation. Besides, the mass balance in the reservoir (water inflows minus outflows) between 1986 and 2016 shows that 2% of the whole Itaipu lake is silted nowadays. Owing to the convergence of both results, which were acquired using different methodologies and independent input data, it is worth concluding that the mathematical modeling is satisfactory and calibrated, thus assigning credibility to this most recent lifespan estimate.
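For readers unfamiliar with the bookkeeping behind such estimates, the sketch below shows a simple sediment mass balance that converts an annual sediment load into a silted volume fraction. It is not the SEDIMENT/DPOSIT software (the Borland & Miller area-reduction method additionally distributes deposits over elevation zones), and all input values are hypothetical placeholders rather than Itaipu monitoring data.

```python
# Minimal sketch (not the study's SEDIMENT/DPOSIT software): a naive sediment
# mass balance giving the silted fraction of a reservoir after n years.
# All values below are hypothetical placeholders, not Itaipu monitoring data.
reservoir_volume_m3 = 29e9          # total reservoir volume (placeholder)
annual_sediment_load_t = 20e6       # mean annual sediment inflow in tonnes (placeholder)
trap_efficiency = 0.90              # fraction of inflowing sediment retained by the lake
deposit_bulk_density_t_m3 = 1.2     # tonnes of consolidated deposit per cubic metre

years_operating = 32
annual_deposit_m3 = annual_sediment_load_t * trap_efficiency / deposit_bulk_density_t_m3
silted_fraction = years_operating * annual_deposit_m3 / reservoir_volume_m3

print(f"deposited per year: {annual_deposit_m3 / 1e6:.0f} hm3")
print(f"silted after {years_operating} years: {silted_fraction:.1%}")
# Note: the Borland & Miller method also predicts *where* the deposits settle
# (area reduction over elevation zones), which a bulk balance cannot capture.
```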

Keywords: Borland and Miller method, hydroelectricity, Itaipu Power Plant, lifespan, mass balance

Procedia PDF Downloads 257
680 Finite Element Analysis of the Drive Shaft and Jacking Frame Interaction in Micro-Tunneling Method: Case Study of Tehran Sewerage

Authors: B. Mohammadi, A. Riazati, P. Soltan Sanjari, S. Azimbeik

Abstract:

The ever-increasing growth of civic demands on one hand, and the urban constraints on newly established infrastructure on the other, force engineering committees to apply non-conflicting methods in order to optimize results. One of these optimized procedures for establishing main sewerage networks is the pipe jacking and micro-tunneling method. The raw information and research are based on the slurry micro-tunneling project of the Tehran main sewerage network executed by the KAYSON Co. The 4,985-meter route of the project, located near Azadi Square and some of the most vital arteries of Tehran, has currently reached 45% physical progress. The boring machine is made by Herrenknecht, and the diameters of the concrete-polymer pipes used are 1600 and 1800 millimeters. Placing and excavating several shafts in the ground and direct tunnel boring between the axes of these shafts are requirements of micro-tunneling. The placement of the shafts should take into account the hydraulic circumstances, civic conditions, site geography, traffic considerations, etc. The profile length has to be converted into many short segment lines so that the angles generated between the segments are located at the manhole centers. Each segment line between two consecutive drive and reception shafts defines the jack location, driving angle and path straightness; thus, the diversity of these angles causes a variety of jack positions in the shaft. The jacking frame fixing conditions and the direction of the associated dynamic load produce various patterns of stress and strain distribution and create fatigue in the shaft wall and the soil surrounding the shaft. This pattern diversification deforms the shaft wall and causes unbalanced subsidence and alteration of the pipe jacking stress contour. This research is based on experiments from Tehran's west sewerage plan and numerical analysis of the interaction between the soil around the shaft, the shaft walls and the jacking frame direction; finally, the suitable or unsuitable location of the pipe jacking shaft will be determined.

Keywords: underground structure, micro-tunneling, fatigue analysis, dynamic-soil–structure interaction, underground water, finite element analysis

Procedia PDF Downloads 302
679 A Comparative Human Rights Analysis of Deprivation of Citizenship as a Counterterrorism Instrument: An Evaluation of Belgium

Authors: Louise Reyntjens

Abstract:

In response to Islamic-inspired terrorism and the growing trend of foreign fighters, European governments are increasingly relying on the deprivation of citizenship as a security tool. This development fits within a broader securitization of immigration, where the terrorist threat is perceived as emanating from abroad. As a result, immigration law became more and more ‘securitized’. The European migration crisis has reinforced this trend. This research evaluates the deprivation of citizenship from a human rights perspective. For this, the author selected four European countries for a comparative study: Belgium, France, the United Kingdom and Sweden. All these countries face similar social and security issues, vitalizing (the debate on) deprivation of citizenship as a counterterrorism tool. Yet, they adopt very different approaches: the United Kingdom positions itself on the repressive side of the spectrum. Sweden, on the other hand, also ‘securitized’ its immigration policy after the recent terrorist attack in Stockholm but remains on the tolerant side of the spectrum. Belgium and France are situated in between. This contribution evaluates the deprivation of citizenship in Belgium. Belgian law has provided the possibility to strip someone of their Belgian citizenship since 1919. However, the provision long remained a dead letter. The 2015 Charlie Hebdo attacks in Paris sparked a series of legislative changes, elevating the deprivation measure to a key security tool in Belgian law. Yet, the measure raises profound human rights issues. Firstly, it infringes the right to private and family life. As provided by Article 8(2) of the European Convention on Human Rights (ECHR), this right can be limited if necessary for national security and public safety. Serious questions can, however, be raised about the necessity for national security of depriving an individual of their citizenship. Behaviour giving rise to this measure will generally be governed by criminal law. From a security perspective, criminal detention will thus already remove the individual from society. Moreover, simply stripping individuals of their citizenship and deporting them constitutes a failure of criminal law’s responsibility to prosecute criminal behaviour. Deprivation of citizenship is also discriminatory, because it differentiates, without a legitimate reason, between those liable to deprivation and those who are not. It thereby installs a secondary class of citizens, violating the European Court of Human Rights’ principle that no distinction can be tolerated between children on the basis of the status of their parents. If followed by expulsion, deprivation also seriously jeopardizes the right to life and the prohibition of torture. This contribution explores the human rights consequences of citizenship deprivation as a security tool in Belgium. It also offers a critical view on its efficacy for protecting national security.

Keywords: Belgium, counterterrorism strategies, deprivation of citizenship, human rights, immigration law

Procedia PDF Downloads 105
678 The Usage of Negative Emotive Words in Twitter

Authors: Martina Katalin Szabó, István Üveges

Abstract:

In this paper, the usage of negative emotive words is examined on the basis of a large Hungarian Twitter database via NLP methods. The data are analysed from a gender point of view, as well as for changes in language usage over time. The term negative emotive word refers to those words that, on their own, without context, have semantic content that can be associated with negative emotion, but in particular cases, they may function as intensifiers (e.g. rohadt jó ’damn good’) or as sentiment expressions with positive polarity despite their negative prior polarity (e.g. brutális, ahogy ez a férfi rajzol ’it’s awesome (lit. brutal) how this guy draws’). Based on the findings of several authors, the same phenomenon can be found in other languages, so it is probably a language-independent feature. For the present analysis, 67,783 tweets were collected: 37,818 tweets (19,580 written by females and 18,238 written by males) in 2016 and 48,344 (18,379 written by females and 29,965 written by males) in 2021. The goal of the research was to build two datasets that are comparable from the viewpoint of semantic change as well as gender specificities. An exhaustive lexicon of Hungarian negative emotive intensifiers was also compiled (containing 214 words). After basic preprocessing steps, the tweets were processed by ‘magyarlanc’, a toolkit written in Java for the linguistic processing of Hungarian texts. Then, the frequency and collocation features of all these words in our corpus were automatically analyzed (via the analysis of the parts of speech and sentiment values of the co-occurring words). Finally, the results of all four subcorpora were compared. Some of the main outcomes of our analyses are as follows. There are almost four times fewer cases in the male corpus than in the female corpus in which the negative emotive intensifier modified a negative-polarity word in the tweet (e.g., damn bad). At the same time, male authors used these intensifiers more frequently to modify a positive-polarity or a neutral word (e.g., damn good and damn big). The results also pointed out that, in contrast to female authors, male authors used these words much more frequently as positive-polarity words themselves (e.g., brutális, ahogy ez a férfi rajzol ’it’s awesome (lit. brutal) how this guy draws’). We also observed that male authors use significantly fewer types of emotive intensifiers than female authors, and the frequency proportion of the words is more balanced in the female corpus. As for changes in language usage over time, some notable differences in the frequency and collocation features of the words examined were identified: some of the words collocate with more positive words in the second subcorpus than in the first, which points to a semantic change of these words over time.
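To illustrate the kind of collocation analysis described above, the sketch below counts the polarity of the word immediately following a negative emotive intensifier in a tokenised subcorpus. It is not the study's pipeline (tagging there was done with magyarlanc), and the lexicon samples and token format are hypothetical.

```python
# Illustrative sketch (not the study's pipeline): counting how often negative emotive
# intensifiers precede positive-, negative- or neutral-polarity heads in a tokenised
# corpus. The lexicon entries and the token format below are hypothetical.
from collections import Counter

intensifiers = {"rohadt", "brutális", "durva"}    # small sample of the 214-word lexicon
positive_lex = {"jó", "szép", "ügyes"}            # hypothetical polarity lexica
negative_lex = {"rossz", "szörnyű", "béna"}

def head_polarity(token: str) -> str:
    if token in positive_lex:
        return "positive"
    if token in negative_lex:
        return "negative"
    return "neutral"

def collocation_profile(tokenised_tweets: list[list[str]]) -> Counter:
    """Count the polarity of the word immediately following an intensifier."""
    profile = Counter()
    for tokens in tokenised_tweets:
        for i, tok in enumerate(tokens[:-1]):
            if tok.lower() in intensifiers:
                profile[head_polarity(tokens[i + 1].lower())] += 1
    return profile

# Usage: build one profile per subcorpus (female/male, 2016/2021) and compare them.
sample = [["rohadt", "jó", "ez", "a", "film"], ["brutális", "rossz", "nap"]]
print(collocation_profile(sample))    # Counter({'positive': 1, 'negative': 1})
```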

Keywords: gender differences, negative emotive words, semantic changes over time, twitter

Procedia PDF Downloads 182
677 Data Analysis Tool for Predicting Water Scarcity in Industry

Authors: Tassadit Issaadi Hamitouche, Nicolas Gillard, Jean Petit, Valerie Lavaste, Celine Mayousse

Abstract:

Water is a fundamental resource for industry. It is taken from the environment either from municipal distribution networks or from various natural water sources such as the sea, the ocean, rivers, aquifers, etc. Once used, water is discharged into the environment or reprocessed at the plant or at treatment plants. These withdrawals and discharges have a direct impact on natural water resources. The impacts can concern the quantity of water available or the quality of the water used, or they can be more complex to measure and less direct, such as the health of the population downstream of the watercourse, for example. Based on the analysis of data (meteorology, river characteristics, physicochemical substances), we wish to predict water stress episodes and anticipate prefectoral decrees that can impact plant performance, propose improvement solutions, help industrialists in their choice of location for a new plant, visualize possible interactions between companies to optimize exchanges and encourage the pooling of water treatment solutions, and set up circular economies around the issue of water. The development of a system for the collection, processing, and use of data related to water resources requires the functional constraints specific to such data to be made explicit. Thus the system has to be able to store a large amount of data from sensors (the main type of data in plants and their environment). In addition, manufacturers need 'near-real-time' processing of information in order to be able to make the best decisions (to be rapidly notified of an event that would have a significant impact on water resources). Finally, the visualization of data must be adapted to its temporal and geographical dimensions. In this study, we set up an infrastructure centered on the TICK application stack (Telegraf, InfluxDB, Chronograf, and Kapacitor), a set of loosely coupled but tightly integrated open-source projects designed to manage huge amounts of time-stamped information. The software architecture is coupled with the Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology. The robust architecture and the methodology used have demonstrated their effectiveness on the case study of learning the level of a river over a 7-day horizon. The management of water and of the activities within the plants, which depend on this resource, should be considerably improved thanks, on the one hand, to the learning that allows the anticipation of periods of water stress, and on the other hand, to the information system that is able to warn decision-makers with alerts created from the formalization of prefectoral decrees.
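As an illustration of the 'river level at a 7-day horizon' case study, the sketch below trains a regression model on lagged level and rainfall features. It is not the project's model; the feature and file names are hypothetical, and in the real system the series would be read from InfluxDB (TICK stack) rather than a CSV file.

```python
# Illustrative sketch (not the project's model): predicting a river level 7 days ahead
# from lagged level and rainfall features. "river_daily.csv" and the column names
# (level_m, rain_mm) are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("river_daily.csv", parse_dates=["date"]).set_index("date")

horizon = 7
for lag in (1, 2, 3, 7, 14):                       # autoregressive and rainfall lags
    df[f"level_lag{lag}"] = df["level_m"].shift(lag)
    df[f"rain_lag{lag}"] = df["rain_mm"].shift(lag)
df["target"] = df["level_m"].shift(-horizon)       # level 7 days in the future
df = df.dropna()

split = int(len(df) * 0.8)                         # time-ordered split, no shuffling
features = [c for c in df.columns if "lag" in c]
X, y = df[features], df["target"]

model = GradientBoostingRegressor().fit(X.iloc[:split], y.iloc[:split])
pred = model.predict(X.iloc[split:])
print("MAE at 7-day horizon:", mean_absolute_error(y.iloc[split:], pred))
```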

Keywords: data mining, industry, machine learning, shortage, water resources

Procedia PDF Downloads 105
676 Food Design as a University-Industry Collaboration Project: An Experience Design on Controlling Chocolate Consumption and Long-Term Eating Behavior

Authors: Büşra Durmaz, Füsun Curaoğlu

Abstract:

While technology-oriented developments in the modern world change our perceptions of time and speed, they also strain our food consumption patterns, such as taking pleasure in what we eat and eating slowly. The habit of eating quickly and hastily causes not only the feeling of not appreciating the taste of the food eaten but also the inability to postpone the feeling of satiety and, therefore, many health problems. The consumers of the new century, who live under uncontrolled time pressure, take support from small snacks as a source of happiness and pleasure in the little time intervals they can spare. At this point, chocolate in particular has been a source of both happiness and pleasure for its consumers for hundreds of years. However, when the portions eaten cannot be controlled, a pleasure food such as chocolate can cause both health problems and many emotional problems, especially the feeling of guilt. Fast food, that is, food that is prepared and consumed quickly, has also been increasing rapidly around the world in recent years. This study covers the process and results of a chocolate design based on user experience, from a university-industry cooperation project carried out within the scope of Eskişehir Technical University graduation projects. The aim of the project is a creative product design that will enable the user to experience chocolate consumption with a healthy eating approach. Concepts such as pleasure, satiety, and taste are discussed, and a case study based on the qualitative research paradigm was structured around different research processes within a user-oriented design approach: a literature review on main topics such as mouth anatomy, tongue structure, taste, the functions of eating in the brain, hormones and chocolate; a survey with 151 people; semi-structured face-to-face interviews with 7 people during the experience design process; video analysis; and project diaries. The research concluded that melting in the mouth is the experience users prefer in order to spread pleasure-based chocolate eating over a long time while keeping portions healthy. In this context, research on the production of sketches, mock-ups and prototypes of the product is included in the study. As a result, a product packaging design was made that supports the active role of the senses where consumption begins, such as sight, smell and hearing, in order to consume chocolate by melting and to actively stimulate the salivary glands, the most important stimulus, so as to provide healthy, long-term, pleasure-based consumption.

Keywords: chocolate, eating habit, pleasure, saturation, sense of taste

Procedia PDF Downloads 59
675 AI-Enabled Smart Contracts for Reliable Traceability in the Industry 4.0

Authors: Harris Niavis, Dimitra Politaki

Abstract:

The manufacturing industry has been collecting vast amounts of data for monitoring product quality thanks to advances in the ICT sector, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data due to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining increasing ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables the secure storage and exchange of data between stakeholders. On the other hand, AI techniques are more and more used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts to enable automation and transparency in the data exchange, coupled with anomaly detection algorithms to enable reliable data ingestion in the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism uniquely combines artificial intelligence models to effectively detect unusual values such as outliers and extreme deviations in the data coming from them. Specifically, autoregressive integrated moving average (ARIMA), long short-term memory (LSTM) and dense-based autoencoders, as well as generative adversarial network (GAN) models, are used to detect both point and collective anomalies. Towards the goal of preserving the privacy of industries' information, the smart contracts employ techniques to ensure that only anonymized pointers to the actual data are stored on the ledger while sensitive information remains off-chain. In the same spirit, blockchain technology guarantees the security of the data storage through strong cryptography, as well as the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network actors. The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the ground for the deployment of smart contracts and APIs to expose the functionality to the end-users. The results of this work demonstrate that such a system can increase the quality of the end-products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify records about quality through the entire production chain and to take advantage of the multitude of monitoring records in their databases.
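
The sketch below illustrates one way such an ingestion gate could work, using only the ARIMA member of the model family named above. It is a minimal, hedged example rather than the paper's implementation: the data are synthetic, the threshold is an arbitrary 3-sigma rule, and `submit_to_smart_contract` is a hypothetical placeholder for the Hyperledger Fabric call.

```python
# Minimal sketch (assumptions, not the paper's implementation): gate sensor
# readings with an ARIMA one-step forecast before they are anonymised and
# anchored on the ledger. Readings whose residual exceeds a threshold are
# flagged as point anomalies and withheld from the smart contract call.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
history = 20.0 + np.cumsum(rng.normal(0, 0.1, 500))              # past sensor values
new_readings = history[-1] + np.array([0.05, -0.12, 4.0, 0.08])  # 4.0 is an outlier

fitted = ARIMA(history, order=(1, 1, 1)).fit()
threshold = 3 * np.std(fitted.resid)                              # simple 3-sigma gate

for value in new_readings:
    forecast = fitted.forecast(steps=1)[0]
    if abs(value - forecast) > threshold:
        print(f"anomaly: {value:.2f} (expected ~{forecast:.2f}) -> withheld")
    else:
        # submit_to_smart_contract(...) is a hypothetical placeholder call
        print(f"ok: {value:.2f} -> submit_to_smart_contract(hash(value))")
    fitted = fitted.append([value])                                # update model state
```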

Keywords: blockchain, data quality, industry 4.0, product quality

Procedia PDF Downloads 164
674 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows running several processes on the same machine in an isolated manner, solving the incompatibility of running dependencies and the previous security concern, thus greatly simplifying aggregation/embedding implementations by just deploying a microservice container on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.
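
A minimal sketch of the embedding idea follows; it is not the i2kit prototype, and the service names, instance counts, and machine identifiers are invented for illustration. Each pair of communicating instances is co-located on one machine so it can use the localhost interface, matching the a1-b1 / a2-b2 example above.

```python
# Minimal sketch (not the i2kit tool itself): given instance counts and a set
# of communication edges, co-locate ("embed") the i-th instances of
# communicating services on the same machine so they can talk over localhost.
from collections import defaultdict

services = {"A": 2, "B": 2, "C": 3}          # service -> number of instances (assumed)
embed_edges = [("A", "B")]                   # pairs that need a fast channel (assumed)

machines = defaultdict(list)                 # machine id -> containers
placed = set()

for left, right in embed_edges:
    pairs = min(services[left], services[right])
    for i in range(pairs):
        machines[f"m-{left}{right}-{i}"] += [f"{left}{i}", f"{right}{i}"]
    placed.update([left, right])

# Remaining (non-embedded) services get one machine per instance.
for name, count in services.items():
    if name in placed:
        continue
    for i in range(count):
        machines[f"m-{name}-{i}"].append(f"{name}{i}")

for machine, containers in machines.items():
    print(machine, "->", containers)
```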

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 180
673 Preparation of Allyl BODIPY for the Click Reaction with Thioglycolic Acid

Authors: Chrislaura Carmo, Luca Deiana, Mafalda Laranjo, Abilio Sobral, Armando Cordova

Abstract:

Photodynamic therapy (PDT) is currently used for the treatment of malignancies and premalignant tumors. It is based on the uptake of a photosensitizing molecule (PS) which, when excited by light at a certain wavelength, reacts with oxygen and generates oxidizing species (radicals, singlet oxygen, triplet species) in target tissues, leading to cell death. BODIPY (4,4-difluoro-4-bora-3a,4a-diaza-s-indacene) derivatives are emerging as important candidate photosensitizers for photodynamic therapy of cancer cells due to their high triplet quantum yield. Today these dyes are relevant molecules in photovoltaic materials and fluorescent sensors. This study demonstrates that BODIPY can be covalently linked to thioglycolic acid through a click reaction. Thiol−ene click chemistry has become a powerful synthesis method in materials science and surface modification. The design of biobased allyl-terminated precursors with high renewable carbon content for the construction of thiol-ene polymer networks is essential for sustainable development and green chemistry. The work aims to synthesize the BODIPY (10-(4-(allyloxy) phenyl)-2,8-diethyl-5,5-difluoro-1,3,7,9-tetramethyl-5H-dipyrrolo[1,2-c:2',1'-f] [1,3,2] diazaborinin-4-ium-5-uide) and to perform the click reaction with thioglycolic acid. BODIPY was synthesized by the condensation reaction between an aldehyde and a pyrrole in dichloromethane, followed by in situ complexation with BF3·OEt2 in the presence of a base. It was then functionalized with allyl bromide to introduce the double bond and thus be able to carry out the click reaction. The thiol−ene click was performed using DMPA (2,2-dimethoxy-2-phenylacetophenone) as a photo-initiator in the presence of UV light (320–500 nm) in DMF at room temperature for 24 hours. Compounds were characterized by standard analytical techniques, including UV-Vis spectroscopy, 1H, 13C and 19F NMR, and mass spectrometry. The results of this study will be important for linking BODIPY to polymers through the thiol group, offering a diversity of applications and functionalizations. This new molecule can be tested as a third-generation photosensitizer, in which the dye is targeted to cells, mainly cancer cells, by antibodies or nanocarriers, for PDT and Photodynamic Antimicrobial Chemotherapy (PACT). According to our studies, it was possible to observe a click reaction between allyl BODIPY and thioglycolic acid. Our team will also test the reaction with other thiol groups for comparison. Further, we will perform the click reaction of BODIPY with a natural polymer linked to a thiol group. The above compounds will be tested in PDT assays on various lung cancer cell lines.

Keywords: BODIPY, click reaction, thioglycolic acid, allyl, thiol-ene click

Procedia PDF Downloads 109
672 The Social Aspects of Code-Switching in Online Interaction: The Case of Saudi Bilinguals

Authors: Shirin Alabdulqader

Abstract:

This research aims to investigate the concept of code-switching (CS) between English and Arabic and the CS practices of Saudi online users through a translanguaging (TL) lens, for a more inclusive view of the nature of the data from the study. It employs Digitally Mediated Communication (DMC), specifically the WhatsApp and Twitter platforms, in order to understand how users employ online resources to communicate with others on a daily basis. This project looks beyond language and considers the multimodal affordances (visual and audio means) that interlocutors utilise in their online communicative practices to shape their online social existence. This exploratory study is based on a data-driven interpretivist epistemology, as it aims to understand how meaning (reality) is created by individuals within different contexts. The project used a mixed-method approach, combining qualitative and quantitative approaches. In the former, data were collected from online chats and interview responses, while in the latter a questionnaire was employed to understand the frequency of, and relations between, the participants' linguistic and non-linguistic practices and their social behaviours. The participants were eight bilingual Saudi nationals (both men and women, aged between 20 and 50 years old) who interacted with others online. These participants provided their online interactions, participated in an interview and responded to a questionnaire. The study data were gathered from 194 WhatsApp chats and 122 tweets. These data were analysed and interpreted at three levels: conversational turn taking and CS; the linguistic description of the data; and CS and persona. The project contributes to the emerging field of analysing online Arabic data systematically, and to the fields of multimodality and bilingual sociolinguistics. The findings are reported for each of the three levels. For conversational turn taking, the CS analysis revealed that CS was used to accomplish negotiation and develop meaning in the conversation. With regard to the linguistic practices in the CS data, the majority of the code-switched words were content morphemes. The third level of data interpretation is CS and its relationship with identity; two types of identity were indexed: absolute identity and contextual identity. The study contributes to the DMC literature and bridges some of the existing gaps. Most of the findings, if not all, support the TL notion that multiliteracy is one's ability to decode multimodal communication and that this multimodality contributes to meaning, whether the online affordances are used by monolinguals or multilinguals and whether they are perceived by specific generations or by any online multiliterates. The study provides the linguistic features of CS utilised by Saudi bilinguals and determines the relationship between these features and the contexts in which they appear.

Keywords: social media, code-switching, translanguaging, online interaction, Saudi bilinguals

Procedia PDF Downloads 112
671 Optimization of the Feedstock Supply of an Oilseeds Conversion Unit for Biofuel Production in West Africa: A Comparative Study of the Supply of Jatropha curcas and Balanites aegyptiaca Seeds

Authors: Linda D. F. Bambara, Marie Sawadogo

Abstract:

Jatropha curcas (jatropha) is the plant that has been the most studied for biofuel production in West Africa. There exist, however, other plants, such as Balanites aegyptiaca (balanites), that have been targeted as a potential feedstock for biofuel production. This biomass could be an alternative feedstock for the production of straight vegetable oil (SVO) at costs lower than jatropha-based SVO production costs. This study aims firstly to determine, through an MILP model, the optimal organization that minimizes the costs of the oilseed supply of two biomass conversion units (BCU) exploiting jatropha seeds and balanites seeds, respectively. Secondly, the study aims to carry out a comparative study of the costs obtained for each BCU. The model was implemented on two theoretical case studies built on the basis of common practices in Burkina Faso, and two scenarios were carried out for each case study. In Scenario 1, three pre-processing locations ('at the harvesting area', 'at the gathering points', 'at the BCU') are possible. In Scenario 2, only one location ('at the BCU') is possible. For each biomass, the system studied is the upstream supply chain (harvesting, transport and pre-processing (drying, dehulling, depulping)), including cultivation (for jatropha). The model optimizes the area of land to be exploited based on the productivity of the studied plants and the material losses that may occur during harvesting and the supply of the BCU. It then defines the configuration of the logistics network allowing an optimal supply of the BCU, taking into account the most common means of transport in West African rural areas. For the two scenarios, the results of the implementation showed that the total area exploited for balanites (1807 ha) is 4.7 times greater than the total area exploited for jatropha (381 ha). In both case studies, the pre-processing location 'at the harvesting area' was always chosen for Scenario 1. As the balanites trees were not planted and because the first harvest of the jatropha seeds took place 4 years after planting, the cost price of the jatropha seeds at the BCU, without the pre-processing costs, was about 430 XOF/kg. This cost is 3 times higher than that of balanites, which is 140 XOF/kg. After the first year of harvest, i.e. 5 years after planting, and assuming that the yield remains constant, the same cost price is about 200 XOF/kg for jatropha. This cost is still 1.4 times greater than that of balanites. The transport cost of the balanites seeds is about 120 XOF/kg. This cost is similar for the jatropha seeds. However, when the pre-processing is located at the BCU, i.e. in Scenario 2, the transport cost of the balanites seeds is 1200 XOF/kg. This cost is 6 times greater than the transport cost of jatropha, which is 200 XOF/kg. These results show that the cost price of the balanites seeds at the BCU can be competitive compared to that of jatropha if the pre-processing is located at the harvesting area.
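
To make the optimization concrete, the sketch below formulates a toy version of the location decision as a MILP with the PuLP library. The cost figures are illustrative assumptions, not the study's data; the real model additionally optimizes land area, losses, and the means of transport.

```python
# Minimal sketch (illustrative numbers, not the study's data): choose the
# pre-processing location that minimises the per-kg supply cost of seeds to
# the BCU. Transport is assumed cheaper when dehulling/depulping happens
# upstream, because less mass is moved. Requires the PuLP package.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

locations = ["harvest_area", "gathering_point", "bcu"]
preproc_cost = {"harvest_area": 25, "gathering_point": 30, "bcu": 20}        # XOF/kg, assumed
transport_cost = {"harvest_area": 120, "gathering_point": 160, "bcu": 400}   # XOF/kg, assumed

prob = LpProblem("seed_supply", LpMinimize)
x = {loc: LpVariable(f"use_{loc}", cat=LpBinary) for loc in locations}

prob += lpSum(x[loc] * (preproc_cost[loc] + transport_cost[loc]) for loc in locations)
prob += lpSum(x.values()) == 1          # exactly one pre-processing location

prob.solve()
chosen = [loc for loc in locations if x[loc].value() == 1][0]
print(chosen, value(prob.objective), "XOF/kg")
```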

Keywords: Balanites aegyptiaca, biomass conversion, Jatropha curcas, optimization, post-harvest operations

Procedia PDF Downloads 316
670 An Adaptable Semi-Numerical Anisotropic Hyperelastic Model for the Simulation of High Pressure Forming

Authors: Daniel Tscharnuter, Eliza Truszkiewicz, Gerald Pinter

Abstract:

High-quality surfaces of plastic parts can be achieved in a very cost-effective manner using in-mold processes, where e.g. scratch-resistant or high-gloss polymer films are pre-formed and subsequently receive their support structure by injection molding. The pre-forming may be done by high-pressure forming. In this process, a polymer sheet is heated and subsequently formed into the mold by pressurized air. Due to the heat transfer to the cooled mold, the polymer temperature drops below its glass transition temperature. This ensures that the deformed microstructure is retained after depressurizing, giving the sheet its final formed shape. The development of a forming process relies heavily on the experience of engineers and trial-and-error procedures. Repeated mold design and testing cycles are, however, both time- and cost-intensive. It is, therefore, desirable to study the process using reliable computer simulations. Through simulations, the construction of the mold and the effect of various process parameters, e.g. temperature levels, non-uniform heating or timing and magnitude of pressure, on the deformation of the polymer sheet can be analyzed. Detailed knowledge of the deformation is particularly important in the forming of polymer films with integrated electro-optical functions. Care must be taken in the placement of devices, sensors and electrical and optical paths, which are far more sensitive to deformation than the polymers. Reliable numerical prediction of the deformation of the polymer sheets requires sophisticated material models. Polymer films are often either transversely isotropic or orthotropic due to molecular orientations induced during manufacturing. The anisotropic behavior affects the resulting strain field in the deformed film. For example, parts of the same shape but different strain fields may be created by varying the orientation of the film with respect to the mold. The numerical simulation of the high-pressure forming of such films thus requires material models that can capture the nonlinear anisotropic mechanical behavior. There are numerous commercial polymer grades for the engineers to choose from when developing a new part. The effort required for comprehensive material characterization may be prohibitive, especially when several materials are candidates for a specific application. We, therefore, propose a class of models for compressible hyperelasticity, which may be determined from basic experimental data and which can capture key features of the mechanical response. Invariant-based hyperelastic models with a reduced number of invariants are formulated in a semi-numerical way, such that the models are determined from a single uniaxial tensile test for isotropic materials, or two tensile tests in the principal directions for transversely isotropic or orthotropic materials. The simulation of the high-pressure forming of an orthotropic polymer film is finally done using an orthotropic formulation of the hyperelastic model.
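
For orientation, one common structure for such an invariant-based, reduced-invariant model is sketched below. This is an illustrative form under standard assumptions (isochoric/volumetric split, a single preferred direction a0 for the transversely isotropic case), not the authors' exact formulation.

```latex
% Illustrative structure only (not the authors' exact model): a compressible,
% invariant-based strain energy with a reduced set of invariants for a
% transversely isotropic film with orientation direction a_0.
\[
  W(\mathbf{C}) \;=\; W_{\mathrm{iso}}\!\left(\bar{I}_1\right)
                 \;+\; W_{\mathrm{aniso}}\!\left(I_4\right)
                 \;+\; U(J),
\]
\[
  \bar{I}_1 = J^{-2/3}\,\operatorname{tr}\mathbf{C},\qquad
  I_4 = \mathbf{a}_0\cdot\mathbf{C}\,\mathbf{a}_0,\qquad
  J = \sqrt{\det\mathbf{C}} .
\]
% W_iso would be calibrated semi-numerically from a uniaxial tensile test and
% W_aniso from a second test in the principal (orientation) direction;
% U(J) penalises volume change in the compressible formulation.
```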

Keywords: hyperelastic, anisotropic, polymer film, thermoforming

Procedia PDF Downloads 604
669 Blood Microbiome in Different Metabolic Types of Obesity

Authors: Irina M. Kolesnikova, Andrey M. Gaponov, Sergey A. Roumiantsev, Tatiana V. Grigoryeva, Dilyara R. Khusnutdinova, Dilyara R. Kamaldinova, Alexander V. Shestopalov

Abstract:

Background. Obese patients have unequal risks of metabolic disorders. It is accepted to distinguish between metabolically healthy obesity (MHO) and metabolically unhealthy obesity (MUHO). MUHO patients have a high risk of metabolic disorders, insulin resistance, and diabetes mellitus. Among other things, the gut microbiota also contributes to the development of metabolic disorders in obesity. Obesity is accompanied by significant changes in the gut microbial community. In turn, bacterial translocation from the intestine is the basis for the formation of the blood microbiome. The aim was to study the features of the blood microbiome in patients with various metabolic types of obesity. Patients, materials, methods. The study included 116 healthy donors and 101 obese patients. Depending on the metabolic type of obesity, the obese patients were divided into subgroups with MHO (n=36) and MUHO (n=53). Quantitative and qualitative assessment of the blood microbiome was based on metagenomic analysis. Blood samples were used to isolate DNA and perform sequencing of the variable v3-v4 region of the 16S rRNA gene. Alpha diversity indices (Simpson index, Shannon index, Chao1 index, phylogenetic diversity, and the number of observed operational taxonomic units) were calculated. Moreover, we compared taxa (phyla, classes, orders, and families) between patient groups in terms of isolation frequency and the taxon's share of the total bacterial DNA pool. Results. In patients with MHO, the alpha-diversity characteristics of the blood microbiome were like those of healthy donors. However, MUHO was associated with an increase in all diversity indices. The main phyla of the blood microbiome were Bacteroidetes, Firmicutes, Proteobacteria, and Actinobacteria. Cyanobacteria, TM7, Thermi, Verrucomicrobia, Chloroflexi, Acidobacteria, Planctomycetes, Gemmatimonadetes, and Tenericutes were found to be less significant phyla of the blood microbiome. The phyla Acidobacteria, TM7, and Verrucomicrobia were more often isolated in blood samples of patients with MUHO compared with healthy donors. Obese patients had a decrease in some taxonomic ranks (Bacilli, Caulobacteraceae, Barnesiellaceae, Rikenellaceae, Williamsiaceae). These changes appear to be related to the increased diversity of the blood microbiome observed in obesity. An increase in Lachnospiraceae, Succinivibrionaceae, Prevotellaceae, and S24-7 was noted for MUHO patients, which is apparently explained by an increase in intestinal permeability. Conclusion. The blood microbiome differs between obese patients and healthy donors at the class, order, and family levels. Moreover, the nature of the changes is determined by the metabolic type of obesity. MUHO was linked to increased diversity of the blood microbiome. This appears to be due to increased microbial translocation from the intestine and non-intestinal sources.
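
For reference, the sketch below shows how alpha-diversity indices of the kind compared here can be computed from a single sample's OTU count vector; the counts are hypothetical and the snippet is illustrative, not the study's bioinformatics pipeline.

```python
# Minimal sketch (hypothetical counts): alpha-diversity indices of the kind
# compared between donor groups, computed from one sample's OTU count vector.
import numpy as np

otu_counts = np.array([120, 80, 40, 25, 10, 5, 5, 1])   # reads per OTU, assumed
p = otu_counts / otu_counts.sum()

shannon = -np.sum(p * np.log(p))                # Shannon index H'
simpson = 1.0 - np.sum(p ** 2)                  # Simpson diversity (1 - D)
observed = np.count_nonzero(otu_counts)         # observed OTUs
singletons = np.sum(otu_counts == 1)
doubletons = np.sum(otu_counts == 2)
# Chao1 richness estimate; the +1 correction term avoids division by zero.
chao1 = observed + singletons * (singletons - 1) / (2 * (doubletons + 1))

print(f"Shannon={shannon:.2f}  Simpson={simpson:.2f}  Observed={observed}  Chao1={chao1:.1f}")
```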

Keywords: blood microbiome, blood bacterial DNA, obesity, metabolically healthy obesity, metabolically unhealthy obesity

Procedia PDF Downloads 145
668 Envy and Schadenfreude Domains in a Model of Neurodegeneration

Authors: Hernando Santamaría-García, Sandra Báez, Pablo Reyes, José Santamaría-García, Diana Matallana, Adolfo García, Agustín Ibañez

Abstract:

The study of moral emotions (i.e., Schadenfreude and envy) is critical to understand the ecological complexity of everyday interactions between cognitive, affective, and social cognition processes. Most previous studies in this area have used correlational imaging techniques and framed Schadenfreude and envy as monolithic domains. Here, we profit from a relevant neurodegeneration model to disentangle the brain regions engaged in three dimensions of Schadenfreude and envy: deservingness, morality, and legality. We tested 20 patients with behavioral variant frontotemporal dementia (bvFTD), 24 patients with Alzheimer’s disease (AD), as a contrastive neurodegeneration model, and 20 healthy controls on a novel task highlighting each of these dimensions in scenarios eliciting Schadenfreude and envy. Compared with the AD and control groups, bvFTD patients obtained significantly higher scores on all dimensions for both emotions. Interestingly, the legal dimension for both envy and Schadenfreude elicited higher emotional scores than the deservingness and moral dimensions. Furthermore, correlational analyses in bvFTD showed that higher envy and Schadenfreude scores were associated with greater deficits in social cognition, inhibitory control, and behavior. Brain anatomy findings (restricted to bvFTD and controls) confirmed differences in how these groups process each dimension. Schadenfreude was associated with the ventral striatum in all subjects. Also, in bvFTD patients, increased Schadenfreude across dimensions was negatively correlated with regions supporting social-value rewards, mentalizing, and social cognition (frontal pole, temporal pole, angular gyrus and precuneus). In all subjects, all dimensions of envy positively correlated with the volume of the anterior cingulate cortex, a region involved in processing unfair social comparisons. By contrast, in bvFTD patients, the intensified experience of envy across all dimensions was negatively correlated with a set of areas subserving social cognition, including the prefrontal cortex, the parahippocampus, and the amygdala. Together, the present results provide the first lesion-based evidence for the multidimensional nature of the emotional experiences of envy and Schadenfreude. Moreover, this is the first demonstration of a selective exacerbation of envy and Schadenfreude in bvFTD patients, probably triggered by atrophy to social cognition networks. Our results offer new insights into the mechanisms subserving complex emotions and moral cognition in neurodegeneration, paving the way for groundbreaking research on their interaction with other cognitive, social, and emotional processes.

Keywords: social cognition, moral emotions, neuroimaging, frontotemporal dementia

Procedia PDF Downloads 261
667 The Effect of Manure Loaded Biochar on Soil Microbial Communities

Authors: T. Weber, D. MacKenzie

Abstract:

This paper describes the use of an advanced simulation environment for electronic systems (microcontroller, operational amplifiers, and FPGA). The simulation was used for the behaviour of non-linear dynamic systems with the required observer structure, working with parallel real-time simulation based on a state-space representation. The proposed model was also used for electrodynamic effects, including ionising effects and eddy current distribution. With the proposed method, it is possible to calculate the spatial distribution of the electromagnetic fields in such systems in real time. The spatial temperature distribution may also be used for further purposes. With the above system, the uncertainties and disturbances may be determined. This provides the estimation of more precise system states for the required system and, additionally, the estimation of the ionising disturbances that arise due to radiation effects in space systems. The results have also shown that a system can be developed specifically for the real-time calculation (estimation) of the radiation effects alone. Electronic systems can be damaged by impacts with charged particle flux in a space or radiation environment. A Total Ionising Dose (TID) of 1 Gy and Single-Event Transient (SET)-free operation up to 50 MeVcm²/mg may assure certain functions. Single-Event Latch-up (SEL) results from the placement of several transistors in the shared substrate of an integrated circuit; ionising radiation can activate an additional parasitic thyristor. This short circuit between semiconductor elements can destroy the device without protection and countermeasures. Single-Event Burnout (SEB), on the other hand, increases the current between drain and source of a MOSFET and destroys the component in a short time. A Single-Event Gate Rupture (SEGR) can also destroy the dielectric of a semiconductor. In order to be able to react to these processes, it must be determined within a short time whether ionising radiation and dose are present. For this purpose, sensors may be used for the realistic evaluation of the diffusion and ionising effects on the test system. A Peltier element is used for the evaluation of the dynamic temperature increase (dT/dt), from which a measure of the ionisation processes, and thus of the radiation, is derived. In addition, a piezo element may be used to record highly dynamic vibrations and oscillations caused by impacts of charged particle flux. All available sensors shall also be used to calibrate the spatial distributions. From the measured values and the known locations of the sensors, the entire spatial distribution can be calculated retroactively or more accurately. From this information, the type of ionisation and its direct effect on the systems can be determined, and preventive processes, up to shutdown, can be activated. The results show the possibility of performing higher-quality and faster simulations independently of the space system and radiation environment. The paper additionally gives an overview of the diffusion effects and their mechanisms.
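
A generic sketch of the observer structure mentioned above is given below: a discrete-time Luenberger observer whose measurement residual can flag an unexpected (e.g., ionising) disturbance. The matrices, gain, and threshold are assumed values for illustration, not the system described in the abstract.

```python
# Minimal sketch (generic, assumed matrices): a discrete-time Luenberger
# observer estimating the plant state in real time from a measured output
# (e.g. a Peltier-element temperature), so that an unexpected residual can
# flag an external (ionising) disturbance.
import numpy as np

A = np.array([[0.95, 0.10], [0.0, 0.90]])   # assumed discrete-time dynamics
B = np.array([[0.0], [0.05]])
C = np.array([[1.0, 0.0]])                  # only the first state is measured
L = np.array([[0.4], [0.3]])                # observer gain (assumed, chosen for stability)

x = np.array([[0.0], [0.0]])                # true plant state
x_hat = np.zeros_like(x)                    # observer estimate
u = np.array([[1.0]])                       # constant input

for k in range(50):
    disturbance = 0.5 if k == 25 else 0.0   # injected "radiation" event
    x = A @ x + B @ u + np.array([[disturbance], [0.0]])
    y = C @ x
    # Observer update: predict, then correct with the measurement residual.
    residual = y - C @ x_hat
    x_hat = A @ x_hat + B @ u + L @ residual
    if abs(residual[0, 0]) > 0.3:
        print(f"step {k}: residual {residual[0, 0]:.2f} -> possible ionising event")
```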

Keywords: cattle, biochar, manure, microbial activity

Procedia PDF Downloads 85
666 To Live on the Margins: A Closer Look at the Social and Economic Situation of Illegal Afghan Migrants in Iran

Authors: Abdullah Mohammadi

Abstract:

Years of prolonged war in Afghanistan have produced one of the largest refugee and migrant populations in the contemporary world. During this continuous unrest, which began in the 1970s (with a military coup, a Marxist revolution and the subsequent invasion by the USSR), over one-third of the population migrated to neighboring countries, especially Pakistan and Iran. After the Soviet Army withdrawal in 1989, a new wave of conflicts emerged between rival Afghan groups, and this led to new refugees. The Taliban period also created its own refugees. During all these years, the I.R. of Iran has been one of the main destinations of Afghan refugees and migrants. At first, due to the political situation after the Islamic Revolution, the Iranian government did not restrict the entry of Afghan refugees. Those who arrived first in Iran received ID cards and had access to education and healthcare services. But in the 1990s, due to economic and social concerns, Iran's policy towards Afghan refugees and migrants changed. The government has tried to identify and register Afghans in Iran and to limit their access to some services and jobs. Unfortunately, there are few studies on the situation of Afghan refugees and migrants in Iran, and we have a dim and vague picture of them. Of the few studies done on this group, none focus on the situation of illegal Afghan migrants in Iran. Here, we studied the social and economic aspects of the lives of illegal Afghan migrants in Iran. In doing so, we interviewed 24 illegal Afghan migrants in Iran. The method applied for analyzing the data is thematic analysis. For the interviews, we chose family heads (17 men and 7 women). According to the findings, the socio-economic situation of illegal Afghan migrants in Iran is very undesirable. Its main cause is the marginalization of this group, which results from government policies towards Afghan migrants. Most of the illegal Afghan migrants work in unskilled and inferior jobs and live in rented houses on the margins of cities and villages. None of them could buy a house or vehicle due to legal restrictions. Based on their income, they form one of the lowest, most unprivileged groups in the society. Socially, they face many problems in their everyday life: social insecurity, harassment and violence, misuse of their situation by police and people, lack of education opportunities, etc. In general, we may conclude that illegal Afghan migrants have adapted little to Iranian society. They face severe limitations compared to legal migrants and refugees and have no opportunity for upward social mobility. However, they have managed some strategies to face these difficulties, including seeking financial and emotional help from family and friendship networks, sending one of the family members to a third country (mostly to European countries), and establishing self-administered schools for children (schools which are illegal and run by educated Afghan youth).

Keywords: illegal Afghan migrants, marginalization, social insecurity, upward social mobility

Procedia PDF Downloads 299
665 Corporate Performance and Balance Sheet Indicators: Evidence from Indian Manufacturing Companies

Authors: Hussain Bohra, Pradyuman Sharma

Abstract:

This study highlights the significance of balance sheet indicators for corporate performance in the case of Indian manufacturing companies. Balance sheet indicators show the actual financial health of a company; they help external investors to choose the right company for their investment and also help external financing agencies to extend finance to manufacturing companies more easily. The period of study is 2000 to 2014 for 813 manufacturing companies for which continuous data are available throughout the study period. The data are collected from the PROWESS database maintained by the Centre for Monitoring Indian Economy Pvt. Ltd. Panel data methods like the fixed effect and random effect methods are used for the analysis. The Likelihood Ratio test, Lagrange Multiplier test and Hausman test results prove the suitability of the fixed effect model for the estimation. Return on assets (ROA) is used as the proxy to measure corporate performance. ROA is the best proxy to measure corporate performance, as it is already used by most of the authors who have worked on corporate performance; it shows the return on firms' long-term investment projects. Different ratios like the current ratio, debt-equity ratio, receivable turnover ratio and solvency ratio have been used as proxies for the balance sheet indicators. Other firm-specific variables like firm size and sales are used as control variables in the model. From the empirical analysis, it was found that all selected financial ratios have a significant and positive impact on corporate performance. Firm sales and firm size were also found to have a significant and positive impact on corporate performance. To check the robustness of the results, the sample was divided on the basis of the different ratios: firms having high and low debt-equity ratios, firms having high and low current ratios, firms having high and low receivable turnover ratios, and firms having high and low solvency ratios. We find that the results are robust for all types of companies with different levels of the selected balance sheet indicator ratios. The results for the other variables are also in the same line as for the whole sample. These findings confirm that balance sheet indicators play a significant role in corporate performance in India. The findings of this study have implications for corporate managers, who should monitor the different ratios to maintain the minimum expected level of performance. Apart from that, they should also maintain adequate sales and total assets to improve corporate performance.
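
As an illustration of the estimation approach, the sketch below runs a fixed-effects panel regression of ROA on a few balance sheet ratios using the linearmodels package. The data are synthetic and the variable names are hypothetical; this is not the PROWESS sample or the study's exact specification.

```python
# Minimal sketch (synthetic data, not the PROWESS sample): a fixed-effects
# panel regression of ROA on balance sheet ratios, of the kind the study
# estimates. Requires the `linearmodels` package.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(0)
firms, years = 50, 10
idx = pd.MultiIndex.from_product(
    [range(firms), range(2000, 2000 + years)], names=["firm", "year"]
)
df = pd.DataFrame({
    "current_ratio": rng.normal(1.5, 0.3, firms * years),
    "debt_equity": rng.normal(1.0, 0.4, firms * years),
    "log_sales": rng.normal(8.0, 1.0, firms * years),
}, index=idx)
# Synthetic ROA with firm effects baked in, so FE estimation is meaningful.
firm_effect = np.repeat(rng.normal(0, 2, firms), years)
df["roa"] = (firm_effect + 1.2 * df["current_ratio"]
             - 0.8 * df["debt_equity"] + 0.5 * df["log_sales"]
             + rng.normal(0, 0.5, firms * years))

fe = PanelOLS(df["roa"], df[["current_ratio", "debt_equity", "log_sales"]],
              entity_effects=True).fit()
print(fe.params)
```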

Keywords: balance sheet, corporate performance, current ratio, panel data method

Procedia PDF Downloads 246
664 Ethical Artificial Intelligence: An Exploratory Study of Guidelines

Authors: Ahmad Haidar

Abstract:

The rapid adoption of Artificial Intelligence (AI) technology holds unforeseen risks like privacy violation, unemployment, and algorithmic bias, triggering research institutions, governments, and companies to develop principles of AI ethics. The extensive and diverse literature on AI lacks an analysis of the evolution of the principles developed in recent years. This paper has two fundamental purposes. The first is to provide insights into how the principles of AI ethics have changed recently, including concepts like risk management and public participation; in doing so, a NOISE (Needs, Opportunities, Improvements, Strengths, & Exceptions) analysis is presented. The second is to offer a framework for building ethical AI linked to sustainability. This research adopts an explorative approach, more specifically an inductive approach, to address the theoretical gap. Consequently, this paper tracks the different efforts towards 'trustworthy AI' and 'ethical AI', compiling a list of 12 documents released from 2017 to 2022. The analysis of this list unifies the different approaches toward trustworthy AI in two steps: first, splitting the principles into two categories, technical and net benefit, and second, testing the frequency of each principle, providing the different technical principles that may be useful for stakeholders considering the lifecycle of AI, or what is known as sustainable AI. Sustainable AI is the third wave of AI ethics and a movement to drive change throughout the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, and governance) in the direction of greater ecological integrity and social fairness. In this vein, the results suggest transparency, privacy, fairness, safety, autonomy, and accountability as recommended technical principles to include in the lifecycle of AI. Another contribution is to capture the different bases that aid the process of AI for sustainability (e.g., towards the sustainable development goals). The results indicate data governance, do no harm, human well-being, and risk management as crucial AI-for-sustainability principles. This study's last contribution is to clarify how the principles evolved. To illustrate, in 2018 the Montreal Declaration mentioned principles of well-being, autonomy, privacy, solidarity, democratic participation, equity, and diversity. In 2021, notions emerged from the European Commission proposal, including public trust, public participation, scientific integrity, risk assessment, flexibility, benefit and cost, and interagency coordination. The study design strengthens the validity of previous studies. Yet, we advance knowledge in trustworthy AI by considering recent documents, linking principles with sustainable AI and AI for sustainability, and shedding light on the evolution of guidelines over time.
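
The frequency step described above can be illustrated with a few lines of code; the document set and principle labels below are toy placeholders, not the 12 guidelines analysed in the paper.

```python
# Minimal sketch (toy document list, not the 12 analysed guidelines): counting
# how often each technical principle appears across a set of documents, the
# kind of frequency test used to rank principles.
from collections import Counter

documents = {
    "doc_2018_a": {"transparency", "privacy", "fairness", "accountability"},
    "doc_2019_b": {"transparency", "safety", "autonomy", "privacy"},
    "doc_2021_c": {"fairness", "risk management", "public participation", "transparency"},
    "doc_2022_d": {"privacy", "safety", "accountability", "data governance"},
}

frequency = Counter(p for principles in documents.values() for p in principles)
for principle, count in frequency.most_common():
    share = count / len(documents)
    print(f"{principle:>20}: {count} / {len(documents)} documents ({share:.0%})")
```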

Keywords: artificial intelligence, AI for sustainability, declarations, framework, regulations, risks, sustainable AI

Procedia PDF Downloads 74