Search results for: deformable part model
842 Maternal Exposure to Bisphenol A and Its Association with Birth Outcomes
Authors: Yi-Ting Chen, Yu-Fang Huang, Pei-Wei Wang, Hai-Wei Liang, Chun-Hao Lai, Mei-Lien Chen
Abstract:
Background: Bisphenol A (BPA) is commonly used in consumer products such as the inner coatings of cans and polycarbonate bottles. BPA is considered an endocrine-disrupting substance (ED) that interferes with normal human hormones and may adversely affect human health. Pregnant women and fetuses are groups particularly susceptible to endocrine-disrupting substances, and prenatal BPA exposure has been shown to reach the fetus through the placenta. It is therefore important to evaluate the potential health risk of fetal exposure to BPA during pregnancy. The aims of this study were (1) to determine the urinary concentration of BPA in pregnant women and (2) to investigate the association between BPA exposure during pregnancy and birth outcomes. Methods: This study recruited 117 pregnant women and their fetuses from 2012 to 2014 through the Taiwan Maternal-Infant Cohort Study (TMICS). Maternal urine samples were collected in the third trimester, and questionnaires were used to collect the participants' socio-demographic characteristics, eating habits and medical conditions. Information about the birth outcomes of the fetus was obtained from medical records. For chemical analysis, BPA concentrations in urine were determined by off-line solid-phase extraction-ultra-performance liquid chromatography coupled with a Q-Tof mass spectrometer, and the urinary concentrations were adjusted for creatinine. The association between maternal BPA concentrations and birth outcomes was estimated using a logistic regression model. Results: The detection rate of BPA was 99%, with concentrations ranging from 0.16 to 46.90 μg/g creatinine. The mean (SD) BPA level was 5.37 (6.42) μg/g creatinine. The mean ± SD body weight, body length, head circumference, chest circumference and gestational age at birth were 3105.18 ± 339.53 g, 49.33 ± 1.90 cm, 34.16 ± 1.06 cm, 32.34 ± 1.37 cm and 38.58 ± 1.37 weeks, respectively.
After stratifying exposure into two groups at the median, pregnant women in the higher-exposure group had an increased risk of lower body weight (OR=0.57, 95%CI=0.271-1.193), smaller chest circumference (OR=0.70, 95%CI=0.335-1.47) and shorter gestational age at birth (OR=0.46, 95%CI=0.191-1.114) in their newborns. However, none of the associations between BPA concentration and birth outcomes reached statistical significance (p < 0.05). Conclusions: This study presents prenatal BPA exposure profiles of pregnant women and infants in northern Taiwan. Women with higher BPA concentrations tended to give birth to newborns with lower body weight, smaller chest circumference or shorter gestational age. More data will be included to verify these results, and this report will also present the predictors of BPA concentrations for pregnant women.
Keywords: bisphenol A, birth outcomes, biomonitoring, prenatal exposure
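The odds-ratio arithmetic behind results like those above can be sketched from a 2×2 table with a Wald confidence interval; the counts below are hypothetical illustrations, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
       a = exposed cases,   b = exposed non-cases,
       c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts for illustration only (not the study's data):
# higher-exposure group: 12 low-birth-weight, 46 normal
# lower-exposure group:  18 low-birth-weight, 41 normal
or_, (lo, hi) = odds_ratio_ci(12, 46, 18, 41)
```

A confidence interval that spans 1 (as here and in the reported results) indicates the association does not reach significance at the 5% level.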
Procedia PDF Downloads 144
841 Application of Thermoplastic Microbioreactor to the Single Cell Study of Budding Yeast to Decipher the Effect of 5-Hydroxymethylfurfural on Growth
Authors: Elif Gencturk, Ekin Yurdakul, Ahmet Y. Celik, Senol Mutlu, Kutlu O. Ulgen
Abstract:
Yeast cells are generally used as a model system for eukaryotes due to their complex genetic structure, rapid growth in optimum conditions, easy replication and well-defined genetics; studies of yeast have thus advanced knowledge of principal pathways in humans. During fermentation, carbohydrates (hexoses and pentoses) degrade into toxic by-products such as 5-hydroxymethylfurfural (5-HMF or HMF) and furfural. HMF reduces ethanol yield and productivity; it interferes with microbial growth and is considered a potent inhibitor of bioethanol production. In this study, the behavior of single yeast cells under HMF application was monitored using a continuous-flow, single-phase microfluidic platform. The microfluidic device was fabricated by hot embossing and thermo-compression from cyclo-olefin polymer (COP). COP is a biocompatible, transparent and rigid material, and its low auto-fluorescence makes it suitable for observing the fluorescence of cells. The response of yeast cells was recorded through the Red Fluorescent Protein (RFP)-tagged Nop56 gene product, an essential, evolutionarily conserved nucleolar protein and a member of the box C/D snoRNP complexes. With the application of HMF, yeast cell proliferation continued but at a slower rate, and after HMF treatment proliferation stopped. On addition of fresh nutrient medium, the cells recovered after 6 hours of HMF exposure. Thus, HMF suppresses the normal functioning of the cell cycle but does not kill the cells. Monitoring the Nop56 expression phases of individual cells shed light on the protein and ribosome synthesis cycles and their link to growth.
Further computational study revealed that the mechanisms underlying the inhibitory or inductive effects of HMF on growth are enriched in the functional categories of protein degradation, protein processing, DNA repair and multidrug resistance. The present microfluidic device can successfully be used to study the effects of inhibitory agents on growth by single-cell tracking, thus capturing cell-to-cell variation. With metabolic engineering techniques, engineered strains can be developed and the metabolic network of the microorganism manipulated so that overproduction of a target metabolite is achieved along with maximum growth/biomass yield.
Keywords: COP, HMF, ribosome biogenesis, thermoplastic microbioreactor, yeast
Procedia PDF Downloads 171
840 Unionisation, Participation and Democracy: Forms of Convergence and Divergence between Union Membership and Civil and Political Activism in European Countries
Authors: Silvia Lucciarini, Antonio Corasaniti
Abstract:
The issue of democracy in capitalist countries has once again become a focus of debate in recent years. A number of socio-economic and political tensions have triggered discussion of this topic from various perspectives and disciplines. Political developments, the rise of right-wing parties and populism, and the constant growth of inequalities in a context of welfare downsizing have led scholars to question whether European capitalist countries are really capable of creating and redistributing resources, and to look for elements that might make democratic capital in European countries more dense. The aim of this work is to shed light on the trajectories, intensity and convergence or divergence between political and associative participation, on the one hand, and union organization, on the other, as these constitute two of the main points of connection between the norms, values and actions that bind citizens to the state. Using the European Social Survey database, some studies have analysed degrees of unionization by investigating the relationship between systems of industrial relations and vulnerable groups (in terms of value-oriented practices or political participation). This paper instead investigates the relationship between union participation and civil/political participation, comparing union members and non-members and then distinguishing between employees and self-employed professionals to better understand participatory behaviors among different workers. The first component of the research employs a multilinear logistic model to examine a sample of 10 countries selected according to a grid that combines the industrial relations models identified by Visser (2006) and the welfare state systems identified by Esping-Andersen (1990). On the basis of this sample, we compare the choices made by workers and their propensity to join trade unions, together with their level of social and political participation, from 2002 to 2016.
In the second component, we verify whether workers within the same system of industrial relations and welfare show a similar propensity to engage in civil participation through political bodies and associations, or whether these tendencies instead take more specific and varied forms. The results will allow us to see: (1) whether political participation is higher among unionized workers than among the non-unionized; (2) what differences in unionisation and civil/political participation exist between self-employed, temporary and full-time employees; and (3) whether the trajectories within industrial relations and welfare models display greater inclusiveness and participation, thereby confirming or disproving the patterns that have been documented across European countries.
Keywords: union membership, participation, democracy, industrial relations, welfare systems
Procedia PDF Downloads 142
839 Adjustment of the Level of Vibrational Force on Targeted Teeth
Authors: Amin Akbari, Dongcai Wang, Huiru Li, Xiaoping Du, Jie Chen
Abstract:
The effect of vibrational force (VF) on accelerating orthodontic tooth movement depends on the level of stimulation delivered to the tooth in terms of peak load (PL), which requires contact between the tooth and the VF device. A personalized device ensures this contact, but the resulting PL distribution on the teeth is unknown. Furthermore, it is unclear whether the PL on particular teeth can be adjusted to prescribed values. The objective of this study was to investigate the efficacy of a personalized VF device in controlling the level of stimulation on two teeth, the mandibular canines and 2nd molars. A 3-D finite element (FE) model of the human dentition, including teeth, PDL and alveolar bone, was created from cone beam computed tomography images of an anonymous subject. The VF was applied to the teeth through a VF device consisting of a mouthpiece with the engraved tooth profile of the subject and a VF source that applied a 0.3 N force at a frequency of 30 Hz. The dentition and mouthpiece were meshed using 10-node tetrahedral elements, and interface elements were created at the interfaces between the teeth and the mouthpiece. The upper and lower teeth bite on the mouthpiece to receive the vibration. The depth of the engraved individual tooth profile could be adjusted by adding a layer of material as an interference, or removing a layer as a clearance, to change the PL on the tooth: the interference increases the PL while the clearance decreases it. Five mouthpiece design cases were simulated: a mouthpiece without interference/clearance; mouthpieces with bilateral interferences on both mandibular canines and 2nd molars with magnitudes of 0.1, 0.15 and 0.2 mm, respectively; and a mouthpiece with bilateral 0.3-mm clearances on the four teeth. The force distributions on the entire dentition were then compared across these adjustments.
The PL distribution on the teeth is uneven when there is no interference or clearance; among all teeth, the anterior segment receives the highest PL. Adding 0.1, 0.15 and 0.2-mm interferences to the canines and 2nd molars bilaterally increases the PL on the canines by 10, 62 and 73 percent and on the 2nd molars by 14, 55 and 87 percent, respectively. Adding clearances to the canines and 2nd molars, by removing the contacts between these teeth and the mouthpiece, results in zero PL on them. Moreover, introducing interference on the mandibular canines and 2nd molars redistributes the PL over the entire dentition, and the share of the PL on the anterior teeth is reduced. The use of the personalized mouthpiece ensures contact of the teeth with the mouthpiece so that all teeth can be stimulated; however, the PL distribution is uneven. Adding interference between a tooth and the mouthpiece increases the PL, while introducing clearance decreases it, and the PL is redistributed as a result. This study confirms that the level of VF stimulation on an individual tooth can be adjusted to a prescribed value.
Keywords: finite element method, orthodontic treatment, stress analysis, tooth movement, vibrational force
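The way an interference or clearance shifts the peak load can be illustrated with a linearized contact model; the stiffness and compression values below are assumed for illustration only and are not taken from the FE model above.

```python
def peak_load(k, nominal_compression_mm, adjustment_mm):
    """Peak load (N) for a linearized tooth-mouthpiece contact.
    k: assumed contact stiffness (N/mm).
    adjustment_mm: >0 interference (added material), <0 clearance.
    Contact force cannot be negative: a clearance larger than the
    nominal compression separates the surfaces (zero load)."""
    compression = nominal_compression_mm + adjustment_mm
    return k * compression if compression > 0 else 0.0

# All numbers hypothetical, chosen only to show the trend.
base = peak_load(k=2.0, nominal_compression_mm=0.05, adjustment_mm=0.0)
with_interference = peak_load(2.0, 0.05, 0.15)   # 0.15 mm interference
with_clearance = peak_load(2.0, 0.05, -0.30)     # 0.30 mm clearance
increase_pct = 100 * (with_interference - base) / base
```

As in the FE results, interference raises the load, and a clearance large enough to break contact drives it to zero.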
Procedia PDF Downloads 224
838 Metadiscourse in EFL, ESP and Subject-Teaching Online Courses in Higher Education
Authors: Maria Antonietta Marongiu
Abstract:
Propositional information in discourse is made coherent, intelligible and persuasive through metadiscourse. The linguistic and rhetorical choices that writers/speakers make to organize and negotiate content are intended to relate a text to its context, and they help the audience connect to and interpret a text according to the values of a specific discourse community. Based on these assumptions, this work aims to analyse the use of metadiscourse in the spoken performance of teachers in online EFL, ESP and subject-teacher courses taught in English to non-native learners in higher education. The global spread of Covid-19 forced universities to transition their in-class courses to online delivery, which inevitably places a heavier interactional responsibility on the instructor than in-class teaching does. Accordingly, online delivery needs greater structuring with respect to establishing the listener's resources for understanding and negotiating the text. Indeed, in online as in in-class courses, lessons are social acts which take place in contexts where interlocutors, as members of a community, affect the ways ideas are presented and understood. Following Hyland's Interactional Model of Metadiscourse (2005), this study investigates Teacher Talk in online academic courses during the Covid-19 lockdown in Italy. The selected corpus includes the transcripts of online EFL and ESP courses and of subject-teacher courses taught in English. The first objective of the investigation is to ascertain the presence of metadiscourse in the form of interactive devices (to guide the listener through the text) and interactional features (to involve the listener in the subject).
Previous research on metadiscourse in academic discourse, in college students' presentations in EAP (English for Academic Purposes) lessons, and in online teaching methodology courses and MOOCs (Massive Open Online Courses) has shown that instructors use a vast array of metadiscoursal features to express their intentions and standing with respect to the discourse. They also tend to use directions to orient their listeners and logical connectors referring to the structure of the text. Accordingly, a further purpose of the investigation is to find out whether metadiscourse is used as a rhetorical strategy by instructors to control, evaluate and negotiate the impact of the ongoing talk, and ultimately to signal their attitudes towards the content and the audience. The use of metadiscourse can thus contribute to the informative and persuasive impact of discourse and to the effectiveness of online communication, especially in learning contexts.
Keywords: discourse analysis, metadiscourse, online EFL and ESP teaching, rhetoric
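A first-pass count of interactive versus interactional markers in a transcript can be sketched with simple keyword lists; the marker sets below are abbreviated illustrations, not Hyland's full taxonomy, and the sample transcript is invented.

```python
import re
from collections import Counter

# Abbreviated, illustrative marker lists (Hyland's categories are far larger).
INTERACTIVE = {"firstly", "secondly", "finally", "however", "therefore", "in other words"}
INTERACTIONAL = {"note that", "you", "we", "perhaps", "unfortunately", "might"}

def count_markers(text, markers):
    """Count whole-word (or whole-phrase) marker occurrences in a transcript."""
    # Lowercase, replace punctuation with spaces, pad so every token has spaces around it.
    tokens = " " + re.sub(r"[^\w\s]", " ", text.lower()) + " "
    return Counter({m: tokens.count(" " + m + " ")
                    for m in markers if tokens.count(" " + m + " ")})

transcript = "Firstly, we define the term. Note that you might disagree. Finally, however, we move on."
interactive_counts = count_markers(transcript, INTERACTIVE)
interactional_counts = count_markers(transcript, INTERACTIONAL)
```

Normalizing such counts per thousand words would allow comparison across courses of different lengths.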
Procedia PDF Downloads 129
837 Applying Push Notifications with Behavioral Change Strategies in Fitness Applications: A Survey of User's Perception Based on Consumer Engagement
Authors: Yali Liu, Maria Avello Iturriagagoitia
Abstract:
Background: Fitness applications (apps) are among the most popular mobile health (mHealth) apps. They can help prevent or control health issues such as obesity, one of the most serious public health challenges in the developed world in recent decades. Compared with traditional interventions such as face-to-face treatment, fitness apps are a cheaper and more convenient way to influence physical activity and healthy behaviors. Nevertheless, fitness apps tend to have high abandonment rates and low levels of user engagement, so maintaining long-term usage is challenging. In fact, previous research identifies a variety of strategies (goal-setting, self-monitoring, coaching, etc.) for promoting fitness and health behavior change. These strategies can influence users' perseverance and self-monitoring of the program as well as favor adherence to routines that involve long-term behavioral change. However, commercial fitness apps rarely incorporate these strategies into their design, leading to a lack of engagement with the apps. Most of today's mobile services and brands engage their users proactively via push notifications: visual or auditory alerts that inform mobile users about a wide range of topics and constitute an effective and personal means of communication between the app and the user. One purpose of this article is to study the application of behavior change strategies through push notifications. Purpose: This study aims to better understand the influence that effective use of push notifications, combined with behavioral change strategies, has on users' engagement with a fitness app. The secondary objectives are 1) to discuss sociodemographic differences in the utilization of push notifications in fitness apps and 2) to determine the impact of each strategy on customer engagement.
Methods: The study uses a model combining Consumer Engagement Theory and UTAUT2 to conduct an online survey among current users of fitness apps. The questionnaire assessed attitudes to each behavioral change strategy and sociodemographic variables. Findings: Results show the positive effect of push notifications on the generation of consumer engagement, and the differing impact of each strategy on customer engagement across population groups. Conclusions: Fitness apps with behavior change strategies have a positive impact on increasing users' usage time and customer engagement. Theoretical experts can participate in designing fitness applications alongside technical designers.
Keywords: behavioral change, customer engagement, fitness app, push notification, UTAUT2
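The pairing of behavior-change strategies with notification content can be sketched as a simple lookup; the strategy names follow the abstract, while the message templates and default goal values are hypothetical and do not represent any real fitness app's API.

```python
# Strategy names from the abstract; templates and values are illustrative.
STRATEGY_TEMPLATES = {
    "goal-setting":    "You set a goal of {goal} min this week - {remaining} min to go!",
    "self-monitoring": "You logged {done} min so far this week. Keep the streak alive!",
    "coaching":        "Coach tip: a short 10-minute walk still counts toward {goal} min.",
}

def build_notification(strategy, goal=150, done=90):
    """Fill the template for one behavior-change strategy with the user's data."""
    template = STRATEGY_TEMPLATES[strategy]
    return template.format(goal=goal, done=done, remaining=max(goal - done, 0))

msg = build_notification("goal-setting")
```

A production system would additionally personalize timing and frequency, which the survey suggests matter differently across sociodemographic groups.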
Procedia PDF Downloads 136
836 Online Course of Study and Job Crafting for University Students: Development Work and Feedback
Authors: Hannele Kuusisto, Paivi Makila, Ursula Hyrkkanen
Abstract:
Introduction: There has been debate about the skills university students should have upon graduation. Current trends argue that, as well as specific job-related skills, graduates need problem-solving, interaction and networking skills as well as self-management skills. Skills required in working life are also considered in the Finnish national project VALTE (short for 'prepared for working life'), which involves 11 Finnish school organizations. As one result of this project, a five-credit independent online course in study and job engagement, as well as study and job crafting, was developed at Turku University of Applied Sciences. The aim of the oral or e-poster presentation is to present the online course developed in the project; the purpose of this abstract is to present the development work of the course and the feedback received from the pilots. Method: As the University of Turku is the leading partner of the VALTE project, the collaborative education platform ViLLE (https://ville.utu.fi, developed by the University of Turku) was chosen as the online platform for the course. Various exercise types with automatic assessment were used, for example quizzes, multiple-choice questions, classification exercises, gap-filling exercises, model-answer questions, self-assessment tasks, case tasks and collaboration in Padlet. In addition, free material and free platforms on the Internet were used (Youtube, Padlet, Todaysmeet and Prezi), as well as net-based questionnaires about study engagement and study crafting (made with Webropol). Three teachers with long teaching experience (including job crafting and online pedagogy) and three students working as trainees in the project developed the content of the course. The online course was piloted twice in 2017 as an elective course for students at Turku University of Applied Sciences, a higher education institution of about 10,000 students.
After both pilots, feedback from the students was gathered and the online course was developed further. Results: The result is a functional five-credit independent online course suitable for students of different educational institutions. The student feedback shows that students themselves think the course really enhanced their job and study crafting skills: after the course, 91% of the students considered their knowledge of job and study engagement, as well as job and study crafting, to be at a good or excellent level, and about two-thirds intended to make significant use of this knowledge in the future. Students appreciated the variety and game-like feel of the exercises, as well as the opportunity to study online at a time and place of their own choosing. On a five-point scale (1 being poor and 5 being excellent), the students graded the clarity of the ViLLE platform at 4.2, the functionality of the platform at 4.0 and the ease of operation at 3.9.
Keywords: job crafting, job engagement, online course, study crafting, study engagement
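The automatic assessment used for exercise types such as multiple-choice questions can be sketched as a minimal grader; this is an illustration only, not the ViLLE platform's actual API, and the question data are invented.

```python
def grade_multiple_choice(answers, key):
    """Return the fraction of correct answers for a multiple-choice exercise.
    answers: {question_id: chosen_option}, key: {question_id: correct_option}.
    A sketch of automatic assessment, not the ViLLE API."""
    correct = sum(1 for q, a in answers.items() if key.get(q) == a)
    return correct / len(key)

# Hypothetical quiz data for illustration.
key     = {"q1": "b", "q2": "a", "q3": "d"}
answers = {"q1": "b", "q2": "c", "q3": "d"}
score = grade_multiple_choice(answers, key)
```

Immediate scoring of this kind is what gives such exercises their game-like feel and enables self-paced study.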
Procedia PDF Downloads 153
835 AS-Geo: Arbitrary-Sized Image Geolocalization with Learnable Geometric Enhancement Resizer
Authors: Huayuan Lu, Chunfang Yang, Ma Zhu, Baojun Qi, Yaqiong Qiao, Jiangqian Xu
Abstract:
Image geolocalization has great application prospects in fields such as autonomous driving and virtual/augmented reality. In practical scenarios, the size of the image to be localized is not fixed, and it is impractical to train different networks for all possible sizes. When the image size does not match the input size of the descriptor extraction model, existing image geolocalization methods usually scale or crop the image in some common way. This loses information important to the geolocalization task and degrades performance: for example, excessive down-sampling can blur building contours, and inappropriate cropping can discard key semantic elements, leading to incorrect geolocation results. To address this problem, this paper designs a learnable image resizer and proposes an arbitrary-sized image geolocalization method. (1) The learnable image resizer employs the self-attention mechanism to enhance the geometric features of the resized image. First, it applies bilinear interpolation to the input image and its feature maps to obtain the initial resized image and resized feature maps. Then, SKNet (selective kernel network) is used to approximate the best receptive field, preserving the geometric shapes of the original image, and SENet (squeeze-and-excitation network) is used to automatically select the feature maps with strong contour information, enhancing the geometric features. Finally, the enhanced geometric features are fused with the initial resized image to obtain the final resized image. (2) The proposed geolocalization method embeds the above resizer as a front layer of the descriptor extraction network. It not only makes the network compatible with arbitrary-sized input images but also enhances the geometric features that are crucial to the image geolocalization task.
Moreover, a triplet attention mechanism is added after the first convolutional layer of the backbone network to optimize the utilization of the geometric elements that layer extracts. Finally, the local features extracted by the backbone network are aggregated to form image descriptors for geolocalization. The proposed method was evaluated on several mainstream datasets, such as Pittsburgh30K, Tokyo 24/7 and Places365. The results show that the proposed method has excellent size compatibility and compares favorably with recent mainstream geolocalization methods.
Keywords: image geolocalization, self-attention mechanism, image resizer, geometric feature
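The resizer's first step, bilinear interpolation, can be sketched in pure Python on a 2D grayscale array; this is a minimal sketch (align-corners-style coordinate mapping) that omits the feature maps and the SKNet/SENet attention stages described above.

```python
def bilinear_resize(img, out_h, out_w):
    """Bilinear interpolation of a 2D list `img` to shape (out_h, out_w)."""
    in_h, in_w = len(img), len(img[0])
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Map each output pixel back to fractional input coordinates.
            y = i * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
            x = j * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, in_h - 1), min(x0 + 1, in_w - 1)
            dy, dx = y - y0, x - x0
            # Weighted average of the four surrounding input pixels.
            top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
            bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
            out[i][j] = top * (1 - dy) + bot * dy
    return out

small = [[0.0, 1.0],
         [1.0, 2.0]]
big = bilinear_resize(small, 3, 3)   # upsample 2x2 -> 3x3
```

The smoothing inherent in this averaging is exactly why aggressive down-sampling blurs contours, motivating the geometric enhancement stages.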
Procedia PDF Downloads 216
834 Modeling of IN 738 LC Alloy Mechanical Properties Based on Microstructural Evolution Simulations for Different Heat Treatment Conditions
Authors: M. Tarik Boyraz, M. Bilge Imer
Abstract:
Conventionally cast nickel-based superalloys, such as the commercial alloy IN 738 LC, are widely used in the manufacture of industrial gas turbine blades. With a carefully designed microstructure and suitable alloying elements, the blades show improved mechanical properties at high operating temperatures and in corrosive environments. The aim of this work is to model and estimate the mechanical properties of IN 738 LC alloy solely from simulations for projected heat treatment or service conditions. The microstructure of IN 738 LC (the size, fraction and frequency of the gamma prime (γ′) and carbide phases in the gamma (γ) matrix, and the grain size) needs to be optimized through heat treatment to improve the high-temperature mechanical properties. This process can be performed at different soaking temperatures, times and cooling rates. In this work, microstructural evolution studies were performed experimentally at various heat treatment conditions, and these findings were used as input for further simulation studies: the operation time, soaking temperature and cooling rate from the experimental heat treatment procedures served as microstructural simulation input. The simulation results were compared with the size, fraction and frequency of the γ′ and carbide phases and the grain size obtained by SEM (EDS module and mapping), EPMA (WDS module) and optical microscopy before and after heat treatment. After iterative comparison of the experimental findings and simulations, an offset was determined to fit the measured and theoretical results, making it possible to estimate the final microstructure without carrying out the heat treatment experiment. The output of this heat-treatment-based microstructure simulation was then used as input to estimate yield stress and creep properties. Yield stress was calculated mainly as a function of the precipitation, solid solution and grain boundary strengthening contributions in the microstructure.
Creep rate was calculated as a function of stress, temperature and microstructural factors such as dislocation density, precipitate size and the inter-particle spacing of the precipitates. The estimated yield stress values were compared with the corresponding experimental hardness and tensile test values. The ability to determine the heat treatment conditions that best achieve the desired microstructural and mechanical properties was thus developed for IN 738 LC based entirely on simulations.
Keywords: heat treatment, IN 738 LC, simulations, super-alloys
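The additive yield-stress estimate described above (a base strength plus solid solution, grain boundary and precipitation contributions) can be sketched as follows, with the grain boundary term taken in Hall-Petch form. All coefficients are illustrative assumptions, not measured IN 738 LC data.

```python
import math

def yield_stress_mpa(sigma0, k_hp, grain_um, delta_sigma_ss, delta_sigma_ppt):
    """Additive strengthening estimate (MPa).
    sigma0: lattice friction stress; k_hp: Hall-Petch coefficient (MPa*um^0.5);
    grain_um: grain size in micrometres; remaining terms are the solid-solution
    and precipitation contributions. All values are illustrative assumptions."""
    hall_petch = k_hp / math.sqrt(grain_um)  # grain boundary strengthening
    return sigma0 + hall_petch + delta_sigma_ss + delta_sigma_ppt

# Hypothetical inputs for illustration only.
sigma_y = yield_stress_mpa(sigma0=100.0, k_hp=750.0, grain_um=100.0,
                           delta_sigma_ss=120.0, delta_sigma_ppt=400.0)
```

The Hall-Petch term makes explicit why the grain size predicted by the microstructure simulation feeds directly into the strength estimate: refining the grain raises the calculated yield stress.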
Procedia PDF Downloads 248
833 Numerical Investigation of Solid Subcooling on a Low Melting Point Metal in Latent Thermal Energy Storage Systems Based on Flat Slab Configuration
Authors: Cleyton S. Stampa
Abstract:
This paper addresses, through a numerical approach, the prospects of using low melting point metals (LMPMs) as phase change materials (PCMs) in latent thermal energy storage (LTES) units. LMPMs are a new class of PCMs that have become one of the most promising alternatives for LTES, because these materials present high thermal conductivity and elevated heat of fusion per unit volume. The chosen type of LTES consists of several horizontal parallel slabs filled with PCM. The heat transfer fluid (HTF) circulates in a laminar regime by forced convection through the channel formed between each two consecutive slabs. The study deals with the LTES charging process (heat storing) using pure gallium as the PCM, and it considers heat conduction in the solid phase during melting driven by natural convection in the melt. The transient heat transfer problem is analyzed in one arbitrary slab under the influence of the HTF. The mathematical model used to simulate the isothermal phase change is based on a volume-averaged enthalpy method, which is successfully verified by comparing its predictions with experimental data from works available in the pertinent literature. Regarding the convective heat transfer problem in the HTF, the flow is assumed to be thermally developing, whereas the velocity profile is already fully developed. The study aims to determine the effect of solid subcooling on the melting rate through comparisons with the melting process of a solid that starts to melt at its fusion temperature. To better understand this effect in a metallic compound such as pure gallium, the study also evaluates, under the same conditions established for the gallium, the melting process of commercial paraffin wax (an organic compound) and of calcium chloride hexahydrate (CaCl₂·6H₂O, an inorganic compound).
The present work adopts the configurations that several researchers have established in parametric studies of this type of LTES as yielding high thermal efficiency. Concerning the geometric aspects, these include the gap of the channel formed by two consecutive slabs and the thickness and length of the slab; concerning the HTF, the type of fluid, the mass flow rate and the inlet temperature.
Keywords: flat slab, heat storing, pure metal, solid subcooling
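The volume-averaged enthalpy method used for the phase change can be sketched in 1D explicit form: each step updates nodal enthalpy from conductive fluxes and then recovers temperature and liquid fraction from the enthalpy. The property values below are gallium-like but illustrative, and the scheme is a sketch of the general method, not the paper's full model (which also resolves natural convection in the melt).

```python
def step_enthalpy_1d(H, dx, dt, k, rho, cp, Tm, L, T_left):
    """One explicit step of a 1D enthalpy-method melting problem.
    H: nodal volumetric enthalpy (J/m^3), zero defined at (Tm, fully solid).
    Returns (H_new, T, fl): updated enthalpy, temperature, liquid fraction.
    Illustrative sketch; stability requires dt < dx^2 / (2 * k / (rho * cp))."""
    def temp(h):
        if h < 0:                                       # subcooled solid
            return Tm + h / (rho * cp)
        if h > rho * L:                                 # fully liquid
            return Tm + (h - rho * L) / (rho * cp)
        return Tm                                       # mushy: melting at Tm
    T = [temp(h) for h in H]
    T[0] = T_left                                       # Dirichlet hot-wall BC
    H_new = H[:]
    for i in range(1, len(H) - 1):
        H_new[i] += dt * k * (T[i - 1] - 2 * T[i] + T[i + 1]) / dx ** 2
    fl = [min(max(h / (rho * L), 0.0), 1.0) for h in H_new]
    return H_new, T, fl

# Gallium-like properties, for illustration only.
rho, cp, k, L, Tm = 6095.0, 381.0, 33.5, 80160.0, 29.8
H0 = [-rho * cp * 5.0] * 5          # whole slab initially 5 K subcooled solid
H1, T, fl = step_enthalpy_1d(H0, dx=1e-3, dt=0.01, k=k, rho=rho, cp=cp,
                             Tm=Tm, L=L, T_left=40.0)
```

With subcooling, part of the absorbed heat first raises the solid to the fusion temperature (enthalpy rises while the liquid fraction stays at zero), which is precisely why solid subcooling delays the melting rate studied in the paper.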
Procedia PDF Downloads 141
832 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS
Authors: Eunsu Jang, Kang Park
Abstract:
In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or survivability) of the AGCV against enemy attack is a very important step. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which would damage internal components or crew. The penetration equations are derived from penetration experiments, which require a long time and great effort; moreover, they usually hold only for the specific target material and bullet type used in the experiments. Penetration simulation using ANSYS can therefore be another option for calculating penetration depth. However, the targets must be modeled and the input parameters selected carefully to obtain an accurate penetration depth. This paper performs a sensitivity analysis of ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives must be balanced when adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize accuracy, a sensitivity analysis of the input parameters was performed and the RMS error against experimental data was calculated. The input parameters, including mesh size, boundary condition, material properties and target diameter, were tested and selected to minimize the error between the simulation results and the experimental data from papers on the penetration equations. To minimize calculation time, the parameter values obtained from the accuracy analysis were adjusted for optimized overall performance. The analysis found the following: 1) as the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase.
2) As the diameter of the target decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress, one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives a greater penetration depth than the one with both the side and rear surfaces fixed. Using these findings, the input parameters can be tuned to minimize the error between simulation and experiment, and penetration analysis can be done on a computer with ANSYS and carefully tuned input parameters, without actual experiments. Penetration experiment data are usually hard to obtain for security reasons, and published papers provide them only for limited target materials. The next step of this research is to generalize this approach to anticipate penetration depth by interpolating the known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the modelling and simulation stage early in the AGCV design process.
Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis
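The RMS-error criterion used to select input parameters can be sketched directly; the depth values below are hypothetical placeholders, not the paper's experimental data.

```python
import math

def rms_error(simulated, experimental):
    """Root-mean-square error between simulated and experimental depths (mm)."""
    return math.sqrt(sum((s - e) ** 2 for s, e in zip(simulated, experimental))
                     / len(experimental))

# Hypothetical depths (mm) for three shots, simulated at two mesh sizes.
experimental = [12.0, 15.5, 18.2]
sims = {0.9: [10.8, 14.1, 16.9],   # coarse mesh: faster but less accurate
        0.5: [11.9, 15.2, 18.6]}   # fine mesh: slower but closer to experiment
best_mesh = min(sims, key=lambda m: rms_error(sims[m], experimental))
```

In practice the choice also weighs calculation time, so the selected mesh is the coarsest one whose RMS error remains acceptable rather than simply the minimum-error mesh.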
Procedia PDF Downloads 402
831 ‘Doctor Knows Best’: Reconsidering Paternalism in the NICU
Authors: Rebecca Greenberg, Nipa Chauhan, Rashad Rehman
Abstract:
Paternalism, in its traditional form, seems largely incompatible with Western medicine. In contrast, Family-Centred Care, a partial response to historically authoritative paternalism, carries its own challenges, particularly when operationalized as family-directed care. Specifically, in neonatology, decision-making is left entirely to substitute decision-makers (most commonly parents). Most models of shared decision-making employ both the parents' and the medical team's perspectives but do not recognize the inherent asymmetry of information and experience, asking parents to act like physicians in evaluating technical data while encouraging physicians to refrain from strong medical opinions and proposals. They also do not fully appreciate the difficulties in adjudicating which perspective to prioritize and, moreover, in mitigating disagreement. Introducing a mild form of paternalism can harness the unique skill sets both parents and clinicians bring to shared decision-making and ultimately work towards decision-making in the best interest of the child. The notion expressed here is that within the model of shared decision-making, mild paternalism is prioritized inasmuch as optimal care is prioritized. This mild form of paternalism is known as Beneficent Paternalism and justifies our encouragement for physicians to root down in their own medical expertise and propose treatment plans informed by that expertise, standards of care, and the parents' values. This does not mean that we forget that paternalism was historically justified on 'beneficent' grounds; however, our recommendation is that a re-integration of mild paternalism is appropriate within our current Western healthcare climate. Through illustrative examples from the NICU, this paper explores the appropriateness and merits of Beneficent Paternalism and, ultimately, its use in promoting family-centred care and the patient's best interests while reducing moral distress.
A distinctive feature of the NICU is that communication regarding a patient's treatment is conducted exclusively with substitute decision-makers and not the patient, i.e., the neonate themselves. This leaves the burden of responsibility entirely on substitute decision-makers and the clinical team; the patient in the NICU does not have any prior wishes, values, or beliefs that can guide decision-making on their behalf. Therefore, the wishes, values, and beliefs of the parents become the map upon which clinical proposals are made, giving extra weight to the family's decision-making responsibility. This helps explain why family-directed care is common in the NICU, where shared decision-making is mandatory. However, the zone of parental discretion is not as all-encompassing as it is currently considered; there are appropriate times when the clinical team should strongly root down in medical expertise and perhaps take the lead in guiding family decision-making: this is just what it means to adopt Beneficent Paternalism. Keywords: care, ethics, expertise, NICU, paternalism
Procedia PDF Downloads 146
830 New Recipes of Communication in the New Linguistic World Order: End of Road for Aged Pragmatics
Authors: Shailendra Kumar Singh
Abstract:
With the rise of the New Linguistic World Order in the 21st century, Aged Pragmatics is palpitating on the edge of theoretical irrelevance. What appears to be a new sociolinguistic reality is that the enlightening combination of the alternative west, inclusive globalization and the techno-revolution is adding novel recipes of communicative action, style and gain among a new linguistic breed which is neither dominated nor powered by western supremacy. The paper has the following main, interrelated aims: it introduces the concept of an alternative pragmatics that can offer what exactly is needed for our emerging societal realities; it asserts how the basic pillar of linguistic success in the new linguistic world order rests upon the linguistic temptation and calibration of all; and it reviews the inevitability of emerging economies in shaping communication trends at a time when the western world is struggling to maintain the same control over others that it exercised in the past. In particular, the paper seeks answers to the following questions: (a) Do we need an alternative pragmatics, one with an alternativist leaning, in an era of inclusive globalization and the alternative west? (b) What are the pulses of shift which are encapsulating the emergence of new communicative behavior among the new linguistic breed by breaking yesterday's linguistic rigidity? (c) Or, what are those shifts which are making the linguistic shift more perceptible? (d) Is the New Linguistic World Order succeeding in reversing the linguistic priorities of 'who speaks, what language, where, how, why, to whom and in which condition', with no parallel in history? (e) What is explicit about the contemporary world of the 21st century that makes the linguistic world an exciting and widely celebrated phenomenon forced into our vision? (f) What factors will hold the key to the future of yesterday's 'influential languages' and today's 'emerging languages' as the world is in paradigm transition?
(g) Is the collapse of Aged Pragmatics good for the 21st century, for understanding the difference between the pragmatism of the old linguistic world and the New Linguistic World Order? The New Linguistic World Order today, unlike in the past, is about the branding of a new world with a liberal world view, a particular form of ideal to be imagined in the 21st century. It is hoped that a new set of ideals with a popular vocabulary will become the implicit pragmatic model, one of benign majoritarianism in all aspects of sociolinguistic reality. It appears to be a reality that we live in an extraordinary linguistic world with no parallel in the past. In particular, the paper also highlights the paradigm shifts: demographic, social-psychological, technological and power-related. These shifts are driving a linguistic shift that is unique in itself. The paper highlights this linguistic shift in detail, in which the alternative west plays a major role without challenging the west, because it is an era of inclusive globalization in which almost everyone takes equal responsibility. Keywords: inclusive globalization, new linguistic world order, linguistic shift, world order
Procedia PDF Downloads 343
829 Limiting Freedom of Expression to Fight Radicalization: The 'Silencing' of Terrorists Does Not Always Allow Rights to 'Speak Loudly'
Authors: Arianna Vedaschi
Abstract:
This paper addresses the relationship between freedom of expression, national security and radicalization. Is it still possible to talk about a balance between the first two elements? Or, due to the intrusion of the third, is it more appropriate to consider freedom of expression as “permanently disfigured” by securitarian concerns? In this study, both the legislative and the judicial level are taken into account, and the comparative method is employed in order to provide the reader with a complete framework of relevant issues and a workable set of solutions. The analysis moves from the finding that the tension between free speech and national security has become a major issue in democratic countries, whose very essence is continuously endangered by the ever-changing and multi-faceted threat of international terrorism. In particular, a change in terrorist groups’ recruiting pattern, attracting more and more people by way of a cutting-edge communicative strategy, often employing sophisticated technology as a radicalization tool, has called on law-makers to modify their approach to dangerous speech. While traditional constitutional and criminal law used to punish speech only if it explicitly and directly incited the commission of a criminal action (the “cause-effect” model), so-called glorification offences, punishing mere ideological support for terrorism, often on the web, are becoming commonplace in the comparative scenario. Although this is a direct, and even somewhat understandable, consequence of the impending terrorist menace, this research shows many problematic issues connected to such a preventive approach. First, from a predominantly theoretical point of view, this trend negatively impacts the already blurred line between permissible and prohibited speech. Second, from a pragmatic point of view, such legislative tools are not always suitable to keep up with the ongoing developments of both terrorist groups and their use of technology.
In other words, there is a risk that such measures become outdated even before their application. Indeed, it seems hard to still talk about a proper balance: what was previously clearly perceived as a balancing of values (freedom of speech v. public security) has turned, in many cases, into a hierarchy with security at its apex. In light of these findings, this paper concludes that such a complex issue would perhaps be better dealt with through a combination of policies: not only criminalizing ‘terrorist speech,’ which should be relegated to a last-resort tool, but also acting at an even earlier stage, i.e., trying to prevent dangerous speech itself. This might be done by promoting social cohesion and the inclusion of minorities, so as to reduce the probability of people considering terrorist groups a “viable option” for dealing with the lack of identification within their social contexts. Keywords: radicalization, free speech, international terrorism, national security
Procedia PDF Downloads 199
828 Magnitude of Transactional Sex and Its Determinant Factors Among Women in Sub-Saharan Africa: Systematic Review and Meta-Analysis
Authors: Gedefaye Nibret Mihretie
Abstract:
Background: Transactional sex is casual sex between two people in which material incentives are received in exchange for sexual favors. Transactional sex is associated with negative consequences, which increase the risk of sexually transmitted diseases, including HIV/AIDS, unintended pregnancy, unsafe abortion, and psychological trauma. Many primary studies in Sub-Saharan Africa have assessed the prevalence and associated factors of transactional sex among women. These studies had great discrepancies and inconsistent results. Hence, this systematic review and meta-analysis aimed to synthesize the pooled prevalence of the practice of transactional sex among women and its associated factors in Sub-Saharan Africa. Method: Cross-sectional studies were systematically searched from March 6, 2022, to April 24, 2022, using PubMed, Google Scholar, HINARI, the Cochrane Library, and grey literature. The pooled prevalence of transactional sex and associated factors was estimated using the DerSimonian-Laird random-effects model. Stata (version 16.0) was used to analyze the data. The I-squared statistic was used to assess the studies' heterogeneity. A funnel plot and Egger's test were used to check for publication bias. A subgroup analysis was performed to minimize the underlying heterogeneity by study year, data source, sample size and geographical location. Results: Four thousand one hundred thirty articles were extracted from various databases. Thirty-two studies were included in the final systematic review, comprising 108,075 participants. The pooled prevalence of transactional sex among women in Sub-Saharan Africa was 12.55%, with a confidence interval of 9.59% to 15.52%.
Educational status (OR = 0.48, 95% CI: 0.27, 0.69) was a protective factor against transactional sex, whereas alcohol use (OR = 1.85, 95% CI: 1.19, 2.52), early sexual debut (OR = 2.57, 95% CI: 1.17, 3.98), substance abuse (OR = 4.21, 95% CI: 2.05, 6.37), a history of sexual abuse (OR = 4.08, 95% CI: 1.38, 6.78), physical violence (OR = 6.59, 95% CI: 1.17, 12.02), and sexual violence (OR = 3.56, 95% CI: 1.15, 8.27) were risk factors. Conclusion: The prevalence of transactional sex among women in Sub-Saharan Africa was high. Educational status, alcohol use, substance abuse, early sexual debut, a history of sexual abuse, physical violence, and sexual violence were predictors of transactional sex. Governmental and other stakeholders should design interventions to reduce alcohol use, provide health information about the negative consequences of early sexual debut and substance abuse, and reduce sexual violence while ensuring gender equality through mass media; such measures should be included in state policy. Keywords: women's health, child health, reproductive health, midwifery
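The DerSimonian-Laird random-effects pooling named in the abstract can be sketched in a few lines. The study-level prevalence estimates and variances below are invented for illustration; they are not the thirty-two studies of the review:

```python
import math

# Minimal DerSimonian-Laird random-effects pooling sketch.
y = [0.10, 0.14, 0.09, 0.18]          # study-level prevalence estimates (invented)
v = [0.0004, 0.0009, 0.0003, 0.0016]  # within-study variances (invented)

k = len(y)
w = [1.0 / vi for vi in v]                                  # fixed-effect weights
y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
Q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))   # heterogeneity statistic
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)                          # between-study variance
w_star = [1.0 / (vi + tau2) for vi in v]                    # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
se = math.sqrt(1.0 / sum(w_star))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)               # 95% confidence interval
i_squared = max(0.0, (Q - (k - 1)) / Q) * 100               # I-squared, in percent
```

In practice this is what Stata's meta-analysis routines compute under the random-effects option; the sketch only shows the arithmetic behind the pooled prevalence and the I-squared reported above.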
Procedia PDF Downloads 95
827 Hydration of Three-Piece K Peptide Fragments Studied by Means of Fourier Transform Infrared Spectroscopy
Authors: Marcin Stasiulewicz, Sebastian Filipkowski, Aneta Panuszko
Abstract:
Background: The hallmark of neurodegenerative diseases, including Alzheimer's and Parkinson's diseases, is the aggregation of abnormal forms of peptides and proteins. Water is essential to the functioning of biomolecules, and it is one of the key factors influencing protein folding and misfolding. However, hydration studies of proteins are complicated by the complexity of protein systems. The use of model compounds can facilitate the interpretation of results involving larger systems. Objectives: The goal of the research was to characterize the properties of the hydration water surrounding the two three-residue K peptide fragments INS (Isoleucine - Asparagine - Serine) and NSR (Asparagine - Serine - Arginine). Methods: Fourier-transform infrared spectra of aqueous solutions of the tripeptides were recorded on a Nicolet 8700 spectrometer (Thermo Electron Co.). Measurements were carried out at 25°C for varying molalities of the solute. To remove oscillation couplings from the water spectra and, consequently, obtain narrow O-D bands of semi-heavy water (HDO), the method of isotopic dilution of HDO in H₂O was used. The difference spectra method allowed us to isolate the tripeptide-affected HDO spectrum. Results: The structural and energetic properties of water affected by the tripeptides were compared to the properties of pure water. The shift of the band gravity centers (related to the mean energy of water hydrogen bonds) towards lower values with respect to those of pure water suggests that the energy of hydrogen bonds between water molecules surrounding the tripeptides is higher than in pure water. A comparison of the mean oxygen-oxygen distances in water affected by the tripeptides and in pure water indicates that water-water hydrogen bonds are shorter in the presence of these tripeptides.
The analysis of differences in oxygen-oxygen distance distributions between the tripeptide-affected water and pure water indicates that around the tripeptides, the contribution of water molecules with the mean energy of hydrogen bonds decreases, while the contribution of strong hydrogen bonds increases. Conclusions: It was found that hydrogen bonds between water molecules in the hydration sphere of the tripeptides are shorter and stronger than in pure water. This means that in the presence of the tested tripeptides, the structure of water is strengthened compared to pure water. Moreover, it has been shown that in the vicinity of Asparagine - Serine - Arginine, water forms even stronger and shorter hydrogen bonds. Acknowledgments: This work was funded by the National Science Centre, Poland (grant 2017/26/D/NZ1/00497). Keywords: amyloids, K-peptide, hydration, FTIR spectroscopy
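The band gravity center used to compare affected and pure-water spectra is simply an intensity-weighted mean wavenumber. A sketch with synthetic Gaussian bands (the band positions and widths are illustrative assumptions, not the measured spectra):

```python
import math

def gravity_center(wavenumbers, absorbance):
    """Intensity-weighted mean wavenumber of an absorption band (cm^-1)."""
    total = sum(absorbance)
    return sum(nu * a for nu, a in zip(wavenumbers, absorbance)) / total

# Synthetic O-D stretching band of HDO centred near 2500 cm^-1 (invented shapes).
nus = [2300 + i for i in range(0, 401, 4)]
pure_water = [math.exp(-((nu - 2509) / 80) ** 2) for nu in nus]
affected = [math.exp(-((nu - 2495) / 80) ** 2) for nu in nus]  # red-shifted band

shift = gravity_center(nus, affected) - gravity_center(nus, pure_water)
# A negative shift (toward lower wavenumbers) is the signature described above:
# stronger, shorter water-water hydrogen bonds in the hydration sphere.
```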
Procedia PDF Downloads 178
826 Method of Complex Estimation of Text Perusal and Indicators of Reading Quality in Different Types of Commercials
Authors: Victor N. Anisimov, Lyubov A. Boyko, Yazgul R. Almukhametova, Natalia V. Galkina, Alexander V. Latanov
Abstract:
Modern commercials presented on billboards, on TV and on the Internet contain a lot of information about the product or service in text form. However, this information cannot always be perceived and understood by consumers. Typical sociological focus group studies often cannot reveal important features of the interpretation and understanding of information that has been read in text messages. In addition, there is no reliable method to determine the degree of understanding of the information contained in a text. The mere fact of viewing a text does not mean that the consumer has perceived and understood its meaning. At the same time, tools based on marketing analysis allow only an indirect estimate of the process of reading and understanding a text. Therefore, the aim of this work is to develop a valid method of recording objective indicators in real time for assessing the fact of reading and the degree of text comprehension. Psychophysiological parameters recorded during text reading can form the basis of this objective method. We studied the relationship between multimodal psychophysiological parameters and the process of text comprehension during reading using correlation analysis. We used eye-tracking technology to record eye movement parameters to estimate visual attention, electroencephalography (EEG) to assess cognitive load, and polygraphic indicators (skin-galvanic reaction, SGR) that reflect the emotional state of the respondent during text reading. We revealed reliable interrelations between perceiving the information and the dynamics of psychophysiological parameters during reading the text in commercials. Eye movement parameters reflected the difficulties arising in respondents while perceiving ambiguous parts of the text. EEG dynamics in the alpha band were related to the cumulative effect of cognitive load. SGR dynamics were related to the emotional state of the respondent and to the meaning of the text and the type of commercial.
EEG and polygraph parameters together also reflected the mental difficulties of respondents in understanding the text and showed significant differences between cases of low and high text comprehension. We also revealed differences in psychophysiological parameters for different types of commercials (static vs. video; financial vs. cinema vs. pharmaceutics vs. mobile communication, etc.). Conclusions: Our methodology allows a multimodal evaluation of text perusal and of the quality of text reading in commercials. In general, our results indicate the possibility of designing an integral model to estimate the comprehension of a commercial text on a percentage scale based on all the noted markers. Keywords: reading, commercials, eye movements, EEG, polygraphic indicators
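The correlation analysis mentioned above can be illustrated with a plain Pearson coefficient between one psychophysiological parameter and a comprehension score. All numbers below are invented for illustration, not data from the study:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-respondent values: longer fixations on ambiguous text,
# lower scores on a comprehension test.
fixation_ms = [210, 250, 305, 330, 190, 280]
comprehension = [0.9, 0.8, 0.5, 0.4, 0.95, 0.6]

r = pearson(fixation_ms, comprehension)
# A strongly negative r would support using fixation duration as a marker
# of reading difficulty.
```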
Procedia PDF Downloads 166
825 Informed Urban Design: Minimizing Urban Heat Island Intensity via Stochastic Optimization
Authors: Luis Guilherme Resende Santos, Ido Nevat, Leslie Norford
Abstract:
The Urban Heat Island (UHI) is characterized by increased air temperatures in urban areas compared to the undeveloped rural surrounding environment. With urbanization and densification, the intensity of the UHI increases, bringing negative impacts on livability, health and the economy. In order to reduce those effects, design factors must be taken into consideration when planning future developments. Given design constraints such as population size and the availability of area for development, non-trivial decisions regarding the buildings' dimensions and their spatial distribution are required. We develop a framework for the optimization of urban design that jointly minimizes UHI intensity and buildings' energy consumption. First, the design constraints are defined according to spatial and population limits in order to establish realistic boundaries that would be applicable to real-life decisions. Second, the tools Urban Weather Generator (UWG) and EnergyPlus are used to generate outputs of UHI intensity and total buildings' energy consumption, respectively. Those outputs change based on a set of variable inputs related to urban morphology, such as building height, urban canyon width and population density. Lastly, an optimization problem is cast where the utility function quantifies the performance of each design candidate (e.g., minimizing a linear combination of UHI intensity and energy consumption), and a set of constraints to be met is defined. Solving this optimization problem is difficult, since there is no simple analytic form representing the UWG and EnergyPlus models. We therefore cannot use any direct optimization techniques but instead develop an indirect “black box” optimization algorithm. To this end, we develop a solution based on a stochastic optimization method known as the Cross-Entropy method (CEM).
The CEM translates the deterministic optimization problem into an associated stochastic optimization problem which is simple to solve analytically. We illustrate our model on a typical residential area in Singapore. Due to fast growth in population and built area, and to land availability generated by land reclamation, urban planning decisions are of the utmost importance for the country. Furthermore, the hot and humid climate of the country raises concern about the impact of the UHI. The problem presented is highly relevant to early urban design stages, and the objective of such a framework is to guide decision makers and assist them in including and evaluating urban microclimate and energy aspects in the process of urban planning. Keywords: building energy consumption, stochastic optimization, urban design, urban heat island, urban weather generator
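A minimal sketch of the Cross-Entropy method for such a black-box problem, with a cheap surrogate standing in for the UWG/EnergyPlus pipeline (the surrogate objective, its constants, and the design variables are all illustrative assumptions, not the paper's models):

```python
import random

random.seed(0)

def objective(height, width):
    # Hypothetical surrogate for the UWG/EnergyPlus outputs: penalize a high
    # height-to-width ratio (stronger canyon effect, higher UHI) and deviation
    # from a nominal canyon width (energy proxy). Purely illustrative.
    return (height / width - 1.5) ** 2 + 0.01 * (width - 20.0) ** 2

mu = [30.0, 10.0]     # initial mean design: [building height (m), canyon width (m)]
sigma = [10.0, 5.0]   # initial standard deviations
n_samples, n_elite = 50, 10

for _ in range(60):
    # Sample candidate designs from the current Gaussian.
    pop = [[random.gauss(m, s) for m, s in zip(mu, sigma)] for _ in range(n_samples)]
    pop.sort(key=lambda p: objective(max(p[0], 1.0), max(p[1], 1.0)))
    elite = pop[:n_elite]
    # Cross-Entropy update: refit mean and std to the elite samples,
    # with a small noise floor to avoid premature collapse.
    mu = [sum(p[i] for p in elite) / n_elite for i in range(2)]
    sigma = [max((sum((p[i] - mu[i]) ** 2 for p in elite) / n_elite) ** 0.5, 0.3)
             for i in range(2)]

best_height, best_width = mu
```

Each iteration only evaluates the objective at sampled points, which is why the approach survives the lack of an analytic form for the simulators: the expensive simulation call simply replaces `objective`.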
Procedia PDF Downloads 134
824 Computerized Scoring System: A Stethoscope to Understand Consumer's Emotion through His or Her Feedback
Authors: Chen Yang, Jun Hu, Ping Li, Lili Xue
Abstract:
Most companies pay careful attention to consumer feedback collection, so the ‘feedback’ button is a common feature of all kinds of mobile apps. Yet it is much more challenging to analyze these feedback texts and to catch the true feelings of a consumer regarding either a problem or a compliment in the feedback he or she hands out. This is especially true for Chinese content: in one context a piece of Chinese feedback may express a positive opinion, but in another context the same feedback may be a negative one. For example, in Chinese, the feedback 'operating with loudness' works with both a refrigerator and a stereo system. For a refrigerator, this feedback is negative; however, the same feedback is positive for a stereo system. By introducing M. Bradley and P. Lang's Affective Norms for English Text (ANET) theory and W. Bucci's Referential Activity (RA) theory, we, usability researchers at Pingan, are able to decipher the feedback and to find the hidden feelings behind the content. We take the two dimensions ‘valence’ and ‘dominance’ out of the three of ANET and the two dimensions ‘concreteness’ and ‘specificity’ out of the four of RA to organize our own rating system with a scale of 1 to 5 points. This rating system enables us to judge the feelings/emotions behind each piece of feedback, and it works well with both a single word/phrase and a whole paragraph. The rating reflects the strength of the feeling/emotion of the consumer when he/she is typing the feedback. In our daily work, we first require a consumer to answer the net promoter score (NPS) question before writing the feedback, so we can determine whether the feedback is positive or negative. Secondly, we code the feedback content according to the company's problematic list, which contains 200 problematic items. In this way, we are able to collect data on how many feedback items left by consumers belong to one typical problem.
Thirdly, we rate each piece of feedback based on the rating system mentioned above to illustrate the strength of the feeling/emotion when the consumer writes the feedback. In this way, we actually obtain two kinds of data: 1) the portion, which means how many feedback items are ascribed to one problematic item, and 2) the severity, i.e., how strong the negative feeling/emotion is when the consumer is writing this feedback. By crossing these two, plotting the portion on the X-axis and the severity on the Y-axis, we are able to find which typical problems score high in both portion and severity. The higher a problem scores, the more urgently it should be solved, as it means that more people write stronger negative feelings in feedback regarding this problem. Moreover, by introducing a hidden Markov model to program our rating system, we are able to computerize the scoring system and to process thousands of feedback items in a short period of time, which is efficient and accurate enough for industrial purposes. Keywords: computerized scoring system, feeling/emotion of consumer feedback, referential activity, text mining
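The portion-times-severity ranking can be sketched directly. The problem codes and severity ratings below are invented for illustration, not items from the company's 200-item list:

```python
from collections import defaultdict

# Invented feedback items: (problem code, severity rating on the 1-5 scale).
feedback = [
    ("login_failure", 5), ("login_failure", 4), ("login_failure", 5),
    ("slow_loading", 3), ("slow_loading", 2),
    ("ui_confusing", 2),
]

by_problem = defaultdict(list)
for problem, severity in feedback:
    by_problem[problem].append(severity)

total = len(feedback)
# For each problem: (code, portion of all feedback, mean severity).
ranked = sorted(
    ((p, len(s) / total, sum(s) / len(s)) for p, s in by_problem.items()),
    key=lambda t: t[1] * t[2],  # urgency = portion * mean severity
    reverse=True,
)
# ranked[0] is the problem drawing the largest share of the strongest
# negative feeling, i.e., the top-right quadrant of the portion/severity plot.
```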
Procedia PDF Downloads 177
823 Developing Gifted Students’ STEM Career Interest
Authors: Wing Mui Winnie So, Tian Luo, Zeyu Han
Abstract:
To fully explore and develop the potential of gifted students systematically and strategically by providing them with opportunities to receive education at appropriate levels, schools in Hong Kong are encouraged to adopt the "Three-Tier Implementation Model" to plan and implement school-based gifted education, with Level Three referring to the provision of learning opportunities for exceptionally gifted students in the form of specialist training outside the school setting by post-secondary institutions, non-government organisations, professional bodies and technology enterprises. Due to growing concern worldwide about low interest among students in pursuing STEM (Science, Technology, Engineering, and Mathematics) careers, cultivating and boosting STEM career interest has become an emerging research focus. Although numerous studies have explored its critical contributors, little research has examined the effectiveness of comprehensive interventions such as 'studying with a STEM professional'. This study aims to examine the effect on gifted students' career interest of their participation in an off-school support programme designed and supervised by a team of STEM educators and STEM professionals from a university. Gifted students were provided with opportunities and tasks to experience STEM career topics that are not included in the school syllabus, and to experience how to think and work like a STEM professional in their learning. Participants were 40 primary school students joining the intervention programme outside the normal school setting. Research methods included the STEM career interest survey and drawing tasks supplemented with writing before and after the programme, as well as interviews before the end of the programme.
The semi-structured interviews focused on students' views regarding STEM professionals; what it is like to learn with a STEM professional; what it is like to work and think like a STEM professional; and students' STEM identity and career interest. The changes in gifted students' STEM career interest and its well-recognised significant contributors, for example, STEM stereotypes, self-efficacy for STEM activities, and STEM outcome expectations, were collectively examined from the pre- and post-surveys using a t-test. Thematic analysis was conducted on the interview records to explore how the intervention of studying with a STEM professional can help students understand STEM careers, build a STEM identity, and learn how to think and work like a STEM professional. Results indicated a significant difference in STEM career interest before and after the intervention. The influencing mechanism was also identified from the measurement of the related contributors and the analysis of the drawings and interviews. The potential of off-school support programmes supervised by STEM educators and professionals to develop gifted students' STEM career interest is argued to merit further exploration in future research and practice. Keywords: gifted students, STEM career, STEM education, STEM professionals
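The pre/post comparison described above is a paired t-test on each measured contributor. A sketch with hypothetical Likert-scale scores for ten students (not the study's data):

```python
import math

# Hypothetical mean interest scores (1-5 Likert scale) before and after
# the intervention, one pair per student. Values are invented.
pre =  [3.1, 2.8, 3.5, 2.9, 3.0, 3.3, 2.7, 3.4, 3.2, 2.6]
post = [3.8, 3.2, 3.9, 3.5, 3.4, 3.6, 3.1, 3.9, 3.7, 3.0]

diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))  # paired t statistic, df = n - 1
# With df = 9, |t| > 2.262 indicates p < 0.05 (two-tailed).
significant = abs(t_stat) > 2.262
```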
Procedia PDF Downloads 76
822 Rain Gauges Network Optimization in Southern Peninsular Malaysia
Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno
Abstract:
Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimates of areal rainfall for flood modelling and prediction. One study showed that, even when using lumped models for flood forecasting, a proper gauge network can significantly improve the results. Therefore, the existing rainfall network in Johor must be optimized and redesigned in order to meet the required level of accuracy preset by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure depends not only on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature and wind speed data during the monsoon season (November – February) for the period 1975 – 2008. Three different semivariogram models, Spherical, Gaussian and Exponential, were used, and their performances were compared in this study. A cross-validation technique was applied to compute the errors, and the results showed that the exponential model is the best semivariogram. It was found that the proposed method was satisfied by a network of 64 rain gauges with the minimum estimated variance; 20 of the existing ones were removed and relocated. An existing network may contain redundant stations that make little or no contribution to the network's performance in providing quality data. Therefore, two different cases were considered in this study.
In the first case, the removed stations were optimally relocated to new locations to investigate their influence on the calculated estimated variance; the second case explored the possibility of relocating all 84 existing stations to new locations to determine the optimal positions. The relocation of the stations in both cases showed that the new optimal locations reduce the estimated variance, proving that location plays an important role in determining the optimal network. Keywords: geostatistics, simulated annealing, semivariogram, optimization
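The simulated-annealing search over station subsets can be sketched with a simple spread-based proxy standing in for the kriging estimation variance (the coordinates, cost function, and cooling schedule are all illustrative assumptions, not the study's geostatistical model):

```python
import math
import random

random.seed(1)
# 84 hypothetical station coordinates (km) in a 100 x 100 region.
sites = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(84)]
K = 64  # target network size, matching the optimized Johor network

def variance_proxy(selected):
    """Stand-in for the kriging estimation variance: lower when the
    selected stations are well spread out (close pairs are penalized)."""
    pts = [sites[i] for i in selected]
    return sum(1.0 / (1e-3 + math.dist(p, q))
               for i, p in enumerate(pts) for q in pts[i + 1:])

current = random.sample(range(84), K)
cost_current = variance_proxy(current)
initial_cost = cost_current
best, best_cost = list(current), cost_current
temp = 1.0
for _ in range(3000):
    # Neighbour move: swap one selected station for an unselected one.
    cand = list(current)
    cand[random.randrange(K)] = random.choice(
        [i for i in range(84) if i not in current])
    cost_cand = variance_proxy(cand)
    delta = cost_cand - cost_current
    # Accept improvements always, worsenings with Boltzmann probability.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        current, cost_current = cand, cost_cand
        if cost_current < best_cost:
            best, best_cost = list(current), cost_current
    temp *= 0.999  # geometric cooling
```

In the study itself the cost being minimized is the kriging estimation variance under the fitted exponential semivariogram; that evaluation would simply replace `variance_proxy`.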
Procedia PDF Downloads 304
821 Clubhouse: A Minor Rebellion against the Algorithmic Tyranny of the Majority
Authors: Vahid Asadzadeh, Amin Ataee
Abstract:
Since the advent of social media, there has been a wave of optimism among researchers and civic activists about the influence of virtual networks on the democratization process, an optimism that has gradually waned. One of the lesser-known concerns is how to increase the possibility of hearing the voices of different minorities. According to the theory of media logic, the media, using their technological capabilities, act as a structure through which events and ideas are interpreted. Social media, through the use of machine learning and algorithms, has formed a kind of structure in which the voices of minorities and less popular topics are lost amid the commotion of the trends. In fact, the recommender systems and algorithms used in social media are designed to help promote trends and make popular content more popular, while content that belongs to minorities is constantly marginalized. As social networks gradually play a more active role in politics, the possibility of freely participating in the reproduction and reinterpretation of structures in general, and political structures in particular (as Laclau and Mouffe had in mind), can be considered a criterion of democracy in action. The point is that the media logic of virtual networks is shaped by the rule, and even the tyranny, of the majority, and this logic does not make it possible to design a self-founding and self-revolutionary model of democracy. In other words, today's social networks, though seemingly full of variety, are governed by the logic of homogeneity, and they do not allow for multiplicity as is the case in immanent radical democracies (influenced by Gilles Deleuze). However, with the emergence and increasing popularity of Clubhouse as a new social medium, there seems to be a shift in the social media space: the diminishing role of algorithms and recommender systems as content-delivery interfaces.
As a result, in Clubhouse the voices of minorities are better heard, and the diversity of political tendencies manifests itself more clearly. The purpose of this article is, first, to show how social networks serve the elimination of minorities in general, and second, to argue that the media logic of social networks must adapt to new interpretations of democracy that give more space to minorities and human rights. Finally, the article shows how Clubhouse serves these new interpretations of democracy, at least in a minimal way. To achieve these goals, the article uses a descriptive-analytical method: first, the relation between media logic and postmodern democracy is examined; then the political economy of popularity in social media and its conflict with democracy is discussed; finally, it explores how Clubhouse provides a new horizon for the concepts embodied in radical democracy, a horizon that more effectively serves the rights of minorities and human rights in general. Keywords: algorithmic tyranny, Clubhouse, minority rights, radical democracy, social media
Procedia PDF Downloads 147820 Assessment of Pedestrian Comfort in a Portuguese City Using Computational Fluid Dynamics Modelling and Wind Tunnel
Authors: Bruno Vicente, Sandra Rafael, Vera Rodrigues, Sandra Sorte, Sara Silva, Ana Isabel Miranda, Carlos Borrego
Abstract:
Wind comfort for pedestrians is an important condition in urban areas. In Portugal, a country with 900 km of coastline, the wind direction is predominantly north-northwest, with an average speed of 2.3 m·s⁻¹ (at 2 m height). As a result, several city authorities have been requesting studies of pedestrian wind comfort for new urban areas/buildings, as well as to mitigate wind discomfort issues related to existing structures. This work covers the efficiency evaluation of a set of measures to reduce the wind speed in an outdoor auditorium (open space) located in a coastal Portuguese urban area. These measures include the construction of barriers, placed upstream and downstream of the auditorium, and the planting of trees upstream of the auditorium. The auditorium is constructed in the form of a porch, aligned with the north direction, which drives the wind flow within the auditorium, promoting channelling effects and increasing its speed, causing discomfort to the users of this structure. To perform the wind comfort assessment, two approaches were used: i) a set of experiments in a wind tunnel (physical approach), with a representative mock-up of the study area; ii) application of the CFD (Computational Fluid Dynamics) model VADIS (numerical approach). Both approaches were used to simulate the baseline scenario and the scenarios considering the set of measures. The physical approach was conducted through a quantitative method, using a hot-wire anemometer, and through a qualitative analysis (visualizations), using laser technology and a fog machine. Both numerical and physical approaches were performed for three different velocities (2, 4 and 6 m·s⁻¹) and two different directions (north-northwest and south), corresponding to the prevailing wind speed and direction of the study area. The numerical results show an effective reduction (with a maximum value of 80%) of the wind speed inside the auditorium through the application of the proposed measures. 
A wind speed reduction in the range of 20% to 40% was obtained around the audience area for a wind direction from north-northwest. For southern winds, the wind speed in the audience zone was reduced by 60% to 80%. Despite that, for southern winds the design of the barriers generated additional hot spots (high wind speed), namely at the entrance to the auditorium. Thus, a change in the location of the entrance would minimize these effects. The results obtained in the wind tunnel compared well with the numerical data, also revealing the high efficiency of the proposed measures (for both wind directions). Keywords: urban microclimate, pedestrian comfort, numerical modelling, wind tunnel experiments
Procedia PDF Downloads 232819 Flow Links Curiosity and Creativity: The Mediating Role of Flow
Authors: Nicola S. Schutte, John M. Malouff
Abstract:
Introduction: Curiosity is a positive emotion and motivational state that consists of the desire to know. Curiosity comprises several related dimensions, including a desire for exploration, deprivation sensitivity, and stress tolerance. Creativity involves generating novel and valuable ideas or products. How curiosity may prompt greater creativity remains to be investigated. The phenomenon of flow may link curiosity and creativity. Flow is characterized by intense concentration and absorption and gives rise to optimal performance. Objective of Study: The objective of the present study was to investigate whether the phenomenon of flow may link curiosity with creativity. Methods and Design: Fifty-seven individuals from Australia (45 women and 12 men, mean age 35.33, SD=9.4) participated. Participants were asked to design a program encouraging residents in a local community to conserve water and to record the elements of their program in writing. Participants were then asked to rate their experience as they developed and wrote about their program. Participants rated their experience on the Dimensional Curiosity Measure sub-scales assessing the exploration, deprivation sensitivity, and stress tolerance facets of curiosity, and on the Flow Short Scale. Reliability of the measures as assessed by Cronbach's alpha was as follows: Exploration Curiosity = .92, Deprivation Sensitivity Curiosity = .66, Stress Tolerance Curiosity = .93, and Flow = .96. Two raters independently coded each participant's water conservation program description for creativity. The mixed-model intraclass correlation coefficient for the two sets of ratings was .73. The mean of the two ratings produced the final creativity score for each participant. Results: During the experience of designing the program, all three types of curiosity were significantly associated with flow. 
Pearson r correlations were as follows: Exploration Curiosity and flow, r = .68 (higher Exploration Curiosity was associated with more flow); Deprivation Sensitivity Curiosity and flow, r = .39 (higher Deprivation Sensitivity Curiosity was associated with more flow); and Stress Tolerance Curiosity and flow, r = .44 (more stress tolerance in relation to novelty and exploration was associated with more flow). Greater experience of flow was significantly associated with greater creativity in designing the water conservation program, r = .39. The direct associations between dimensions of curiosity and creativity did not reach significance. Even though these direct relationships were not significant, the indirect relationships between dimensions of curiosity and creativity, through the mediating effect of the experience of flow, were significant. Mediation analysis using PROCESS showed that flow linked Exploration Curiosity with creativity, standardized beta = .23, 95%CI [.02, .25] for the indirect effect; Deprivation Sensitivity Curiosity with creativity, standardized beta = .14, 95%CI [.04, .29] for the indirect effect; and Stress Tolerance Curiosity with creativity, standardized beta = .13, 95%CI [.02, .27] for the indirect effect. Conclusions: When engaging in an activity, higher levels of curiosity are associated with greater flow. More flow is associated with higher levels of creativity. Programs intended to increase flow or creativity might build on these findings and also explore causal relationships. Keywords: creativity, curiosity, flow, motivation
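The mediation analysis reported here (via the PROCESS macro) estimates the indirect effect as the product of path a (curiosity to flow) and path b (flow to creativity, controlling for curiosity), with a percentile bootstrap confidence interval. A plain-Python sketch of that computation follows; the function names and any data fed to it are illustrative, not the study's.

```python
import random

def ols_slope(x, y):
    """Slope of the simple regression of y on x (intercept handled by centering)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def indirect_effect(x, m, y):
    """a*b indirect effect of x on y through mediator m. The partial slope b
    (m in the model y ~ x + m) is obtained by Frisch-Waugh residualization."""
    a = ols_slope(x, m)                            # path a: x -> m
    m_res = [mi - a * xi for mi, xi in zip(m, x)]  # m with x partialled out
    cy = ols_slope(x, y)
    y_res = [yi - cy * xi for yi, xi in zip(y, x)] # y with x partialled out
    b = ols_slope(m_res, y_res)                    # path b: m -> y given x
    return a * b

def bootstrap_ci(x, m, y, reps=2000, seed=1):
    """Percentile bootstrap 95% CI for the indirect effect (PROCESS-style)."""
    random.seed(seed)
    n, stats = len(x), []
    for _ in range(reps):
        idx = [random.randrange(n) for _ in range(n)]
        stats.append(indirect_effect([x[i] for i in idx],
                                     [m[i] for i in idx],
                                     [y[i] for i in idx]))
    stats.sort()
    return stats[int(0.025 * reps)], stats[int(0.975 * reps)]
```

An indirect effect whose bootstrap interval excludes zero is read as significant mediation, which is the criterion applied in the abstract.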
Procedia PDF Downloads 184818 Role of Yeast-Based Bioadditive on Controlling Lignin Inhibition in Anaerobic Digestion Process
Authors: Ogemdi Chinwendu Anika, Anna Strzelecka, Yadira Bajón-Fernández, Raffaella Villa
Abstract:
Anaerobic digestion (AD) has been used since time immemorial to treat organic wastes in the environment, especially in sewage and wastewater treatment. Recently, the rising demand for renewable energy from organic matter has expanded the AD substrate spectrum to include a wider variety of organic materials, such as agricultural residues and farm manure, which is generated at around 140 billion metric tons annually worldwide. The problem, however, is that agricultural wastes are composed of heterogeneous materials that are difficult to degrade, particularly lignin, which makes up about 0–40% of the total lignocellulose content. This study aimed to evaluate the impact of varying concentrations of lignin on biogas yields and their subsequent response to a commercial yeast-based bioadditive in batch anaerobic digesters. The experiments were carried out in batches for a retention time of 56 days with different lignin concentrations (200 mg, 300 mg, 400 mg, 500 mg and 600 mg) subjected to different conditions, to first determine the bioadditive concentration that was most favourable for overall process improvement and yield increase. The batch experiments were set up in 130 mL bottles with a working volume of 60 mL, maintained at 38°C in an incubator shaker (150 rpm). Digestate obtained from a local plant operating at mesophilic conditions was used as the starting inoculum, and commercial kraft lignin was used as feedstock. Biogas measurements were carried out using the displacement method and were corrected to standard temperature and pressure using standard gas equations. Furthermore, the modified Gompertz equation was used to non-linearly regress the resulting data, estimating gas production potential, production rates and the duration of lag phases as indicators of the degree of lignin inhibition. 
The results showed that lignin had a strong inhibitory effect on the AD process: the higher the lignin concentration, the greater the inhibition. The modelling also showed that the rates of gas production were influenced by the lignin concentration added to the system: the higher the lignin concentration in mg (0, 200, 300, 400, 500 and 600), the lower the respective rate of gas production in mL/gVS·day (3.3, 2.2, 2.3, 1.6, 1.3 and 1.1), although the rate at 300 mg was 0.1 mL/gVS·day higher than at 200 mg. The impact of the yeast-based bioadditive on the production rate was most significant at 400 mg and 500 mg, where the rate improved by 0.1 mL/gVS·day and 0.2 mL/gVS·day, respectively. This indicates that agricultural residues with higher lignin content may be more responsive to inhibition alleviation by the yeast-based bioadditive; therefore, further study of its application to the AD of agricultural residues with high lignin content is the next step in this research. Keywords: anaerobic digestion, renewable energy, lignin valorisation, biogas
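The modified Gompertz model used for the regression above has a standard closed form; a minimal sketch follows, with illustrative parameter values rather than the study's fitted ones. Stronger lignin inhibition would appear as a lower fitted maximum rate Rm and/or a longer fitted lag phase lam.

```python
import math

def modified_gompertz(t, P, Rm, lam):
    """Cumulative biogas B(t) = P * exp(-exp(Rm*e/P * (lam - t) + 1)),
    with P the production potential (mL/gVS), Rm the maximum production
    rate (mL/gVS/day) and lam the lag-phase duration (days)."""
    return P * math.exp(-math.exp(Rm * math.e / P * (lam - t) + 1.0))

# Illustrative comparison over the 56-day retention time: a low-lignin digester
# (fast rate, short lag) versus a strongly inhibited one (slow rate, long lag).
low_lignin = [modified_gompertz(t, P=100.0, Rm=3.3, lam=2.0) for t in range(57)]
high_lignin = [modified_gompertz(t, P=100.0, Rm=1.1, lam=8.0) for t in range(57)]
```

Fitting P, Rm and lam to measured cumulative-biogas curves (e.g. by nonlinear least squares) yields the per-treatment rates and lag phases reported in the abstract.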
Procedia PDF Downloads 92817 Resilience-Vulnerability Interaction in the Context of Disasters and Complexity: Study Case in the Coastal Plain of Gulf of Mexico
Authors: Cesar Vazquez-Gonzalez, Sophie Avila-Foucat, Leonardo Ortiz-Lozano, Patricia Moreno-Casasola, Alejandro Granados-Barba
Abstract:
In the last twenty years, academic and scientific literature has focused on understanding the processes and factors behind the vulnerability and resilience of coastal social-ecological systems. Some scholars argue that resilience and vulnerability are isolated concepts due to their epistemological origins, while others note the existence of a strong resilience-vulnerability relationship. Here we present an ordinal logistic regression model based on an analytical framework of the dynamic resilience-vulnerability interaction along the adaptive cycle of complex systems and the phases of the disaster process (during, recovery and learning). In this way, we demonstrate that: 1) during the disturbance, absorptive capacity (resilience as a core of attributes) and external response capacity explain the probability that household capitals suffer less damage, and exposure sets the thresholds on the amount of disturbance that households can absorb; 2) at recovery, absorptive capacity and external response capacity explain the probability that household capitals recover faster (resilience as an outcome) from damage; and 3) at learning, adaptive capacity (resilience as a core of attributes) explains the probability of households adopting adaptation measures based on the enhancement of physical capital. As a result, during the disturbance phase, exposure has the greatest weight in the probability of capital damage, and households with absorptive and external response capacity elements absorbed the impact of floods better than households without these elements. At the recovery phase, households with absorptive and external response capacity showed a faster recovery of their capital; however, the damage sets the thresholds of recovery time. More importantly, diversity in financial capital increases the probability of recovering other capitals, but it becomes a liability, so that the probability of recovering household finances only over a longer time increases. 
At the learning-reorganizing phase, adaptation (modifications to the house) increases the probability of less damage to physical capital; however, its effect is small. In conclusion, resilience is both an outcome and a core of attributes that interacts with vulnerability along the adaptive cycle and the disaster process phases. Absorptive capacity can diminish the damage caused by floods; however, when exposure exceeds certain thresholds, both absorptive and external response capacity are insufficient. In the same way, absorptive and external response capacity diminish the recovery time of capital, but the damage sets the thresholds beyond which households are not capable of recovering their capital. Keywords: absorptive capacity, adaptive capacity, capital, floods, recovery-learning, social-ecological systems
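An ordinal logistic (proportional-odds) model of the kind used here maps a linear predictor and a set of ordered cutpoints to category probabilities. The sketch below uses hypothetical coefficients and a hypothetical three-level damage outcome, purely to illustrate the model form.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordinal_probs(x_beta, cutpoints):
    """Proportional-odds model: P(Y <= j) = sigmoid(theta_j - x*beta).
    cutpoints must be sorted ascending; returns one probability per category
    (len(cutpoints) + 1 categories in total)."""
    cum = [sigmoid(theta - x_beta) for theta in cutpoints] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Hypothetical example: three damage categories (low/medium/high) driven by an
# exposure score; higher exposure shifts probability mass toward "high damage".
low_exposure = ordinal_probs(x_beta=-1.0, cutpoints=[-0.5, 1.5])
high_exposure = ordinal_probs(x_beta=2.0, cutpoints=[-0.5, 1.5])
```

In the study's terms, a positive coefficient on exposure raises the probability of the higher damage categories, which is how "exposure has the greatest weight" reads in an ordinal model.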
Procedia PDF Downloads 134816 A New Measurement for Assessing Constructivist Learning Features in Higher Education: Lifelong Learning in Applied Fields (LLAF) Tempus Project
Authors: Dorit Alt, Nirit Raichel
Abstract:
Although university teaching is claimed to have a special task of supporting students in adopting ways of thinking and producing new knowledge anchored in scientific inquiry practices, it is argued that students' habits of learning are still overwhelmingly skewed toward passive acquisition of knowledge from authority sources rather than from collaborative inquiry activities. This form of instruction is criticized for encouraging students to acquire inert knowledge that can be used in instructional settings at best but cannot be transferred to complex real-life problem settings. In order to overcome this critical mismatch between current educational goals and instructional methods, the LLAF consortium (including 16 members from 8 countries) aims to develop updated instructional practices that put a premium on adaptability to the emerging requirements of present society. LLAF has created a practical guide for teachers containing updated pedagogical strategies and assessment tools based on the constructivist approach to learning. This presentation is limited to teacher education and to the project's contribution in providing a scale designed to measure the extent to which constructivist activities are efficiently applied in the learning environment. A mixed-methods approach was implemented in two phases to construct the scale. The first phase included a qualitative content analysis involving both deductive and inductive category applications of students' observations. The results foregrounded eight categories: knowledge construction, authenticity, multiple perspectives, prior knowledge, in-depth learning, teacher-student interaction, social interaction and cooperative dialogue. The students' descriptions of their classes were formulated as 36 items. The second phase employed structural equation modeling (SEM). 
The scale was administered to 597 undergraduate students. The goodness of fit of the data to the structural model yielded sufficient fit results. This research extends the body of literature by adding a category of in-depth learning, which emerged from the content analysis. Moreover, the theoretical category of social activity has been extended to include two distinct factors: cooperative dialogue and social interaction. Implications of these findings for the LLAF project are discussed. Keywords: constructivist learning, higher education, mixed methodology, structural equation modeling
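Goodness of fit in SEM is commonly summarized by indices such as the RMSEA; a minimal sketch of that computation follows. The chi-square and degrees of freedom are hypothetical (the abstract reports only that fit was sufficient), and the exact formula variant (n vs. n-1) differs slightly across software packages.

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    Values around .06-.08 or below are conventionally read as acceptable fit;
    some packages divide by n rather than n - 1."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical model fit for a sample of 597 respondents (the study's n);
# the chi-square and df values below are made up for illustration.
fit = rmsea(chi2=980.0, df=558, n=597)
```

A researcher would report this alongside other indices (e.g. CFI, SRMR) when judging whether the 36-item, eight-category structure fits the data.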
Procedia PDF Downloads 315815 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves
Authors: Shengnan Chen, Shuhua Wang
Abstract:
Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority for society, government and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, developing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including reservoir geological data, reservoir geophysical data, well completion data and production data for thousands of wells is first established to discover valuable insights and knowledge related to tight oil reserve development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize the variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. 
Different data mining techniques, such as artificial neural networks, fuzzy logic and other machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, leading to better designs, higher oil recovery and greater economic return for future wells in unconventional oil reserves. Keywords: big data, artificial intelligence, enhanced oil recovery, unconventional oil reserves
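Of the methods named in the abstract, K-means clustering is the simplest to sketch. Below is a minimal pure-Python Lloyd's algorithm for 2-D observations; it is illustrative only (a real workflow over thousands of wells would use a library implementation and higher-dimensional feature vectors).

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm for 2-D points: returns (centroids, labels)."""
    random.seed(seed)
    centroids = random.sample(points, k)  # initialize from the data
    labels = []
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        labels = [min(range(k),
                      key=lambda j: (p[0] - centroids[j][0]) ** 2
                                  + (p[1] - centroids[j][1]) ** 2)
                  for p in points]
        # Update step: move each centroid to the mean of its members.
        new = []
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                new.append((sum(p[0] for p in members) / len(members),
                            sum(p[1] for p in members) / len(members)))
            else:
                new.append(centroids[j])  # keep an empty cluster's centroid
        if new == centroids:  # converged
            break
        centroids = new
    return centroids, labels
```

In the workflow described, each "point" would be a well's feature vector (e.g. completion and production attributes), and the resulting clusters group wells with similar behaviour before the later modeling steps.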
Procedia PDF Downloads 285814 Fine-Scale Modeling the Influencing Factors of Multi-Time Dimensions of Transit Ridership at Station Level: The Study of Guangzhou City
Authors: Dijiang Lyu, Shaoying Li, Zhangzhi Tan, Zhifeng Wu, Feng Gao
Abstract:
China is currently experiencing one of the most rapid urban rail transit expansions in the world. The purpose of this study is to finely model the factors influencing transit ridership at multiple time dimensions within transit stations' pedestrian catchment areas (PCAs) in Guangzhou, China. The study was based on multi-source spatial data, including smart card data, high-spatial-resolution images, points of interest (POIs), real-estate online data and building height data. Eight multiple linear regression models were created at station level using the backward stepwise method and a Geographic Information System (GIS). According to the Chinese code for classification of urban land use and planning standards of development land, residential land use was divided into three categories: first-level (e.g., villas), second-level (e.g., communities) and third-level (e.g., urban villages). The study concluded that: (1) four factors (a CBD dummy, the number of feeder bus routes, the number of entrances or exits, and the years of station operation) were positively correlated with transit ridership, while the areas of green land use and water land use were negatively correlated. (2) The areas of education land use and of second-level and third-level residential land use were highly connected to the average morning peak boarding and evening peak alighting ridership, whereas the area of commercial land use and the average building height were significantly positively associated with the average morning peak alighting and evening peak boarding ridership. (3) The area of second-level residential land use was rarely correlated with ridership in the other regression models, because private car ownership in Guangzhou is still high: some residents living in communities around the stations commute by transit at peak times, but others are much more willing to drive their own cars at non-peak times. 
The area of third-level residential land use, such as urban villages, was highly positively correlated with ridership in all models, indicating that residents of third-level residential land use are the main passenger source of the Guangzhou Metro. (4) The diversity of land use was found to have a significant impact on passenger flow on weekends but was unrelated to weekday ridership. The findings can be useful for station planning, management and policymaking. Keywords: fine-scale modeling, Guangzhou city, multi-time dimensions, multi-source spatial data, transit ridership
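The backward stepwise procedure behind these models repeatedly refits an ordinary least-squares regression after dropping the weakest predictor. The core fitting step can be sketched via the normal equations; this is a toy solver with made-up coefficients for illustration, not the study's GIS workflow.

```python
def ols_fit(X, y):
    """Solve the normal equations (X'X)b = X'y by Gaussian elimination with
    partial pivoting. X is a list of rows; include a leading 1.0 in each row
    for the intercept."""
    p = len(X[0])
    # Build X'X and X'y.
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(p)]
         for i in range(p)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(p)]
    # Forward elimination.
    for i in range(p):
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            for c in range(i, p):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    # Back substitution.
    coef = [0.0] * p
    for i in range(p - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, p))) / A[i][i]
    return coef
```

Backward stepwise selection would wrap this: fit the full model, remove the least significant predictor, refit, and repeat until every remaining predictor meets the retention criterion.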
Procedia PDF Downloads 142813 A 1T1R Nonvolatile Memory with Al/TiO₂/Au and Sol-Gel Processed Barium Zirconate Nickelate Gate in Pentacene Thin Film Transistor
Authors: Ke-Jing Lee, Cheng-Jung Lee, Yu-Chi Chang, Li-Wen Wang, Yeong-Her Wang
Abstract:
To avoid the cross-talk issue of a resistive random access memory (RRAM)-only cell, a one-transistor, one-resistor (1T1R) architecture with a TiO₂-based RRAM cell connected to a solution-processed barium zirconate nickelate (BZN) organic thin-film transistor (OTFT) is successfully demonstrated. The OTFT was fabricated on a glass substrate. Aluminum (Al) as the gate electrode was deposited via a radio-frequency (RF) magnetron sputtering system. The barium acetate, zirconium n-propoxide and nickel(II) acetylacetonate precursors were combined using the sol-gel method. After the BZN solution was prepared, it was spin-coated onto the Al/glass substrate as the gate dielectric. The BZN layer was baked at 100 °C for 10 minutes under ambient air conditions. The pentacene thin film was thermally evaporated onto the BZN layer at a deposition rate of 0.08 to 0.15 nm/s. Finally, a gold (Au) electrode was deposited using the RF magnetron sputtering system and defined through shadow masks as both the source and drain. The channel length and width of the transistors were 150 and 1500 μm, respectively. For the 1T1R configuration, the RRAM device was fabricated directly on the drain electrode of the TFT. A simple metal/insulator/metal structure consisting of Al/TiO₂/Au was fabricated: first, Au was deposited as the bottom electrode of the RRAM device by the RF magnetron sputtering system; then, the TiO₂ layer was deposited on the Au electrode by sputtering; finally, Al was deposited as the top electrode. The electrical performance of the BZN OTFT was studied, showing superior transfer characteristics with a low threshold voltage of −1.1 V, a good saturation mobility of 5 cm²/V·s and a low subthreshold swing of 400 mV/decade. The integration of the BZN OTFT and TiO₂ RRAM devices was finally completed to form the 1T1R configuration, with a low power consumption of 1.3 μW, a low operation current of 0.5 μA and reliable data retention. 
Based on the I-V characteristics, the different polarities of bipolar switching are found to be determined by the compliance current, reflecting the different distributions of internal oxygen vacancies in the RRAM and 1T1R devices. This phenomenon is well explained by the proposed mechanism model. These results make the 1T1R architecture promising for practical applications in low-power active-matrix flat-panel displays. Keywords: one transistor and one resistor (1T1R), organic thin-film transistor (OTFT), resistive random access memory (RRAM), sol-gel
Procedia PDF Downloads 354