Search results for: construction risk assessment model
563 Exploring Tweeters’ Concerns and Opinions about FIFA Arab Cup 2021: An Investigation Study
Authors: Md. Rafiul Biswas, Uzair Shah, Mohammad Alkayal, Zubair Shah, Othman Althawadi, Kamila Swart
Abstract:
Background: Social media platforms play a significant role in the mediated consumption of sport, especially for sport mega-events. The characteristics of Twitter data (e.g., user mentions, retweets, likes, #hashtags) bring users together in one arena and spread information widely and quickly. Analysis of Twitter data can reflect public attitudes, behavior, and sentiment toward a specific event on a larger scale than traditional surveys. Qatar will be the first Arab country to host the mega sports event FIFA World Cup 2022 (Q22), and it hosted the FIFA Arab Cup 2021 (FAC21) as a preparation for the mega-event. Objectives: This study investigates public sentiments and experiences about FAC21 and provides insights to enhance the public experience for the upcoming Q22. Method: FAC21-related tweets were downloaded using the Twitter Academic Research API between 01 October 2021 and 18 February 2022. Tweets were divided into three periods: before FAC21, T1 (01 Oct 2021 to 29 Nov 2021); during, T2 (30 Nov 2021 to 18 Dec 2021); and after, T3 (19 Dec 2021 to 18 Feb 2022). The collected tweets were preprocessed in several steps to prepare them for analysis: (1) duplicates and retweets were removed; (2) emojis, punctuation, and stop words were removed; (3) tweets were normalized using word lemmatization. Then, rule-based classification was applied to remove irrelevant tweets. Next, the twitter-XLM-roBERTa-base model from Hugging Face was applied to identify the sentiment of the tweets. Further, state-of-the-art BERTopic modeling will be applied to identify trending topics over the different periods. Results: We downloaded 8,669,875 tweets posted by 2,728,220 unique users in different languages. Of those, 819,813 unique English tweets were selected for this study. After splitting into the three periods, 541,630, 138,876, and 139,307 tweets were from T1, T2, and T3, respectively. Most of the sentiments were neutral, around 60% in each period.
However, the rate of negative sentiment (23%) was high compared to positive sentiment (18%). The analysis indicates negative concerns about FAC21. Therefore, we will apply BERTopic to identify public concerns. This study will permit the investigation of people's expectations before FAC21 (e.g., stadiums, transportation, accommodation, visas, tickets, travel, and other facilities) and ascertain whether these were met. Moreover, it will highlight public expectations and concerns. The findings of this study can assist the event organizers in enhancing implementation plans for Q22. Furthermore, this study can support policymakers in aligning strategies and plans to achieve the best possible outcomes.
Keywords: FIFA Arab Cup, FIFA, Twitter, machine learning
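The preprocessing steps described above (retweet and duplicate removal, emoji and punctuation stripping, stop-word filtering) can be sketched in a few lines. The snippet below is a minimal illustration only, not the authors' pipeline: the study used the Twitter Academic Research API, full lemmatization, and the twitter-XLM-roBERTa-base sentiment model, all of which are omitted here, and the tiny stop-word list is a placeholder.

```python
import re

STOP_WORDS = {"the", "a", "an", "is", "to", "of", "and", "in"}  # tiny illustrative stop-word list

def preprocess(tweets):
    """Clean a list of raw tweets following the steps described in the abstract."""
    seen, cleaned = set(), []
    for text in tweets:
        if text.startswith("RT @"):          # (1) drop retweets
            continue
        # (2) remove emojis/non-ASCII symbols, then punctuation, and lowercase
        text = text.encode("ascii", "ignore").decode()
        text = re.sub(r"[^\w\s#@]", " ", text).lower()
        # (2) remove stop words; (3) lemmatization (e.g., NLTK/spaCy) would go here
        tokens = [t for t in text.split() if t not in STOP_WORDS]
        key = " ".join(tokens)
        if key and key not in seen:          # (1) drop duplicates
            seen.add(key)
            cleaned.append(key)
    return cleaned
```

The cleaned strings would then be passed to the rule-based relevance filter and the sentiment classifier.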
Procedia PDF Downloads 100

562 Learning to Translate by Learning to Communicate to an Entailment Classifier
Authors: Szymon Rutkowski, Tomasz Korbak
Abstract:
We present a reinforcement-learning-based method for training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, its learning procedure lacks psychological plausibility: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language's 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and the premise, which are then passed to the classifier. The translator is rewarded for the classifier's performance in determining entailment between the sentences translated into the classifier's native language. The translator's performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation have been proposed before, there are a number of improvements we introduce. While prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with semantic similarity, thus enforcing a basic understanding of the role played by compositionality.
It has been shown that models trained to recognize textual entailment produce high-quality general-purpose sentence embeddings transferable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) for the translator's policy optimization and found that our attempts yield some promising improvements over previous approaches to reinforcement-learning-based zero-shot machine translation.
Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning
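The reward structure described above can be illustrated with a toy REINFORCE loop. This is a hedged sketch only: a stub function stands in for the pretrained entailment classifier, and the 'translations' are just four discrete actions rather than generated sentences, nothing like the actual sequence model in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Stub "entailment classifier" reward: pretend action 2 is the translation the
# target-language classifier labels correctly. Purely illustrative.
def reward(action):
    return 1.0 if action == 2 else 0.0

logits = np.zeros(4)                      # translator policy over 4 candidate outputs
lr, baseline = 0.5, 0.0
for step in range(200):
    p = softmax(logits)
    a = rng.choice(4, p=p)                # sample a "translation"
    r = reward(a)                         # classifier's success is the reward
    baseline += 0.1 * (r - baseline)      # running-average baseline reduces variance
    grad = -p
    grad[a] += 1.0                        # gradient of log pi(a)
    logits += lr * (r - baseline) * grad  # REINFORCE update
```

After training, the policy concentrates its probability mass on the rewarded action, mirroring how the translator learns outputs the classifier can use.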
Procedia PDF Downloads 128

561 Selective Extraction of Lithium from Native Geothermal Brines Using Lithium-ion Sieves
Authors: Misagh Ghobadi, Rich Crane, Karen Hudson-Edwards, Clemens Vinzenz Ullmann
Abstract:
Lithium is recognized as the critical energy metal of the 21st century, comparable in importance to coal in the 19th century and oil in the 20th century, and is often termed 'white gold'. Current global demand for lithium, estimated at 0.95-0.98 million metric tons (Mt) of lithium carbonate equivalent (LCE) annually in 2024, is projected to rise to 1.87 Mt by 2027 and 3.06 Mt by 2030. Despite anticipated short-term stability in supply and demand, meeting the forecasted 2030 demand will require the lithium industry to develop an additional capacity of 1.42 Mt of LCE annually, exceeding current planned and ongoing efforts. Brine resources constitute nearly 65% of global lithium reserves, underscoring the importance of exploring lithium recovery from underutilized sources, especially geothermal brines. However, conventional lithium extraction from brine deposits faces challenges due to its time-intensive process, low efficiency (30-50% lithium recovery), unsuitability for low lithium concentrations (<300 mg/L), and notable environmental impacts. Addressing these challenges, direct lithium extraction (DLE) methods have emerged as promising technologies capable of economically extracting lithium even from low-concentration brines (>50 mg/L) with high recovery rates (75-98%). However, most studies (around 70%) have focused on synthetic rather than native (natural) brines, with limited application of these approaches in real-world case studies or industrial settings. This study aims to bridge this gap by investigating a geothermal brine sample collected from a real case study site in the UK. A Mn-based lithium-ion sieve (LIS) adsorbent was synthesized and employed to selectively extract lithium from the sample brine. Adsorbents with a Li:Mn molar ratio of 1:1 demonstrated superior lithium selectivity and adsorption capacity.
Furthermore, the pristine Mn-based adsorbent was modified by doping with transition metals, resulting in enhanced lithium selectivity and adsorption capacity. The modified adsorbent exhibited a higher separation factor for lithium over major co-existing cations such as Ca, Mg, Na, and K, with separation factors exceeding 200. The adsorption behaviour was well described by the Langmuir model, indicating monolayer adsorption, and the kinetics followed a pseudo-second-order mechanism, suggesting chemisorption at the solid surface. Thermodynamically, negative ΔG° values and positive ΔH° and ΔS° values were observed, indicating the spontaneous and endothermic nature of the adsorption process.
Keywords: adsorption, critical minerals, DLE, geothermal brines, geochemistry, lithium, lithium-ion sieves
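As a hedged illustration of the Langmuir analysis mentioned above, the snippet below fits the linearized Langmuir isotherm, Ce/qe = Ce/qmax + 1/(KL·qmax), to synthetic equilibrium points. The parameter values are assumed for demonstration only and are not the study's measured data.

```python
import numpy as np

# Hypothetical equilibrium data generated for illustration only,
# not the measured values from the study.
q_max_true, K_L = 30.0, 0.05                     # assumed Langmuir parameters (mg/g, L/mg)
Ce = np.array([5.0, 20.0, 50.0, 100.0, 200.0])   # equilibrium concentrations, mg/L
qe = q_max_true * K_L * Ce / (1.0 + K_L * Ce)    # Langmuir: qe = qmax*KL*Ce/(1+KL*Ce)

# Linearized form: Ce/qe = Ce/qmax + 1/(KL*qmax); fit slope and intercept
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
q_max_fit = 1.0 / slope
K_L_fit = slope / intercept

print(q_max_fit, K_L_fit)   # recovers the assumed parameters
```

The same least-squares machinery extends to the pseudo-second-order kinetic fit (t/qt versus t), which is linear in the same way.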
Procedia PDF Downloads 46

560 Neuroprotection against N-Methyl-D-Aspartate-Induced Optic Nerve and Retinal Degeneration Changes by Philanthotoxin-343 to Alleviate Visual Impairments Involve Reduced Nitrosative Stress
Authors: Izuddin Fahmy Abu, Mohamad Haiqal Nizar Mohamad, Muhammad Fattah Fazel, Renu Agarwal, Igor Iezhitsa, Nor Salmah Bakar, Henrik Franzyk, Ian Mellor
Abstract:
Glaucoma is the global leading cause of irreversible blindness. Currently, the available treatment strategy only involves lowering intraocular pressure (IOP); however, the condition often progresses despite lowered or normal IOP in some patients. N-methyl-D-aspartate receptor (NMDAR) excitotoxicity often occurs in neurodegeneration-related glaucoma; thus, it is a relevant target for developing a therapy based on a neuroprotection approach. This study investigated the effects of Philanthotoxin-343 (PhTX-343), an NMDAR antagonist, on neuroprotection in NMDA-induced glaucoma to alleviate visual impairments. Male Sprague-Dawley rats were equally divided: groups 1 (control) and 2 (glaucoma) were intravitreally injected with phosphate-buffered saline (PBS) and NMDA (160 nM), respectively, while group 3 was pre-treated with PhTX-343 (160 nM) 24 hours prior to NMDA injection. Seven days post-treatment, rats were subjected to visual behavior assessments and subsequently euthanized to harvest their retina and optic nerve tissues for histological analysis and determination of nitrosative stress levels using a 3-nitrotyrosine ELISA. Visual behavior assessments via open field, object, and color recognition tests demonstrated poor visual performance in glaucoma rats, indicated by high exploratory behavior. PhTX-343 pre-treatment appeared to preserve visual abilities, as all test results were significantly improved (p < 0.05). H&E staining of the retina showed a marked reduction of ganglion cell layer thickness in the glaucoma group; in contrast, PhTX-343 significantly increased it 1.28-fold (p < 0.05). PhTX-343 also increased the number of cell nuclei per 100 μm² within the inner retina 1.82-fold compared to the glaucoma group (p < 0.05). Toluidine blue staining of optic nerve tissues showed that PhTX-343 reduced the degeneration changes compared to the glaucoma group, which exhibited vacuolation over all sections.
PhTX-343 also decreased retinal 3-nitrotyrosine concentration 1.74-fold compared to the glaucoma group (p < 0.05). All results in the PhTX-343 group were comparable to control (p > 0.05). We conclude that PhTX-343 protects against NMDA-induced changes and visual impairments in the rat model by reducing nitrosative stress levels.
Keywords: excitotoxicity, glaucoma, nitrosative stress, NMDA receptor, N-methyl-D-aspartate, philanthotoxin, visual behaviour
Procedia PDF Downloads 137

559 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap
Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui
Abstract:
As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool for reaching energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. The description of building operation in energy models, especially energy usage and users' behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study aims to discuss results on the calibration of residential building energy models using real operation data. Data are collected through a network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over a year and a half and cover building energy behavior – thermal and electrical, indoor environment, inhabitants' comfort, occupancy, occupants' behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (Pleiades software), where the buildings' features are implemented according to the buildings' thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features for each end use. These features are then compared with the collected post-occupancy data.
Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. Results of this study provide an analysis of the energy performance gap on an existing residential case study under deep retrofit actions. They highlight the impact of the different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy, and the building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights into the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.
Keywords: calibration, building energy modeling, performance gap, sensor network
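The step-by-step replacement of standardized inputs with field data can be sketched as follows. This is a toy stand-in for the Pleiades model, with hypothetical parameter values and a CV(RMSE) error metric; it is meant only to show the shape of the calibration loop, not the study's actual simulation.

```python
import math

# Toy monthly heating-energy model (kWh): purely illustrative, not the Pleiades model.
def simulate(params):
    base = [1200, 1100, 900, 600, 300, 150]                  # envelope-driven load per month
    return [base[m] * params["setpoint_factor"] * params["occupancy"]
            + params["dhw_kwh"] for m in range(6)]

def cv_rmse(sim, meas):
    """Coefficient of variation of the RMSE, a common calibration metric."""
    rmse = math.sqrt(sum((s - m) ** 2 for s, m in zip(sim, meas)) / len(meas))
    return rmse / (sum(meas) / len(meas))

standard = {"setpoint_factor": 1.00, "occupancy": 1.00, "dhw_kwh": 250}  # code-compliance assumptions
field    = {"setpoint_factor": 1.07, "occupancy": 0.85, "dhw_kwh": 310}  # hypothetical sensor-derived values
measured = simulate(field)                                   # stands in for metered data

params, errors = dict(standard), []
for key in ["setpoint_factor", "occupancy", "dhw_kwh"]:      # step-by-step replacement
    params[key] = field[key]
    errors.append(cv_rmse(simulate(params), measured))
print(errors)   # the gap shrinks as each standardized input is replaced by field data
```

In the real workflow, the replacement order would follow the sensitivity-analysis ranking rather than a fixed list.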
Procedia PDF Downloads 160

558 Performance Evaluation of Fingerprint, Auto-Pin and Password-Based Security Systems in Cloud Computing Environment
Authors: Emmanuel Ogala
Abstract:
Cloud computing has been envisioned as the next-generation architecture of the Information Technology (IT) enterprise. In contrast to traditional solutions, where IT services are under physical, logical, and personnel controls, cloud computing moves the application software and databases to large data centres, where the management of the data and services may not be fully trustworthy. This is because the systems are open to the whole world, and while legitimate users access the system, many others are trying day in, day out to gain unauthorized access. This research contributes to the improvement of cloud computing security for better operation. The work is motivated by two problems: first, the observed ease of access to cloud computing resources and the complexity of attacks on the vital cloud computing data system (NIC) require that dynamic security mechanisms evolve to stay capable of preventing illegitimate access. Second, there is a lack of a good methodology for the performance testing and evaluation of biometric security algorithms for securing records in a cloud computing environment. The aim of this research was to evaluate the performance of an integrated security system (ISS) for securing exam records in a cloud computing environment. We designed and implemented an ISS combining three security mechanisms (fingerprint biometric, auto-PIN, and password) into one stream of access control, used for securing examination records at Kogi State University, Anyigba. The system we built has been able to overcome the guessing abilities of hackers who guess users' passwords or PINs. We are certain about this because the added security mechanism (fingerprint) requires the presence of the user before login access can be granted, based on the placement of the user's finger on the fingerprint biometric scanner for capture and verification of the user's authenticity.
The study adopted a quantitative design with an object-oriented design methodology. In the analysis and design, PHP, HTML5, CSS, JavaScript, and Web 2.0 technologies were used to implement the model of the ISS for the cloud computing environment. PHP, HTML5, and CSS were used in conjunction with Visual Studio front-end design tools, MySQL and Access 7.0 were used for the back-end engine, and JavaScript was used for object arrangement and for validation of user input as a security check. Finally, the performance of the developed framework was evaluated by comparison with two other existing security systems (auto-PIN and password) within the school, and the results showed that the developed approach (fingerprint) overcomes the two main weaknesses of the existing systems and will work well if fully implemented.
Keywords: performance evaluation, fingerprint, auto-pin, password-based, security systems, cloud computing environment
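A minimal sketch of the described three-factor access check (password, auto-PIN, fingerprint) is shown below. It is illustrative only: the real ISS was built in PHP with a MySQL back end and a hardware fingerprint scanner, whereas here the user record, credentials, and 'fingerprint template' are hypothetical placeholders.

```python
import hashlib

def sha256(s: str) -> str:
    """Hash a credential so that plaintext secrets are never stored."""
    return hashlib.sha256(s.encode()).hexdigest()

# Hypothetical user record; a real system would load this from a database.
USER_DB = {
    "student01": {
        "password_hash": sha256("s3cret"),
        "pin_hash": sha256("4821"),
        "fingerprint_template": "A1B2C3",   # placeholder for a minutiae template
    }
}

def login(user, password, pin, scanned_template):
    """Grant access only if all three factors match, mirroring the ISS access stream."""
    rec = USER_DB.get(user)
    if rec is None:
        return False
    return (sha256(password) == rec["password_hash"]
            and sha256(pin) == rec["pin_hash"]
            and scanned_template == rec["fingerprint_template"])
```

The point of the design is visible in the last condition: even a correctly guessed password and PIN fail without the biometric factor, which requires the user's physical presence.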
Procedia PDF Downloads 140

557 Civic E-Participation in Central and Eastern Europe: A Comparative Analysis
Authors: Izabela Kapsa
Abstract:
Civic participation is an important aspect of democracy. The contemporary model of democracy is based on citizens' participation in political decision-making (deliberative democracy, participatory democracy). This participation takes many forms of activity, such as the display of slogans and symbols, voting, social consultations, political demonstrations, membership in political parties, or organizing civil disobedience. The countries of Central and Eastern Europe after 1989 are characterized by great social, economic, and political diversity. Civil society is also part of the process of democratization. Civil society, founded on the rule of law, civil rights such as freedom of speech and association, and private ownership, was to play a central role in the development of liberal democracy. Among the many interpretations of the concepts defining contemporary democracy, one can assume that the terms civil society and democracy, although different in meaning, nowadays overlap. In the post-communist countries, the process of shaping and maturing of societies took place in the context of a struggle with a state governed by undemocratic power. Defrauding or repudiating the institutions of the representative state, which in the past was the only way to manifest and defend one's identity, became after the breakthrough one of the main obstacles to the development of civil society. In Central and Eastern Europe, there are many obstacles to the development of civil society to be overcome, for example, economic poverty, the need for educational campaigns, consciousness-related obstacles, the weak formation of social capital, and the deficit of social activity. Obviously, civil society does not only entail electoral turnout but a broader participation in the decision-making process, which is impossible without direct and participative democratic institutions.
This article considers such broad forms of civic participation and their characteristics in Central and Eastern Europe. The paper attempts to analyze the functioning of electronic forms of civic participation in Central and Eastern European states. This analysis does not cover referendums or referendum initiatives, nor other forms of political participation such as public consultations, participative budgets, or e-government as such. However, the paper will broadly present electronic administration tools, the application of which results both from legal regulations and from increasingly common practice in state and city management. In the comparative analysis, the experiences of the post-communist bloc countries are summed up to indicate the challenges and possible goals for the further development of this form of citizen participation in the political process. The author argues that in order to function efficiently and effectively, states need to involve their citizens in the political decision-making process, especially with the use of electronic tools.
Keywords: Central and Eastern Europe, e-participation, e-government, post-communism
Procedia PDF Downloads 193

556 The Effect of Metal-Organic Framework Pore Size to Hydrogen Generation of Ammonia Borane via Nanoconfinement
Authors: Jing-Yang Chung, Chi-Wei Liao, Jing Li, Bor Kae Chang, Cheng-Yu Wang
Abstract:
The chemical hydride ammonia borane (AB, NH3BH3) draws attention in hydrogen energy research for its high theoretical gravimetric capacity (19.6 wt%). Nevertheless, the elevated AB decomposition temperature (Td) and unwanted byproducts are the main hurdles to practical application. It was reported that the byproducts and Td can be reduced with the nanoconfinement technique, in which AB molecules are confined in porous materials, such as porous carbon, zeolites, metal-organic frameworks (MOFs), etc. Although nanoconfinement empirically shows effectiveness in reducing the hydrogen generation temperature of AB, the theoretical mechanism is debated. A low Td was reported in AB@IRMOF-1 (Zn4O(BDC)3, BDC = benzenedicarboxylate), where the Zn atoms form closed metal-cluster secondary building units (SBUs) with no exposed active sites. Other than nanosizing the hydride, it was also observed that catalyst addition facilitates AB decomposition, as in composites of Li-catalyzed carbon CMK-3, MOF JUC-32-Y with exposed Y3+, etc. It is believed that nanosized AB is critical for lowering Td, while active sites eliminate byproducts. Nonetheless, some researchers claimed that the catalytic sites, not the hydride size, are the critical factor in reducing Td. One group physically ground AB with ZIF-8 (zeolitic imidazolate framework, Zn(2-methylimidazolate)2) and found a similarly reduced Td, even though the AB molecules were not 'confined' or formed into nanoparticles by hand grinding; this suggests that the catalytic reaction, not nanoconfinement, promotes AB dehydrogenation. In this research, we explored the possible criteria governing the hydrogen production temperature of AB nanoconfined in MOFs with different pore sizes and active sites. MOFs with metal SBUs of Zn (IRMOF), Zr (UiO), and Al (MIL-53), together with various organic ligands (BDC and BPDC; BPDC = biphenyldicarboxylate), were modified with AB.
Excess MOF was used so that the AB size was constrained in the micropores, estimated by revisiting the Horvath-Kawazoe model. AB dissolved in methanol was added to the MOF crystallites at a MOF pore volume to AB ratio of 4:1, and the slurry was dried under vacuum to collect the AB@MOF powders. With TPD-MS (temperature-programmed desorption with mass spectroscopy), we observed that Td was reduced with smaller MOF pores. For example, it was reduced from 100°C to 64°C when the MOF micropore was ~1 nm, while it was ~90°C with pore sizes up to 5 nm. The behavior of Td as a function of AB crystallite radius obeys thermodynamics when the Gibbs free energy of AB decomposition is zero, and no obvious correlation with metal type was observed. In conclusion, we discovered that the Td of AB scales with the reciprocal of the MOF pore size, an effect possibly stronger than that of the active sites.
Keywords: ammonia borane, chemical hydride, metal-organic framework, nanoconfinement
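The reported pore-size dependence can be checked against a Gibbs-Thomson-style depression, Td(d) = Td_bulk - k/d, using the two data points quoted above (~1 nm → 64°C, ~5 nm → 90°C). The functional form is an assumption made here for illustration, not necessarily the authors' fitted model.

```python
import numpy as np

# Two (pore size, Td) points quoted in the abstract.
d = np.array([1.0, 5.0])        # MOF pore size, nm
Td = np.array([64.0, 90.0])     # AB decomposition temperature, deg C

# Fit Td = slope*(1/d) + Td_bulk; the slope equals -k in the assumed form.
slope, Td_bulk = np.polyfit(1.0 / d, Td, 1)
k = -slope
print(Td_bulk, k)  # extrapolated unconfined Td ~ 96.5 deg C, close to the reported ~100 deg C
```

That the extrapolated bulk value lands near the unconfined 100°C figure is consistent with the 1/d scaling claimed in the conclusion.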
Procedia PDF Downloads 187

555 Interface Designer as Cultural Producer: A Dialectic Materialist Approach to the Role of Visual Designer in the Present Digital Era
Authors: Cagri Baris Kasap
Abstract:
In this study, how interface designers can be viewed as producers of culture in the current era will be interrogated from a critical theory perspective. Walter Benjamin was a German Jewish literary critical theorist who, during the 1930s, was engaged in opposing and criticizing the Nazi use of art and media. 'The Author as Producer' is an essay that Benjamin read at the Communist Institute for the Study of Fascism in Paris. In this essay, Benjamin relates directly to the dialectics between base and superstructure and argues that authors, normally placed within the superstructure, should consider how writing and publishing are production and directly related to the base. Through it, he discusses what it could mean to see the author as the producer of his own text, as a producer of writing, understood as an ideological construct that rests on the apparatus of production and distribution. Benjamin concludes that the author must write in ways that relate to the conditions of production: he must do so in order to prepare his readers to become writers, and even make this possible for them by engineering an 'improved apparatus', and he must work toward turning consumers into producers and collaborators. In today's world, it has become a leading business model within the Web 2.0 services of multinational Internet technology and culture industries like Amazon, Apple, and Google to transform readers, spectators, consumers, or users into collaborators and co-producers through platforms such as Facebook, YouTube, and Amazon's CreateSpace and Kindle Direct Publishing print-on-demand, e-book, and publishing platforms. However, the way this transformation happens is tightly controlled and monitored by combinations of software and hardware. Under these global market monopolies, it has become increasingly difficult to gain insight into how one's writing and collaboration are used, captured, and capitalized as a user of Facebook or Google.
Through the lens of this study, it could be argued that this criticism could very well be taken up by digital producers, or even by the mass of collaborators, in contemporary social networking software. How do software and design incorporate users and their collaboration? Are they truly empowered? Are they put in a position where they are able to understand the apparatus and how their collaboration is part of it? Or has the apparatus become a means against the producers? Thus, when using corporate systems like Google and Facebook, the iPhone and the Kindle, without any control over the means of production, which is closed off by opaque interfaces and licenses that limit our rights of use and ownership, we are already the collaborators that Benjamin calls for. For example, the iPhone and the Kindle combine a specific use of technology to distribute the relations between the 'authors' and the 'prodUsers' in ways that secure their monopolistic business models by limiting the potential of the technology.
Keywords: interface designer, cultural producer, Walter Benjamin, materialist aesthetics, dialectical thinking
Procedia PDF Downloads 142

554 A Vaccination Program to Control an Outbreak of Acute Hepatitis A among MSM in Taiwan, 2016
Authors: Ying-Jung Hsieh, Angela S. Huang, Chu-Ming Chiu, Yu-Min Chou, Chin-Hui Yang
Abstract:
Background and Objectives: Hepatitis A is primarily acquired by the fecal-oral route through person-to-person contact or ingestion of contaminated food or water. During 2010 to 2014, an average of 83 cases of locally acquired disease were reported to Taiwan's notifiable disease system each year. The Taiwan Centers for Disease Control (TCDC) identified an outbreak of acute hepatitis A that began in June 2015. Of the 126 cases reported in 2015, 103 (82%) were reported during June–December, and 95 (92%) of these were male. The average age of the male cases was 31 years (median, 29 years; range, 15–76 years). Among the 95 male cases, 49 (52%) were also infected with HIV, and all reported having had sex with other men. To control this outbreak, TCDC launched a free hepatitis A vaccination program in January 2016 for close contacts of confirmed hepatitis A cases, including family members, sexual partners, and household contacts. The effect of the vaccination program was evaluated. Methods: All cases of hepatitis A reported to the National Notifiable Disease Surveillance System were included. A case of hepatitis A was defined as locally acquired disease in a person who had acute clinical symptoms, including fever, malaise, loss of appetite, nausea, or abdominal discomfort compatible with hepatitis, and who tested positive for anti-HAV IgM during June 2015 to June 2016 in Taiwan. The rate of case accumulation was calculated using a simple regression model. Results: During January–June 2016, there were 466 cases of hepatitis A reported; of the 243 (52%) who were also infected with HIV, 232 (95%) had a history of having sex with men. Of the 346 cases that were followed up, 259 (75%) provided information on contacts, but only 14 (5%) of them provided the names of their sexual partners. Among the 602 contacts reported, 349 (58%) were family members, 14 (2%) were sexual partners, and 239 (40%) were other household contacts.
Among the 602 contacts eligible for free hepatitis A vaccination, 440 (73%) received the vaccine. There were 87 (25%) cases who refused to disclose their close contacts. The average case accumulation rate during January–June 2016 was 21.7 cases per month, 6.8 times the average rate of 3.2 cases per month during June–December 2015. Conclusions: Despite the vaccination program providing free hepatitis A vaccine to close contacts of hepatitis A patients, the outbreak continued and even gained momentum. Refusal by hepatitis A patients to provide the names of their close contacts, and refusal by contacts to receive the hepatitis A vaccine, may have contributed to the poor effect of the program. Targeted vaccination of all MSM may be needed to control the outbreak in this population in the short term. In the long term, a universal vaccination program is needed to prevent hepatitis A infection.
Keywords: hepatitis A, HIV, men who have sex with men, vaccination
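The 'simple regression model' for the case accumulation rate can be sketched as an ordinary least-squares slope through monthly case counts. The counts below are hypothetical placeholders, not the Taiwanese surveillance data, so the resulting slope is illustrative only.

```python
# Ordinary least-squares slope through monthly case counts.
# The counts below are hypothetical, not the study's surveillance data.
months = [1, 2, 3, 4, 5, 6]
cases = [40, 55, 80, 85, 95, 111]   # hypothetical monthly counts

n = len(months)
mean_x = sum(months) / n
mean_y = sum(cases) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(months, cases))
         / sum((x - mean_x) ** 2 for x in months))
print(round(slope, 1), "additional cases per month")   # prints: 13.7 additional cases per month
```

Comparing the slope fitted over one period with the slope over another period gives the kind of rate ratio reported in the abstract.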
Procedia PDF Downloads 256

553 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality
Authors: Qian Yi Ooi
Abstract:
At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, and there are still some slight deviations arising from scale differences. However, insufficient parameters or poor surface mesh quality are likely to occur if these small deviations are carried into a future civil aircraft with a size quite different from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, a study of the CFD calculation of the geometric similarity of airfoil parameters and of surface mesh quality is conducted to establish how well different parameterization methods apply at different airfoil scales. The research objects are three airfoil scales, including the wing root and wingtip of a conventional civil aircraft and the wing root of the giant hybrid wing, parameterized by three methods in order to compare the calculation differences between different sizes of airfoils. In this study, the constants are NACA 0012, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study also uses different numbers of edge meshing divisions and the same bias factor in the CFD simulation. The study shows that, as the airfoil scale changes, different parameterization methods, numbers of control points, and numbers of meshing divisions should be used to improve the accuracy of the aerodynamic performance of the wing.
When the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data sets to support the accuracy of the airfoil's aerodynamic performance, which quickly runs up against the limits of computer capacity. When using the B-spline curve method, the number of control points and the number of mesh divisions must be set appropriately to obtain higher accuracy; however, this quantitative balance cannot be defined directly and must be found iteratively by adding and removing points. Lastly, when using the CST method, a limited number of control points is found to be enough to accurately parameterize the larger-sized wing; a higher degree of accuracy and stability can be obtained even on a lower-performance computer.
Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality
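The CST method mentioned above can be sketched in a few lines of Python; the class function uses the standard round-nose/sharp-trailing-edge exponents, and the Bernstein weights below are illustrative placeholders, not fitted NACA 0012 coefficients.

```python
import math

def cst_airfoil(weights, x, n1=0.5, n2=1.0):
    """Evaluate one airfoil surface with the CST method.

    weights -- Bernstein coefficients (the limited set of control
               points the abstract refers to); x -- chordwise station
               in [0, 1]; n1, n2 -- class-function exponents (0.5 and
               1.0 give a round nose and a sharp trailing edge).
    """
    n = len(weights) - 1
    class_fn = (x ** n1) * ((1.0 - x) ** n2)
    shape_fn = sum(
        w * math.comb(n, i) * (x ** i) * ((1.0 - x) ** (n - i))
        for i, w in enumerate(weights)
    )
    return class_fn * shape_fn

# A coarse five-coefficient surface; weights here are placeholders.
weights = [0.17, 0.16, 0.15, 0.14, 0.14]
thickness = [cst_airfoil(weights, x / 20.0) for x in range(21)]
```

Note how few numbers describe the whole surface: scaling the chord does not add coefficients, which matches the abstract's observation that the CST method stays accurate for large wings on modest hardware.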
Procedia PDF Downloads 222

552 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs
Authors: Michela Quadrini
Abstract:
Chord diagrams occur throughout mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology an important motivation for studying chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one Watson-Crick base pair interaction between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which a number of chords with distinct endpoints are attached. There is a natural fattening of any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane. Each linear chord diagram has a natural genus, that of its associated surface. To each chord diagram and linear chord diagram, it is possible to associate an intersection graph. This is a graph whose vertices correspond to the chords of the diagram and whose edges represent chord intersections. The intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modeling LCDs in terms of the relations among chords. This set comprises three operators: crossing, nesting, and concatenation.
The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it allows a unique algebraic term to be associated with each linear chord diagram, while the remaining operators allow the term to be rewritten through a set of appropriate rewriting rules. These rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and its linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than existing ones. This LCD equivalence class could also be useful for obtaining a more accurate estimate of the link between the crossing number and the topological genus, and for studying the relations among other invariants.
Keywords: chord diagrams, linear chord diagram, equivalence class, topological language
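The intersection graph underlying the proposed equivalence class is straightforward to construct from a linear chord diagram; the sketch below encodes chords as endpoint pairs on the backbone and connects two chords exactly when their endpoints interleave (the crossing relation).

```python
def intersection_graph(chords):
    """Build the intersection graph of a linear chord diagram.

    chords -- list of (left, right) endpoint positions on the backbone,
              with left < right and all endpoints distinct.
    Returns an adjacency set per chord index: two chords are adjacent
    exactly when they cross, i.e. their endpoints interleave.
    """
    adj = {i: set() for i in range(len(chords))}
    for i, (a, b) in enumerate(chords):
        for j, (c, d) in enumerate(chords):
            if i < j and (a < c < b < d or c < a < d < b):
                adj[i].add(j)
                adj[j].add(i)
    return adj

# Three chords: the first two cross; the third is disjoint from both.
diagram = [(1, 4), (2, 5), (6, 7)]
graph = intersection_graph(diagram)
```

Nested or concatenated chords produce no edge, so this adjacency structure is exactly the invariant the equivalence class identifies diagrams by.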
Procedia PDF Downloads 201

551 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach
Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier
Abstract:
Emotion plays a key role in many applications, such as healthcare, where it helps to gather patients' emotional behavior. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said.' Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised machine learning models, including state-of-the-art deep learning classification methods, rely on the availability of clean and labelled data, and one of the problems in affective computing is the limited amount of annotated data. The existing labelled emotion datasets are highly subject to the perception of the annotator. We address the first issue, feature selection, by exploiting traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a convolutional neural network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem, the subjectivity of stress labels, we use Lovheim's cube, a 3-dimensional projection of emotions.
Monoamine neurotransmitters are chemical messengers in the brain that transmit signals when emotions are perceived. The cube aims to explain the relationship between these neurotransmitters and the positions of emotions in 3D space. The learnt emotion representations from the Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This proposed approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim's cube. We believe that this work is the first step towards creating a connection between artificial intelligence and the chemistry of human emotions.
Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube
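The three-component PCA mapping described above can be sketched with a plain SVD; the embeddings here are random placeholders standing in for the Emo-CNN's learnt representations, not the paper's actual features.

```python
import numpy as np

def project_to_cube(embeddings, n_components=3):
    """Project emotion embeddings to 3-D with PCA via SVD, mirroring
    the three-component reduction the abstract describes: centre the
    data, take the top principal axes, and return the coordinates.
    """
    X = embeddings - embeddings.mean(axis=0)          # centre the data
    _, _, vt = np.linalg.svd(X, full_matrices=False)  # principal axes
    return X @ vt[:n_components].T                    # 3-D coordinates

# Placeholder batch: 50 utterances, 128-dimensional learnt features.
rng = np.random.default_rng(0)
fake_embeddings = rng.normal(size=(50, 128))
coords = project_to_cube(fake_embeddings)
```

Each row of `coords` is one utterance's position; in the paper's setup these positions are interpreted against the neurotransmitter axes of Lovheim's cube to estimate stress.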
Procedia PDF Downloads 154

550 Using Fractal Architectures for Enhancing the Thermal-Fluid Transport
Authors: Surupa Shaw, Debjyoti Banerjee
Abstract:
Enhancing heat transfer in compact volumes is a challenge when constrained by cost, especially the cost of minimizing pumping power consumption. This is particularly acute for electronic chip cooling applications. Technological advancements in microelectronics have led to chip architectures with increased power consumption. As a consequence, packaging technologies are saddled with the need to dissipate power at higher rates in smaller form factors. The increasing circuit density, higher heat flux values, and the significant decrease in the size of electronic devices pose thermal management challenges that must be addressed with a better design of the cooling system. Maximizing the surface area of heat-exchanging surfaces (e.g., extended surfaces or "fins") enables dissipation of higher heat fluxes. Fractal structures have been shown to maximize surface area in compact volumes. Self-replicating structures at multiple length scales are called "fractals", i.e., objects with fractional dimensions, unlike regular geometric objects such as spheres or cubes, whose volumes and surface areas scale as integer powers of the length scale. Fractal structures are expected to provide an appropriate technology solution to these challenges for enhanced heat transfer in microelectronic devices, by maximizing the surface area available to heat-exchanging fluids within compact volumes. In this study, the effect of different fractal micro-channel architectures and flow structures on the enhancement of transport phenomena in heat exchangers is explored by parametric variation of the fractal dimension. This study proposes a model that would enable cost-effective solutions for thermal-fluid transport in energy applications.
The objective of this study is to ascertain the sensitivity of various parameters (such as heat flux and pressure gradient, as well as pumping power) to variation in fractal dimension. The fractal parameters will be instrumental in establishing the most effective design for the optimum cooling of microelectronic devices, which can help establish the minimal pumping power required to enhance heat transfer during cooling. Results obtained in this study show that the proposed fractal microchannel architectures significantly enhance heat transfer due to the augmentation of surface area in branching networks of varying length scales.
Keywords: fractals, microelectronics, constructal theory, heat transfer enhancement, pumping power enhancement
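A minimal sketch of the surface-area argument, assuming an idealised self-similar branching network; the branch count and scaling ratios below are illustrative, not the study's optimised fractal parameters.

```python
import math

def branching_surface_area(d0, l0, generations, n_branch=2,
                           d_ratio=0.7, l_ratio=0.8):
    """Total wetted (lateral) area of an idealised fractal microchannel
    network: each generation splits every channel into n_branch children
    whose diameter and length shrink by fixed ratios. Generation k thus
    contributes n_branch**k channels of diameter d0*d_ratio**k and
    length l0*l_ratio**k.
    """
    area = 0.0
    for k in range(generations + 1):
        count = n_branch ** k
        area += count * math.pi * (d0 * d_ratio ** k) * (l0 * l_ratio ** k)
    return area

# Adding generations grows the wetted area inside a similar envelope:
# a 1 mm x 10 mm root channel, two vs five generations of branching.
a2 = branching_surface_area(1e-3, 1e-2, generations=2)
a5 = branching_surface_area(1e-3, 1e-2, generations=5)
```

With these ratios each generation multiplies the added area by n_branch x d_ratio x l_ratio = 1.12 > 1, so deeper branching keeps increasing the heat-exchanging surface, which is the mechanism the abstract credits for the enhancement.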
Procedia PDF Downloads 318

549 Bi-Directional Impulse Turbine for Thermo-Acoustic Generator
Authors: A. I. Dovgjallo, A. B. Tsapkova, A. A. Shimanov
Abstract:
The paper is devoted to one type of engine with external heating, the thermoacoustic engine. In a thermoacoustic engine, heat energy is converted to acoustic energy. This acoustic energy of the oscillating gas flow must then be converted to mechanical energy, which in turn must be converted to electric energy. The most widely used way of transforming acoustic energy to electric energy is a linear generator or a conventional generator with a crank mechanism. In both cases, a piston is used. The main disadvantages of pistons are friction losses, lubrication problems, and working fluid pollution, which decrease engine power and ecological efficiency. The use of a bi-directional impulse turbine as the energy converter is suggested instead. The distinctive feature of this kind of turbine is that the shock wave of the oscillating gas flow passing through the turbine is reflected and passes through the turbine again in the opposite direction, while the direction of turbine rotation does not change. Different types of bi-directional impulse turbines for thermoacoustic engines are analyzed. The Wells turbine is the simplest and least efficient of them. A radial impulse turbine has a more complicated design and is more efficient than the Wells turbine. The most appropriate type, an axial impulse turbine, was chosen: it has a simpler design than the radial turbine and similar efficiency. The peculiarities of calculating an impulse turbine are discussed, including the changes in gas pressure and velocity as functions of time during the generation of shock waves in the oscillating gas flow of a thermoacoustic system. In a thermoacoustic system, the pressure changes continuously according to a certain law due to the generation of acoustic waves, and the peak pressure values give the amplitude, which determines the acoustic power.
Gas flowing in a thermoacoustic system periodically changes direction; its mean velocity is zero, but its peak velocities can be used to drive a bi-directional turbine. In contrast with a conventional feed turbine, the described turbine operates on unsteady oscillating flows with direction changes, which significantly influences the algorithm of its calculation. The calculated power output is 150 W at a rotational speed of 12,000 r/min and a pressure amplitude of 1.7 kPa. Next, 3D modeling and numerical study of the impulse turbine were carried out, yielding the main parameters of the working fluid in the turbine. On the basis of the theoretical and numerical data, a model of the impulse turbine was made on a 3D printer, and an experimental unit was designed to verify the numerical modeling results, with an acoustic speaker used as the acoustic wave generator. Analysis of the acquired data shows that the use of the bi-directional impulse turbine is advisable. In its characteristics as a converter, it is comparable with linear electric generators, but its life cycle will be longer and the engine itself smaller thanks to the rotary motion of the turbine.
Keywords: acoustic power, bi-directional pulse turbine, linear alternator, thermoacoustic generator
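The relation between pressure amplitude and acoustic power can be illustrated with the standard travelling-wave formula W = p^2 A / (2 rho c); the duct area and gas properties below are assumptions for illustration, not values taken from the paper.

```python
def acoustic_power(p_amp, area, rho=1.2, c=343.0):
    """Time-averaged power [W] carried by a travelling acoustic wave of
    pressure amplitude p_amp [Pa] through a cross-section of the given
    area [m^2]: W = p^2 * A / (2 * rho * c). Default gas properties are
    air at room conditions (assumed, not the engine's working gas).
    """
    return p_amp ** 2 * area / (2.0 * rho * c)

# Pressure amplitude from the abstract (1.7 kPa) through an assumed
# 15 cm^2 duct; the area is hypothetical.
w = acoustic_power(1.7e3, 15e-4)
```

The quadratic dependence on amplitude is the key point: doubling the pressure amplitude quadruples the available acoustic power, which is why the amplitude is singled out as the quantity that determines the power.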
Procedia PDF Downloads 378

548 Possibilities and Limits for the Development of Care in Primary Health Care in Brazil
Authors: Ivonete Teresinha Schulter Buss Heidemann, Michelle Kuntz Durand, Aline Megumi Arakawa-Belaunde, Sandra Mara Corrêa, Leandro Martins Costa Do Araujo, Kamila Soares Maciel
Abstract:
Primary Health Care is defined as the level of a system of services that enables responses to health needs. This level of care produces services and actions attending to the person across the life cycle and their health conditions or diseases. Primary Health Care refers to a conception of the care model and organization of the health system that, in Brazil, seeks to reorganize the principles of the Unified Health System. This system is based on the principle of health as a citizen's right and a duty of the State. Primary health care has family health as a priority strategy for its organization, according to the precepts of the Unified Health System, structured in the logic of new sectoral practices that associate clinical work and health promotion. Thus, this study seeks to understand the possibilities and limits of the care developed by professionals working in Primary Health Care. It was conducted with a qualitative participatory action approach based on Paulo Freire's Research Itinerary, which comprises three moments: thematic investigation; encoding and decoding; and critical unveiling. The themes were investigated in a health unit through a culture circle with 20 professionals from a municipality in southern Brazil in the first half of 2021. The participants revealed as possibilities the involvement, bonding, and strengthening of the interpersonal relationships of the professionals who work in the context of primary care. Promoting welcoming in primary care has favoured care and teamwork, as well as improved access. They also highlighted that care planning, the use of communication technologies, and the orientation of the population enhance problem-solving capacity and the organization of services. As limits, the lack of professional recognition and the scarce material and human resources were revealed, conditions that generate tensions in health care.
The reduction in the number of professionals and the low salaries are pointed out as elements that undermine the motivation of the health team in its work. The participants revealed that, due to COVID-19, the flow of care prioritized the pandemic situation, which affected health care in primary care, and prevention and health promotion actions were canceled. The study demonstrated that empowerment and professional involvement are fundamental to promoting comprehensive and problem-solving care. However, the teams face limits when exercising their activities, related to the lack of human and material resources, and the expansion of public health policies is urgent.
Keywords: health promotion, primary health care, health professionals, welcoming
Procedia PDF Downloads 98

547 DIF-JACKET: A Thermal Protective Jacket for Firefighters
Authors: Gilda Santos, Rita Marques, Francisca Marques, João Ribeiro, André Fonseca, João M. Miranda, João B. L. M. Campos, Soraia F. Neves
Abstract:
Every year, an unacceptable number of firefighters are seriously burned during firefighting operations, with some of them eventually losing their lives. Although thermal protective clothing research and development has been searching for solutions to minimize firefighters' heat load and skin burns, currently available commercial solutions focus on solving isolated problems, for example, radiant heat or water-vapor resistance. Therefore, episodes of severe burns and heat strokes are still frequent. Taking this into account, a consortium of Portuguese entities has joined synergies to develop an innovative protective clothing system, following a procedure based on the application of numerical models to optimize the design and using a combination of protective clothing components disposed in different layers. Recently, it has been shown that Phase Change Materials (PCMs) can contribute to the reduction of potential heat hazards in fire extinguishing operations, so their incorporation into firefighting protective clothing has advantages. The greatest challenge is to integrate these materials without compromising garment ergonomics while, at the same time, meeting the international standard for protective clothing for firefighters: laboratory test methods and performance requirements for wildland firefighting clothing. The incorporation of PCMs into the firefighter's protective jacket will result in the absorption of heat from the fire and will consequently increase the time the firefighter can be exposed to it. According to the project's studies and developments, to make greater use of the PCM storage capacity and to exploit its high thermal inertia more efficiently, the PCM layer should be closer to the external heat source. Therefore, at this stage, to integrate PCMs into firefighting clothing, a mock-up of a vest specially designed to protect the torso (back, chest, and abdomen) and to be worn over a fire-resistant jacket was envisaged.
Different configurations of PCMs, as well as multilayer approaches, were studied using suitable joining technologies such as bonding, ultrasound, and radiofrequency. Concerning firefighters' protective clothing, it is important to balance heat protection and flame resistance with comfort parameters, namely thermal and water-vapor resistance. The impact of the most promising solutions on thermal comfort was evaluated to refine the performance of the global solutions. Results obtained with an experimental bench-scale model and numerical simulation regarding the integration of PCMs in a vest designed as protective clothing for firefighters will be presented.
Keywords: firefighters, multilayer system, phase change material, thermal protective clothing
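The heat-absorption benefit of a PCM layer follows from a simple energy balance: sensible heat up to the melting point, latent heat of fusion, then sensible heat beyond it. The property values below are typical of paraffin-type PCMs, not the project's selected material.

```python
def pcm_heat_absorbed(mass, t_start, t_end, t_melt,
                      cp_solid=2000.0, cp_liquid=2200.0,
                      latent=200e3):
    """Heat [J] a PCM layer absorbs while warming through its melting
    point: sensible heat below the melt, the latent heat of fusion,
    then sensible heat above it. Specific heats are in J/(kg K) and
    the latent heat in J/kg; all values here are illustrative.
    """
    q_solid = mass * cp_solid * (t_melt - t_start)
    q_latent = mass * latent
    q_liquid = mass * cp_liquid * (t_end - t_melt)
    return q_solid + q_latent + q_liquid

# 0.4 kg of PCM heated from 25 C through a 60 C melting point to 70 C.
q = pcm_heat_absorbed(0.4, 25.0, 70.0, 60.0)
```

The latent term dominates: in this example it accounts for well over half of the absorbed heat, which is the "high thermal inertia" the abstract exploits by placing the PCM layer near the external heat source.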
Procedia PDF Downloads 163

546 Evidence-Triggers for Care of Patients with Cleft Lip and Palate in Srinagarind Hospital: The Tawanchai Center and Out-Patients Surgical Room
Authors: Suteera Pradubwong, Pattama Surit, Sumalee Pongpagatip, Tharinee Pethchara, Bowornsilp Chowchuen
Abstract:
Background: Cleft lip and palate (CLP) is a congenital anomaly of the lip and palate caused by several factors. It is found in approximately one per 500 to 550 live births, depending on nationality and socioeconomic status. The Tawanchai Center and the out-patient surgical room of Srinagarind Hospital are responsible for providing care to patients with CLP (from birth to adolescence) and their caregivers. In observations and interviews, nurses working in these units reported that both patients and their caregivers confronted many problems which affected their physical and mental health. Based on Soukup's model (2000), the researchers used evidence triggers from clinical practice (practice triggers) and related literature (knowledge triggers) to investigate the problems. Objective: The purpose of this study was to investigate the problems of care for patients with CLP in the Tawanchai Center and out-patient surgical room of Srinagarind Hospital. Material and Method: A descriptive method was used. For practice triggers, the researchers obtained data from the medical records of ten patients with CLP and from interviews with two patients with CLP, eight caregivers, two nurses, and two assistant workers. The interview instruments consisted of a demographic data form and a semi-structured questionnaire. For knowledge triggers, the researchers used a literature search. The data from both practice and knowledge triggers were collected between February and May 2016. The quantitative data were analyzed through frequency and percentage distributions, and the qualitative data were analyzed through content analysis.
Results: The problems of care gained from practice and knowledge triggers were consistent and were identified as holistic issues, including 1) insufficient feeding, 2) risks of respiratory tract infections and physical disorders, 3) psychological problems, such as anxiety, stress, and distress, 4) socioeconomic problems, such as stigmatization, isolation, and loss of income, 5) spiritual problems, such as low self-esteem and low quality of life, 6) school absence and learning limitations, 7) lack of knowledge about CLP and its treatments, 8) misunderstanding of roles among the multidisciplinary team, 9) unavailable services, and 10) a shortage of healthcare professionals, especially speech-language pathologists (SLPs). Conclusion: The evidence triggers show that the problems of care affect the patients and their caregivers holistically. Integrated long-term care by the multidisciplinary team is needed for children with CLP from birth to adolescence. Nurses should provide effective care to these patients and their caregivers by using a holistic approach and working collaboratively with other healthcare providers in the multidisciplinary team.
Keywords: evidence-triggers, cleft lip, cleft palate, problems of care
Procedia PDF Downloads 218

545 Integrating Computational Modeling and Analysis with In Vivo Observations for Enhanced Hemodynamics Diagnostics and Prognosis
Authors: Shreyas S. Hegde, Anindya Deb, Suresh Nagesh
Abstract:
Computational biomechanics is developing rapidly as a non-invasive tool to assist the medical fraternity in both the diagnosis and prognosis of human-body-related issues such as injuries, cardiovascular dysfunction, atherosclerotic plaque, etc. Any system that helps properly diagnose such problems or assists prognosis is a boon to doctors and the medical community in general. Recently, a lot of work has been focused in this direction, including, but not limited to, various finite element analyses of dental implants, skull injuries, and orthopedic problems involving bones and joints. Such numerical solutions are helping medical practitioners to come up with alternate solutions for such problems and in most cases have also reduced the trauma to patients. Some work has also been done on the use of computational fluid mechanics to understand the flow of blood through the human body, the area of hemodynamics. Since cardiovascular diseases are among the main causes of loss of human life, understanding blood flow with and without constraints (such as blockages), and providing alternate methods of prognosis and further solutions for issues related to blood flow, would help save the valuable lives of such patients. This project is an attempt to use computational fluid dynamics (CFD) to solve specific problems in hemodynamics. Hemodynamics simulation is used to gain a better understanding of functional, diagnostic, and theoretical aspects of blood flow. Because many fundamental aspects of blood flow, like the phenomena associated with pressure and viscous force fields, are still not fully understood or entirely described through mathematical formulations, the characterization of blood flow is still a challenging task.
Computational modeling of the blood flow, and of the mechanical interactions that strongly affect the flow patterns, based on medical data and imaging, represents the most accurate analysis of the complex behavior of blood flow. In this project, the mathematical modeling of blood flow in arteries in the presence of successive blockages has been analyzed using the CFD technique. Different cases of blockages, in terms of percentages, were modeled using the commercial software CATIA V5R20 and simulated using the commercial software ANSYS 15.0 to study the variation of wall shear stress (WSS) and the effect of other parameters, such as an increase in Reynolds number. The concept of fluid-structure interaction (FSI) has been used to solve such problems. The model simulation results were validated using in vivo measurement data from the existing literature.
Keywords: computational fluid dynamics, hemodynamics, blood flow, results validation, arteries
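The effect of a blockage on wall shear stress and Reynolds number can be illustrated with closed-form Poiseuille estimates; the paper's FSI simulations resolve these fields numerically, so the sketch below (with textbook blood properties and hypothetical geometry) is only a first-order check.

```python
import math

def stenosis_hemodynamics(q_flow, r0, blockage, mu=3.5e-3, rho=1060.0):
    """Poiseuille estimates at a single blockage: wall shear stress
    tau = 4*mu*Q/(pi*r^3) and Reynolds number Re = rho*v*D/mu at the
    narrowed section. q_flow [m^3/s] is the flow rate, r0 [m] the
    healthy radius, blockage the fraction of lumen area lost. Blood
    viscosity and density are textbook values, not the paper's.
    """
    r = r0 * math.sqrt(1.0 - blockage)       # radius after area loss
    wss = 4.0 * mu * q_flow / (math.pi * r ** 3)
    v = q_flow / (math.pi * r ** 2)          # mean velocity
    re = rho * v * (2.0 * r) / mu
    return wss, re

# Hypothetical case: 5 ml/s through a 2 mm-radius artery,
# healthy versus a 70% area blockage.
wss0, re0 = stenosis_hemodynamics(5e-6, 2e-3, 0.0)
wss1, re1 = stenosis_hemodynamics(5e-6, 2e-3, 0.7)
```

Because WSS scales as 1/r^3, a 70% area blockage raises it roughly six-fold here, which is why WSS is the natural parameter to track across the blockage percentages the project simulates.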
Procedia PDF Downloads 407

544 Strategic Entrepreneurship: Model Proposal for Post-Troika Sustainable Cultural Organizations
Authors: Maria Inês Pinho
Abstract:
Recent literature on Cultural Management (also called strategic management for cultural organizations) systematically seeks models that allow such organizations to adapt to the constant change occurring in contemporary societies. In the last decade, the world, and in particular Europe, has experienced a serious financial problem that has triggered defensive mechanisms, both in the direction of balancing public accounts and in the sense of an anonymous loss of the democratic and cultural values of each nation. In the first case, the Troika emerged and imposed strong cuts in funding for culture, deeply affecting these organizations; in the second case, ordinary citizens found themselves fighting against the closure of cultural facilities. Despite this, cultural managers argue that there is no single formula capable of solving the need to adapt to change. Rather, it is up to this agent to know the existing scientific models and to adapt them as well as possible to the reality of the institution he or she coordinates. These actions, as a rule, are concerned with the best performance vis-à-vis external audiences or with the financial sustainability of cultural organizations. They forget, therefore, that all this machinery cannot function without the internal public, the organization's human resources. The employees of the cultural organization must have an entrepreneurial posture: they must be intrapreneurial. This paper intends to break with this form of action and lead the cultural manager to understand that his or her role should be to create value for society through good organizational performance. This is only possible with a posture of strategic entrepreneurship, in other words, with a link between Cultural Management, Cultural Entrepreneurship, and Cultural Intrapreneurship.
In order to test this assumption, the case study methodology was used, with the symbol of the European Capital of Culture (Casa da Música) as the case, together with qualitative and quantitative techniques. The qualitative techniques included in-depth interviews with managers, founders, and patrons, and focus groups with members of the public with and without experience in managing cultural facilities. The quantitative techniques involved a questionnaire administered to the middle management and employees of Casa da Música. After triangulating the data, it was shown that the contemporary management of cultural organizations must implement, among its practices, the concept of Strategic Entrepreneurship and its variables. The topics characterizing the notion of Cultural Intrapreneurship (job satisfaction, quality of organizational performance, leadership, and employee engagement and autonomy) also emerged. The findings show that, to be sustainable, a cultural organization should meet the concerns of both the external and internal forums. In other words, it should have an attitude of citizenship towards its communities, visible in social responsibility and participatory management, which is only possible by implementing the concept of Strategic Entrepreneurship and its variable of Cultural Intrapreneurship.
Keywords: cultural entrepreneurship, cultural intrapreneurship, cultural organizations, strategic management
Procedia PDF Downloads 182

543 The French Ekang Ethnographic Dictionary. The Quantum Approach
Authors: Henda Gnakate Biba, Ndassa Mouafon Issa
Abstract:
Dictionaries modeled on the Western pattern (for languages with tonic accent) are not suitable for tonal languages and do not account for them phonologically, which is why this (prosodic and phonological) ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows a non-speaker of the language to pronounce the words like a native. It is a dictionary adapted to tonal languages, built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that 'to say and to sing were once the same thing'. Each word in the French dictionary finds its correspondent in the ekaη language, and each ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When this theory is applied to any folk song text in a tonal language, one not only pieces together the exact melody, rhythm, and harmonies of that song as if it were known in advance, but also the exact speech of the language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as has one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music.
The experimentation confirming the theorization led to a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect data extracted from his mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a structured song (chorus-verse) on your computer and ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.
Keywords: music, language, entanglement, science, research
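In the spirit of the dictionary's word-on-a-staff entries, a toy version of the text-to-melody step might map tone marks to pitches; the mapping below is entirely hypothetical and far simpler than the prosodic notation the dictionary actually uses.

```python
# Hypothetical tone marks mapped to staff pitches; the dictionary's
# real notation also encodes rhythm and finer prosodic detail.
TONE_TO_PITCH = {
    "H": "G4",       # high tone
    "M": "E4",       # mid tone
    "L": "C4",       # low tone
    "HL": "G4-C4",   # falling contour
    "LH": "C4-G4",   # rising contour
}

def transcribe(syllables):
    """Render a sequence of (syllable, tone) pairs as staff pitches,
    one pitch (or contour) per syllable."""
    return [(s, TONE_TO_PITCH[t]) for s, t in syllables]

# An invented three-syllable word with high, low, and rising tones.
melody = transcribe([("ma", "H"), ("ka", "L"), ("la", "LH")])
```

Reversing the same table would recover the tones from the pitches, which is the sense in which the dictionary lets a non-speaker pronounce a word by reading its staff.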
Procedia PDF Downloads 69

542 Forging A Distinct Understanding of Implicit Bias
Authors: Benjamin D Reese Jr
Abstract:
Implicit bias is understood as unconscious attitudes, stereotypes, or associations that can influence the cognitions, actions, decisions, and interactions of an individual without intentional control. These unconscious attitudes or stereotypes are often targeted toward specific groups of people based on their gender, race, age, perceived sexual orientation, or other social categories. Since the late 1980s, there has been a proliferation of research hypothesizing that implicit bias results from the brain needing to process millions of bits of information every second; hence, one's prior individual learning history provides 'shortcuts'. As soon as one sees someone of a certain race, one has immediate associations based on past learning and might make assumptions about their competence, skill, or danger. These assumptions are outside of conscious awareness. In recent years, an alternative conceptualization has been proposed. The 'bias of crowds' theory hypothesizes that a given context or situation influences the degree of accessibility of particular biases. For example, in certain geographic communities in the United States, there is a long-standing and deeply ingrained history of structures, policies, and practices that contribute to racial inequities and bias toward African Americans. Hence, negative biases toward African Americans are more accessible in such contexts or communities. This theory does not focus on individual brain functioning or cognitive 'shortcuts'. Therefore, attempts to modify individual perceptions or learning might have a negligible impact on the environmental systems or policies embedded in certain contexts or communities.
From the ‘bias of crowds’ perspective, high levels of racial bias in a community can be reduced by making fundamental changes in structures, policies, and practices to create a more equitable context or community rather than focusing on training or education aimed at reducing an individual’s biases. The current paper acknowledges and supports the foundational role of long-standing structures, policies, and practices that maintain racial inequities, as well as inequities related to other social categories, and highlights the critical need to continue organizational, community, and national efforts to eliminate those inequities. It also makes a case for providing individual leaders with a deep understanding of the dynamics of how implicit biases impact cognitions, actions, decisions, and interactions so that those leaders might more effectively develop structural changes in the processes and systems under their purview. This approach incorporates both the importance of an individual’s learning history as well as the important variables within the ‘bias of crowds’ theory. The paper also offers a model for leadership education, as well as examples of structural changes leaders might consider.
Keywords: implicit bias, unconscious bias, bias, inequities
Procedia PDF Downloads 5
541 Optimization Principles of Eddy Current Separator for Mixtures with Different Particle Sizes
Authors: Cao Bin, Yuan Yi, Wang Qiang, Amor Abdelkader, Ali Reza Kamali, Diogo Montalvão
Abstract:
The study of the electrodynamic behavior of non-ferrous particles in time-varying magnetic fields is a promising area of research with wide applications, including recycling of non-ferrous metals, mechanical transmission, and space debris removal. The key technology for recovering non-ferrous metals is eddy current separation (ECS), which utilizes the eddy current force and torque to separate non-ferrous metals. ECS has several advantages, such as low energy consumption, large processing capacity, and no secondary pollution, making it suitable for processing various mixtures such as electronic scrap, auto shredder residue, aluminum scrap, and incineration bottom ash. Improving the separation efficiency of mixtures with different particle sizes in ECS can create significant social and economic benefits. Our previous study investigated the influence of particle size on separation efficiency by combining numerical simulations and separation experiments. Pearson correlation analysis found a strong correlation between the eddy current force in simulations and the repulsion distance in experiments, which confirmed the effectiveness of our simulation model. The interaction effects between particle size and material type, rotational speed, and magnetic pole arrangement were examined, offering valuable insights for the design and optimization of eddy current separators. The underlying mechanism behind the effect of particle size on separation efficiency was uncovered by analyzing the eddy current and the field gradient. The results showed that the magnitude and distribution heterogeneity of the eddy current and magnetic field gradient increase with particle size in eddy current separation. Based on this, we further found that increasing the curvature of the magnetic field lines within particles can also increase the eddy current force, providing an optimized approach to improving the separation efficiency of fine particles.
By combining the results of these studies, a more systematic and comprehensive set of optimization guidelines can be proposed for mixtures with different particle size ranges. The separation efficiency of fine particles can be improved by increasing the rotational speed, the curvature of the magnetic field lines, and the electrical conductivity/density of the materials, as well as by utilizing the eddy current torque. When designing an ECS, the particle size range of the target mixture should be investigated in advance, and suitable parameters for separating the mixture can be fixed accordingly. In summary, these results can guide the design and optimization of ECS and also expand its application areas.
Keywords: eddy current separation, particle size, numerical simulation, metal recovery
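The Pearson correlation check described above, between the simulated eddy current force and the experimentally measured repulsion distance, can be sketched as follows. The numeric values are hypothetical placeholders for illustration, not data from the study.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired observations for particles of increasing size:
# simulated peak eddy current force (N) and measured repulsion distance (mm).
force_sim = [0.012, 0.031, 0.058, 0.094, 0.140]
distance_exp = [14.0, 33.0, 61.0, 90.0, 138.0]

r = pearson_r(force_sim, distance_exp)
print(f"Pearson r = {r:.3f}")  # close to 1: strong linear agreement
```

A coefficient near 1, as in this toy data set, is the kind of result that would support using the simulation model as a proxy for the separation experiments.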
Procedia PDF Downloads 89
540 Study of the Possibility of Adsorption of Heavy Metal Ions on the Surface of Engineered Nanoparticles
Authors: Antonina A. Shumakova, Sergey A. Khotimchenko
Abstract:
The relevance of this research is associated, on the one hand, with the ever-increasing volume of production and expanding scope of application of engineered nanomaterials (ENMs) and, on the other hand, with the lack of sufficient scientific information on the nature of the interactions of nanoparticles (NPs) with components of biogenic and abiogenic origin. In particular, studying the effect of ENMs (TiO2 NPs, SiO2 NPs, Al2O3 NPs, fullerenol) on the toxicometric characteristics of common contaminants such as lead and cadmium is an important hygienic task, given the high probability of their joint presence in food products. Data were obtained characterizing a multidirectional change in the toxicity of model toxicants when they are co-administered with various types of ENMs. One explanation for this is the difference in the adsorption capacity of the ENMs, which was further examined in in vitro studies. For this, a method was proposed based on in vitro modeling of conditions simulating the environment of the small intestine. It should be noted that the obtained data are in good agreement with the results of in vivo experiments:
- with the combined administration of lead and TiO2 NPs, there were no significant changes in the accumulation of lead in rat liver; in other organs (kidneys, spleen, testes, and brain), the lead content was lower than in animals of the control group;
- when studying the combined effect of lead and Al2O3 NPs, a multiple and significant increase in the accumulation of lead in rat liver was observed with increasing dose of Al2O3 NPs; for other organs, the introduction of various doses of Al2O3 NPs did not significantly affect the bioaccumulation of lead;
- with the combined administration of lead and SiO2 NPs in different doses, there was no increase in lead accumulation in any of the studied organs.
Based on the data obtained, it can be assumed that there are at least three scenarios for the combined effects of ENMs and chemical contaminants on the body:
- ENMs bind contaminants quite firmly in the gastrointestinal tract, and such a complex becomes inaccessible (or poorly accessible) for absorption; in this case, the toxicity of both the ENMs and the contaminants can be expected to decrease;
- the complex formed in the gastrointestinal tract is partially soluble and can penetrate biological membranes and/or physiological barriers of the body; in this case, ENMs can act as a kind of conductor for contaminants, increasing their penetration into the internal environment of the body and thereby increasing their toxicity;
- ENMs and contaminants do not interact with each other in any way, so the toxicity of each is determined only by its own quantity and does not depend on the quantity of the other component.
The authors hypothesized that the degree of adsorption of various elements on the surface of ENMs may be a unique characteristic of their action, allowing a more accurate understanding of the processes occurring in a living organism.
Keywords: absorption, cadmium, engineered nanomaterials, lead
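A minimal sketch of the kind of adsorption measure implied above, expressed as the standard percent-removal calculation from initial and equilibrium solution concentrations. The concentration values here are hypothetical, not the study's data, and the function name is an illustrative choice.

```python
def percent_adsorbed(c_initial: float, c_equilibrium: float) -> float:
    """Percentage of a contaminant removed from solution by the adsorbent:
    100 * (C0 - Ce) / C0, with C0 and Ce in the same concentration units."""
    return (c_initial - c_equilibrium) / c_initial * 100.0

# Hypothetical lead concentrations (mg/L) before and after incubation
# with an ENM suspension under simulated small-intestine conditions.
c0, ce = 10.0, 3.5
print(f"adsorbed: {percent_adsorbed(c0, ce):.1f}%")  # adsorbed: 65.0%
```

Comparing such percentages across ENM types (TiO2, SiO2, Al2O3, fullerenol) under the same in vitro conditions is one way the relative adsorption capacities could be ranked.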
Procedia PDF Downloads 87
539 Reframing Physical Activity for Health
Authors: M. Roberts
Abstract:
We Are Undefeatable is a mass-marketing behaviour change campaign that aims to support the least active people living with long-term health conditions to be more active. This is an important issue to address because people with long-term conditions are a historically underserved community for the sport and physical activity sector, and the least active of those with long-term conditions have the most to gain in health and wellbeing benefits. The campaign has generated a significant change in the way physical activity is communicated and people with long-term conditions are represented in the media and marketing. The goal is to create a social norm around being active. The campaign is led by a unique partnership of organisations: the Richmond Group of Charities (made up of Age UK, Alzheimer’s Society, Asthma + Lung UK, Breast Cancer Now, British Heart Foundation, British Red Cross, Diabetes UK, Macmillan Cancer Support, Rethink Mental Illness, Royal Voluntary Service, Stroke Association, Versus Arthritis) along with Mind, MS Society, Parkinson’s UK and Sport England, with National Lottery funding. It is underpinned by the COM-B model of behaviour change and draws on the lived experience of people with multiple long-term conditions to shape the look and feel of the campaign and all the resources available. People with long-term conditions are the campaign's messengers, central to its ethos, telling their individual stories of overcoming barriers to be active with their health conditions. The central message is about finding a way to be active that works for the individual. We Are Undefeatable is evaluated through a multi-modal approach, including regular qualitative focus groups and a quantitative evaluation tracker undertaken three times a year. The campaign has highlighted the significant barriers to physical activity for people with long-term conditions.
This has changed the way our partnership talks about physical activity and has also had an impact on the wider sport and physical activity sector, prompting an increasing departure from traditional messaging and marketing approaches for this audience. The campaign has reached millions of people since its launch in 2019, through multiple marketing and partnership channels, including primetime TV advertising and promotion through health professionals and in health settings. Its diverse storytellers make it relatable to its target audience, and the achievable activities highlighted and inclusive messaging inspire the audience to take action as a result of seeing the campaign. The We Are Undefeatable campaign is a blueprint for physical activity campaigns; it not only addresses individual behaviour change but also plays a role in addressing systemic barriers to physical activity by sharing lived-experience insight to shape policy and professional practice.
Keywords: behaviour change, long term conditions, partnership, relatable
Procedia PDF Downloads 65
538 Sustainable Mining Fulfilling Constitutional Responsibilities: A Case Study of NMDC Limited Bacheli in India
Authors: Bagam Venkateswarlu
Abstract:
NMDC Limited, an Indian multinational mining company, operates under the administrative control of the Ministry of Steel, Government of India. This study evaluates how the sustainable mining practiced by the company fulfils the provisions of the Indian Constitution to secure for its citizens justice, equality of status and opportunity, and social, economic, political, and religious wellbeing. The Constitution of India lays down a road map for achieving the goal of a "welfare state". The vision of sustainable mining being practiced is oriented along the constitutional responsibilities of Indian citizens and the corporate world. This qualitative study is backed by quantitative studies of National Mineral Development Corporation performance in various domains of sustainable mining and ESG, that is, environmental, social, and governance parameters. For example, the Five Star Rating of mines, a comprehensive evaluation system introduced by the Ministry of Mines, Government of India, is one of the methodologies used. Corporate Social Responsibility is one of the thrust areas for securing social well-being, and green energy initiatives in and around the mines have earned NMDC Limited the title of "Eco-Friendly Miner". While operating a fully mechanized, large-scale iron ore mine (18.8 million tonnes per annum capacity) in Bacheli, Chhattisgarh, M/s NMDC Limited caters to the mineral security needs of the State of Chhattisgarh and the Indian Union. It preserves the forest, wildlife, and environmental heritage of the richly endowed State of Chhattisgarh. In the remote and far-flung interiors of Chhattisgarh, NMDC empowers the local population by providing world-class educational and medical facilities, a transportation network, drinking water facilities, irrigation and agricultural support, employment opportunities, and religious harmony. All this ultimately results in an empowered, educated population with improved awareness.
Thus, the basic tenets of the Constitution of India (secularism, democracy, welfare for all, socialism, humanism, decentralization, liberalism, a mixed economy, and non-violence) are fulfilled. The Constitution declares India a welfare state, for the people, of the people, and by the people, and the sustainable mining practices of NMDC are in line with this objective. Thus, the purpose of the study is fully met. The potential benefit of the study includes replicating this model in existing or new establishments in various parts of the country, especially in the under-privileged interiors and far-flung areas which are yet to see the light of development.
Keywords: ESG values, Indian constitution, NMDC limited, sustainable mining, CSR, green energy
Procedia PDF Downloads 75
537 Evaluation of Suspended Particles Impact on Condensation in Expanding Flow with Aerodynamics Waves
Authors: Piotr Wisniewski, Sławomir Dykas
Abstract:
Condensation has a negative impact on turbomachinery efficiency in many energy processes. In technical applications, it is often impossible to dry the working fluid at the nozzle inlet. One of the most popular working fluids is atmospheric air, which always contains water in the form of steam, liquid, or ice crystals. Moreover, it always contains some amount of suspended particles that influence the phase change process. The phenomena of evaporation and condensation are connected with the release or absorption of latent heat, which influences the fluid's physical properties and may affect machinery efficiency; therefore, the phase transition has to be taken into account. This research presents an attempt to evaluate the impact of solid and liquid particles suspended in the air on the expansion of moist air at a low expansion rate, i.e., P≈1000 s⁻¹. A numerical study supported by analytical and experimental research is presented in this work. The experimental study was carried out using an in-house experimental test rig, where a nozzle was examined for inlet air relative humidity values in the range of 25 to 51%. The nozzle was tested for supersonic flow as well as for flow with shock waves induced by elevated back pressure. The Schlieren photography technique and measurement of static pressure on the nozzle wall were used for qualitative identification of both condensation and shock waves. A numerical model validated against experimental data available in the literature was used for the analysis of the occurring flow phenomena. The analysis of the number, diameter, and character (solid or liquid) of the suspended particles revealed their connection with the importance of heterogeneous condensation. If the expansion of a fluid without suspended particles is considered, condensation triggers a so-called condensation wave that appears downstream of the nozzle throat.
If solid particles are considered, condensation is triggered, with increasing particle number, upstream of the nozzle throat, decreasing the strength of the condensation wave. Due to the release of latent heat during condensation, the fluid temperature and pressure increase, leading to a shift of the normal shock upstream. Owing to the relatively large diameters of the droplets created during heterogeneous condensation, they evaporate partially on the shock and continue to evaporate downstream of the nozzle. If liquid water particles are considered, due to their larger radius, they do not affect the expanding flow significantly; however, they might be of major importance when considering compression phenomena, as they tend to evaporate on the shock wave. This research demonstrates the need for further study of phase change phenomena in supersonic flow, especially the interaction of droplets with the aerodynamic waves in the flow.
Keywords: aerodynamics, computational fluid dynamics, condensation, moist air, multi-phase flows
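For reference, the expansion rate P quoted above is conventionally defined in the condensing-flow literature from the static pressure history along a streamline; this is the standard definition, not a formula given in the abstract itself:

```latex
P = -\frac{1}{p}\,\frac{\mathrm{d}p}{\mathrm{d}t} \approx 1000\ \mathrm{s}^{-1}
```

A larger P means the pressure drops faster along the expansion, which drives the vapor further into supersaturation before nucleation relieves it.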
Procedia PDF Downloads 118
536 Legal Considerations in Fashion Modeling: Protecting Models' Rights and Ensuring Ethical Practices
Authors: Fatemeh Noori
Abstract:
The fashion industry is a dynamic and ever-evolving realm that continuously shapes societal perceptions of beauty and style. Within this industry, fashion modeling plays a crucial role, acting as the visual representation of brands and designers. However, behind the glamorous façade lies a complex web of legal considerations that govern the rights, responsibilities, and ethical practices within the field. This paper aims to explore the legal landscape surrounding fashion modeling, shedding light on key issues such as contract law, intellectual property, labor rights, and the increasing importance of ethical considerations in the industry. Fashion modeling involves the collaboration of various stakeholders, including models, designers, agencies, and photographers. To ensure a fair and transparent working environment, it is imperative to establish a comprehensive legal framework that addresses the rights and obligations of each party involved. One of the primary legal considerations in fashion modeling is the contractual relationship between models and agencies. Contracts define the terms of engagement, including payment, working conditions, and the scope of services. This section will delve into the essential elements of modeling contracts, the negotiation process, and the importance of clarity to avoid disputes. Models are not just individuals showcasing clothing; they are integral to the creation and dissemination of artistic and commercial content. Intellectual property rights, including image rights and the use of a model's likeness, are critical aspects of the legal landscape. This section will explore the protection of models' image rights, the use of their likeness in advertising, and the potential for unauthorized use. Models, like any other professionals, are entitled to fair and ethical treatment. This section will address issues such as working conditions, hours, and the responsibility of agencies and designers to prioritize the well-being of models. 
Additionally, it will explore the global movement toward inclusivity, diversity, and the promotion of positive body image within the industry. The fashion industry has faced scrutiny for perpetuating harmful standards of beauty and fostering a culture of exploitation. This section will discuss the ethical responsibilities of all stakeholders, including the promotion of diversity, the prevention of exploitation, and the role of models as influencers for positive change. In conclusion, the legal considerations in fashion modeling are multifaceted, requiring a comprehensive approach to protect the rights of models and ensure ethical practices within the industry. By understanding and addressing these legal aspects, the fashion industry can create a more transparent, fair, and inclusive environment for all stakeholders involved in the art of modeling.
Keywords: fashion modeling contracts, image rights in modeling, labor rights for models, ethical practices in fashion, diversity and inclusivity in modeling
Procedia PDF Downloads 77
535 CO₂ Recovery from Biogas and Successful Upgrading to Food-Grade Quality: A Case Study
Authors: Elisa Esposito, Johannes C. Jansen, Loredana Dellamuzia, Ugo Moretti, Lidietta Giorno
Abstract:
The reduction of CO₂ emissions into the atmosphere as a result of human activity is one of the most important environmental challenges of the coming decades. The emission of CO₂ related to the use of fossil fuels is believed to be one of the main causes of global warming and climate change. In this scenario, the production of biomethane from organic waste, as a renewable energy source, is one of the most promising strategies to reduce fossil fuel consumption and greenhouse gas emissions. Unfortunately, biogas upgrading still produces the greenhouse gas CO₂ as a waste product. Therefore, this work presents a case study on biogas upgrading aimed at the simultaneous purification of methane and CO₂ via different steps, including CO₂/methane separation by polymeric membranes. The original objective of the project was to upgrade the biogas to distribution-grid-quality methane, but the innovative aspect of this case study is the further purification of the captured CO₂, transforming it from a useless by-product into a pure gas of food-grade quality, suitable for commercial application in the food and beverage industry. The study was performed on a pilot plant constructed by Tecno Project Industriale Srl (TPI), Italy, a model of one of the largest biogas production and purification plants. The full-scale anaerobic digestion plant (Montello Spa, North Italy) has a digestion capacity of 400,000 tonnes of biomass/year and can treat 6,250 m³/hour of biogas from FORSU (the organic fraction of solid urban waste). The entire upgrading process consists of a number of purification steps: 1. dehydration of the raw biogas by condensation; 2. removal of trace impurities such as H₂S via absorption; 3. separation of CO₂ and methane via a membrane separation process; 4. removal of trace impurities from the CO₂. The gas separation with polymeric membranes guarantees complete simultaneous removal of microorganisms.
The chemical purity of the different process streams was analysed by a certified laboratory and compared with the guidelines of the European Industrial Gases Association and the International Society of Beverage Technologists (EIGA/ISBT) for CO₂ used in the food industry. The microbiological purity was compared with the limit values defined in the European Collaborative Action. With a purity of 96-99 vol%, the purified methane meets the legal requirements for the household network. At the same time, the CO₂ reaches a purity of >98.1% before, and 99.9% after, the final distillation process. According to the EIGA/ISBT guidelines, the CO₂ proves to be chemically and microbiologically sufficiently pure for food-grade applications.
Keywords: biogas, CO₂ separation, CO₂ utilization, CO₂ food grade
Procedia PDF Downloads 212
534 Documentary Project as an Active Learning Strategy in a Developmental Psychology Course
Authors: Ozge Gurcanli
Abstract:
Recent studies of active learning focus on how student experience varies with the content (e.g., STEM versus humanities) and the medium (e.g., in-class exercises versus off-campus activities) of experiential learning. However, little is known about whether variation in classroom time and space within the same active learning context affects student experience. This study manipulated the use of classroom time for the active learning component of a developmental psychology course offered at a four-year university in the south-west region of the United States. The course uses a blended model: traditional and active learning. In the traditional component, students do weekly readings, listen to lectures, and take midterms. In the active learning component, students make a documentary on a developmental topic as a final project. Students used classroom time and space for the documentary in two ways: regular classroom time slots dedicated to making the documentary outside without the supervision of the professor (Classroom-time Outside), and lectures that offered basic instructions on how to make a documentary (Documentary Lectures). The study used the public teaching evaluations administered by the Office of the Registrar. A total of two hundred and seven student evaluations were available across six semesters. Because the Office of the Registrar presented the data separately without personal identifiers, a one-way ANOVA with four groups (Traditional; Experiential-Heavy: 19% Classroom-time Outside, 12% Documentary Lectures; Experiential-Moderate: 5-7% Classroom-time Outside, 16-19% Documentary Lectures; Experiential-Light: 4-7% Classroom-time Outside, 7% Documentary Lectures) was conducted on five key features (Organization, Quality, Assignments Contribution, Intellectual Curiosity, Teaching Effectiveness). Each measure used a five-point reverse-coded scale (1 = Outstanding, 5 = Poor).
For all experiential conditions, the documentary counted toward 30% of the final grade. Organization ('The instructor's preparation for class was'), Quality ('Overall, I would rate the quality of this course as'), and Assignment Contribution ('The contribution of the graded work to the learning experience was') did not yield significant differences across the four course types (F(3, 202) = 1.72, p > .05; F(3, 200) = .32, p > .05; F(3, 203) = .43, p > .05, respectively). Intellectual Curiosity ('The instructor's ability to stimulate intellectual curiosity was') yielded a marginal effect (F(3, 201) = 2.61, p = .053). Tukey's HSD (p < .05) indicated that the Experiential-Heavy condition (M = 1.94, SD = .82) was significantly different from the other three conditions (M = 1.57, 1.51, 1.58; SD = .68, .66, .77, respectively), showing that heavily active class time did not elicit intellectual curiosity as much as the others. Finally, Teaching Effectiveness ('Overall, I feel that the instructor's effectiveness as a teacher was') was significant (F(3, 198) = 3.32, p < .05). Tukey's HSD (p < .05) showed that students found the courses with moderate (M = 1.49, SD = .62) to light (M = 1.52, SD = .70) active class time more effective than heavily active class time (M = 1.93, SD = .69). Overall, the findings of this study suggest that, within the same active learning context, the time and space dedicated to active learning produce different outcomes in intellectual curiosity and teaching effectiveness.
Keywords: active learning, learning outcomes, student experience, learning context
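The one-way ANOVA used above can be sketched in a few lines of code. The ratings below are illustrative values on the same 1-5 reverse-coded scale, not the actual evaluation data, and the function name is an illustrative choice.

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic for a list of independent samples:
    F = (SS_between / (k - 1)) / (SS_within / (n - k))."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (df = k - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (df = n - k)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative ratings from three hypothetical course formats
# (1 = Outstanding, 5 = Poor, matching the evaluation scale above).
groups = [[1, 2, 3], [2, 3, 4], [4, 5, 5]]
print(f"F = {one_way_anova_F(groups):.1f}")  # F = 7.0
```

The resulting F is then compared against the F distribution with (k - 1, n - k) degrees of freedom; a post-hoc test such as Tukey's HSD identifies which specific group pairs differ.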
Procedia PDF Downloads 190