Search results for: complex simulation suite
1355 Electroactivity of Clostridium saccharoperbutylacetonicum 1-4N during Carbon Dioxide Reduction in a Bioelectrosynthesis System
Authors: Carlos A. Garcia-Mogollon, Juan C. Quintero-Diaz, Claudio Avignone-Rossa
Abstract:
Clostridium saccharoperbutylacetonicum 1-4N (Csb 1-4N) is an industrial reference strain for Acetone-Butanol-Ethanol (ABE) fermentation. Csb 1-4N is a solventogenic clostridium and H₂ producer with a metabolic profile that makes it a good candidate for a Bioelectrosynthesis System (BES). The aim of this study was to evaluate the electroactivity of Csb 1-4N by cyclic voltammetry (CV). The BES fermentation started in a Tryptone-Yeast extract (TY) medium with trace elements and vitamins, a Complex Nitrogen Source (CNS), and bicarbonate (NaHCO₃, 4 g/L) as a carbon source, run at -600 mV vs. Ag/AgCl with 200 µM NADH added. Six BES batches were performed with different media compositions, with and without NADH, CNS, HCO₃⁻, and applied potential. CV was performed in a three-electrode system: a platinum working electrode (WE), a nickel counter electrode (CE), and an Ag/AgCl reference electrode (RE). CVs were run in a potential range of -0.7 V to 0.7 V vs. Ag/AgCl at a scan rate of 10 mV/s. CVs were also recorded for different NaHCO₃ concentrations (0.25, 0.5, 1.0, and 4 g/L). BES fermentation samples were centrifuged (3000 rpm, 5 min, 4 °C), and the supernatant (7 mL) was used. CVs were obtained for Csb 1-4N BES culture cell-free supernatant at 0 h, 24 h, and 48 h. The electrochemical analysis was carried out with a PalmSens 4.0 potentiostat/galvanostat controlled with the PSTrace 5.7 software, and the CV curves were characterized by their reduction and oxidation currents and reduction and oxidation peaks. The CVs obtained for the NaHCO₃ solutions showed that both the reduction and oxidation currents decreased as the NaHCO₃ concentration decreased. In the BES cultures, all reduction and oxidation currents decreased until exponential growth stopped (24 h), independently of the initial cathodic current, except in the medium with trace elements, vitamins, and NaHCO₃, in which the reduction current dropped to roughly half at 24 h and kept decreasing at 48 h. In this medium, Csb 1-4N did not grow, but the pH increased, indicating that NaHCO₃ was reduced as the reduction current decreased. In general, at 48 h the reduction currents did not differ substantially between the different media in the BES cultures. Peak intensities (Ip) did not vary substantially either, except that Ipa and Ipc in the BES culture with NaHCO₃ and NADH added were higher than the peaks in the other cultures. Based on these results, the changes in cathodic and anodic currents were induced by NaHCO₃ reduction reactions during Csb 1-4N metabolic activity in the different BES experiments.
Keywords: clostridium saccharoperbutylacetonicum 1-4N, bioelectrosynthesis, carbon dioxide fixation, cyclic voltammetry
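As an aside on how the reported peak currents could be extracted in practice, the sketch below shows one possible way to pull anodic and cathodic peak currents (Ipa, Ipc) out of an exported voltammogram in Python. The file name, column layout, and prominence threshold are assumptions for illustration; the abstract does not describe the authors' actual data-processing steps beyond the use of PSTrace.

# Illustrative sketch only: extracting Ipa/Ipc from an exported CV curve.
# File name, column order, and thresholds are assumed, not taken from the paper.
import numpy as np
from scipy.signal import find_peaks

data = np.loadtxt("cv_48h.csv", delimiter=",", skiprows=1)  # columns: E (V), I (A), assumed
potential, current = data[:, 0], data[:, 1]

# Anodic (oxidation) peaks appear as maxima, cathodic (reduction) peaks as minima
anodic_idx, _ = find_peaks(current, prominence=1e-6)
cathodic_idx, _ = find_peaks(-current, prominence=1e-6)

for name, idx in [("Ipa", anodic_idx), ("Ipc", cathodic_idx)]:
    for i in idx:
        print(f"{name}: {current[i]:.3e} A at {potential[i]:.3f} V vs. Ag/AgCl")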
Procedia PDF Downloads 139
1354 LTE Modelling of a DC Arc Ignition on Cold Electrodes
Authors: O. Ojeda Mena, Y. Cressault, P. Teulet, J. P. Gonnet, D. F. N. Santos, MD. Cunha, M. S. Benilov
Abstract:
The assumption of plasma in local thermal equilibrium (LTE) is commonly used to perform electric arc simulations for industrial applications. This assumption allows the arc to be modelled using a set of magneto-hydrodynamic equations that can be solved with a computational fluid dynamics code. However, the LTE description is only valid in the arc column, whereas in the regions close to the electrodes the plasma deviates from the LTE state. These near-electrode regions are nonetheless important, since they define the energy and current transfer between the arc and the electrodes. Therefore, any accurate modelling of the arc must include a good description of the arc-electrode phenomena. Due to the modelling complexity and the computational cost of solving the near-electrode layers, a simplified description of the arc-electrode interaction was developed in a previous work to study a steady high-pressure arc discharge, where the near-electrode regions are introduced at the interface between arc and electrode as boundary conditions. The present work proposes a similar approach to simulate the arc ignition in a free-burning arc configuration following an LTE description of the plasma. To obtain the transient evolution of the arc characteristics, appropriate boundary conditions for both the near-cathode and the near-anode regions are used, based on recent publications. The arc-cathode interaction is modelled using a non-linear surface heating approach that takes secondary electron emission into account. On the other hand, the interaction between the arc and the anode is taken into account by means of the heating voltage approach. From the numerical modelling, three main stages can be identified during the arc ignition. Initially, a glow discharge is observed, where the cold non-thermionic cathode is uniformly heated at its surface and the near-cathode voltage drop is on the order of a few hundred volts. Next, a spot with high temperature forms at the cathode tip, followed by a sudden decrease of the near-cathode voltage drop, marking the glow-to-arc discharge transition. During this stage, the LTE plasma also shows an important increase of the temperature in the region adjacent to the hot spot. Finally, the near-cathode voltage drop stabilizes at a few volts, and both the electrode and plasma temperatures reach the steady solution. The results after a few seconds are similar to those reported for thermionic cathodes.
Keywords: arc-electrode interaction, thermal plasmas, electric arc simulation, cold electrodes
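For readers unfamiliar with the "non-linear surface heating" idea, the sketch below illustrates its general structure only: one-dimensional transient heat conduction in the cathode driven by a surface heat flux that depends on the surface temperature. The flux function, material properties, and geometry are placeholders invented for the example; the actual near-cathode layer model and coupling used in the paper are not reproduced here.

# Illustrative sketch only: 1-D explicit finite-difference heating of a cathode
# with a temperature-dependent surface flux. All values are assumed placeholders.
import numpy as np

L, N = 5e-3, 101                  # cathode depth (m) and grid points, assumed
dx = L / (N - 1)
k, rho, cp = 90.0, 8900.0, 440.0  # thermal properties (roughly nickel), assumed
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha          # explicit stability limit

def q_surface(T_w):
    """Placeholder near-cathode heat flux vs. surface temperature (W/m^2)."""
    return 1e7 * np.tanh(T_w / 3000.0)

T = np.full(N, 300.0)             # initial temperature (K)
for _ in range(20000):
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    Tn[0] = Tn[1] + q_surface(T[0]) * dx / k   # flux boundary on the arc side
    Tn[-1] = 300.0                              # cooled base
    T = Tn
print(f"surface temperature after {20000*dt:.2f} s: {T[0]:.0f} K")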
Procedia PDF Downloads 125
1353 Inversely Designed Chipless Radio Frequency Identification (RFID) Tags Using Deep Learning
Authors: Madhawa Basnayaka, Jouni Paltakari
Abstract:
Fully passive backscattering chipless RFID tags are an emerging wireless technology with low cost, longer reading distance, and fast automatic identification without human interference, unlike already available technologies such as optical barcodes. The design optimization of chipless RFID tags is crucial, as it requires replacing the integrated chips found in conventional RFID tags with printed geometric designs. These designs enable data encoding and decoding through backscattered electromagnetic (EM) signatures. The applications of chipless RFID tags have been limited by the constraints of data encoding capacity and the difficulty of designing accurate yet efficient configurations. The traditional approach to finding design parameters for a desired EM response involves iteratively adjusting the design parameters and simulating until the desired EM spectrum is achieved. However, traditional numerical simulation methods are limited in how efficiently they can optimize design parameters because of their speed and resource consumption. In this work, a deep neural network (DNN) is utilized to establish a correlation between the EM spectrum and the dimensional parameters of nested concentric rings, specifically square and octagonal. The proposed bi-directional DNN has two simultaneously running neural networks, namely spectrum prediction and design parameter prediction. First, the spectrum prediction DNN was trained to minimize the mean square error (MSE). After the training process was completed, the spectrum prediction DNN was able to accurately predict the EM spectrum for given input design parameters within a few seconds. Then, the trained spectrum prediction DNN was connected to the design parameter prediction DNN and the two networks were trained simultaneously. For the first time in chipless tag design, design parameters were predicted accurately for a desired EM spectrum after training the bi-directional DNN. The model was evaluated using a randomly generated spectrum, and the tag was manufactured using the predicted geometrical parameters. The manufactured tags were successfully tested in the laboratory. The number of iterative computer simulations has been significantly decreased by this approach. Therefore, highly efficient yet ultrafast bi-directional DNN models allow rapid design of complicated chipless RFID tags.
Keywords: artificial intelligence, chipless RFID, deep learning, machine learning
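A minimal sketch of how such a forward/inverse (bi-directional) network pair could be wired up, assuming a PyTorch implementation with invented layer sizes; the abstract does not disclose the actual architecture, framework, or training data. Here the pre-trained spectrum-prediction network is frozen while the design-parameter network is trained against it, a tandem-style simplification of the simultaneous training described in the abstract.

# Illustrative sketch only: forward (parameters -> spectrum) and inverse
# (spectrum -> parameters) networks. Sizes and data shapes are assumptions.
import torch
import torch.nn as nn

N_PARAMS = 8      # assumed number of geometric design parameters (ring dimensions)
N_POINTS = 128    # assumed number of sampled points in the EM spectrum

# Forward network: design parameters -> EM spectrum ("spectrum prediction")
spectrum_net = nn.Sequential(
    nn.Linear(N_PARAMS, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_POINTS),
)

# Inverse network: EM spectrum -> design parameters ("design parameter prediction")
design_net = nn.Sequential(
    nn.Linear(N_POINTS, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_PARAMS),
)

mse = nn.MSELoss()
opt = torch.optim.Adam(design_net.parameters(), lr=1e-3)

def train_inverse(spectra, params, epochs=100):
    """Train the inverse network with the pre-trained, frozen forward network
    in the loop, so predicted parameters must reproduce the target spectrum."""
    spectrum_net.eval()
    for p in spectrum_net.parameters():
        p.requires_grad_(False)
    for _ in range(epochs):
        opt.zero_grad()
        pred_params = design_net(spectra)
        # Re-simulate the spectrum with the frozen forward model
        pred_spectra = spectrum_net(pred_params)
        loss = mse(pred_spectra, spectra) + mse(pred_params, params)
        loss.backward()
        opt.step()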
Procedia PDF Downloads 51
1352 Approaches to Valuing Ecosystem Services in Agroecosystems From the Perspectives of Ecological Economics and Agroecology
Authors: Sandra Cecilia Bautista-Rodríguez, Vladimir Melgarejo
Abstract:
Climate change, loss of ecosystems, increasing poverty, increasing marginalization of rural communities, and declining food security are global issues that require urgent attention. In this regard, a great deal of research has focused on how agroecosystems respond to these challenges, as they provide ecosystem services (ES) that lead to higher levels of resilience, adaptation, productivity, and self-sufficiency. Hence, the valuation of ecosystem services plays an important role in decision-making for the design and management of agroecosystems. This paper aims to define the link between ecosystem service valuation methods and ES value dimensions in agroecosystems from the perspectives of ecological economics and agroecology. The valuation methodologies were identified through a literature review in the fields of agroecology and ecological economics, based on a strategy of information search and classification. The conceptual framework of the work is based on the multidimensionality of value, considering the social, ecological, political, technological, and economic dimensions. Likewise, the valuation process requires consideration of the ecosystem functions associated with ES, such as regulation, habitat, production, and information functions. In this way, valuation methods for ES in agroecosystems can integrate more than one value dimension and at least one ecosystem function. The results allow the ecosystem functions to be correlated with the ecosystem services valued, the specific tools or models used, the value dimensions, and the valuation methods. The main methodologies identified are (1) multi-criteria valuation, (2) deliberative-consultative valuation, (3) valuation based on system dynamics modeling, (4) valuation through energy or biophysical balances, (5) valuation through fuzzy logic modeling, and (6) valuation based on agent-based modeling. Amongst the main conclusions, it is highlighted that the system dynamics modeling approach has a high potential for development in valuation processes, due to its ability to integrate other methods, especially multi-criteria valuation and energy and biophysical balances, and to describe through causal loops the interrelationships between ecosystem services and the dimensions of value in agroecosystems, thus showing the relationships between the value of ecosystem services and the welfare of communities. As for methodological challenges, integrating the tools and models provided by the different methods remains key to capturing the characteristics of a complex system such as the agroecosystem and thereby reducing the limitations of ES valuation processes.
Keywords: ecological economics, agroecosystems, ecosystem services, valuation of ecosystem services
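To make the system dynamics option more concrete, the toy stock-and-flow loop below couples an ecosystem-service stock to a community-welfare indicator via simple Euler integration. All variables, rates, and the causal structure are invented for illustration and are not taken from the reviewed studies.

# Illustrative sketch only: a toy system dynamics loop. Every rate and variable
# is an invented placeholder, not a value from the reviewed literature.
import numpy as np

dt, years = 0.1, 30
steps = int(years / dt)
es_stock = np.zeros(steps)    # e.g., a soil-fertility regulating service
welfare = np.zeros(steps)     # aggregate community welfare indicator
es_stock[0], welfare[0] = 100.0, 50.0

regen_rate, extraction_rate, welfare_gain, welfare_decay = 0.05, 0.03, 0.4, 0.02

for t in range(1, steps):
    use = extraction_rate * welfare[t-1]                 # more welfare -> more use
    d_es = regen_rate * es_stock[t-1] * (1 - es_stock[t-1] / 150.0) - use
    d_w = welfare_gain * use - welfare_decay * welfare[t-1]
    es_stock[t] = es_stock[t-1] + d_es * dt
    welfare[t] = welfare[t-1] + d_w * dt

print(f"final ES stock: {es_stock[-1]:.1f}, final welfare: {welfare[-1]:.1f}")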
Procedia PDF Downloads 125
1351 Unraveling Language Dynamics: A Case Study of Language in Education in Pakistan
Authors: Naseer Ahmad
Abstract:
This research investigates the intricate dynamics of language policy, ideology, and the choice of educational language as a medium of instruction in rural Pakistan. Focused on addressing the complexities of language practices in underexplored educational contexts, the study employed a case study approach, analyzing interviews with education authorities, teachers, and students, alongside classroom observations in English-medium and Urdu-medium rural schools. The research underscores the significance of understanding linguistic diversity within rural communities. The analysis of interviews and classroom observations revealed that language policies in rural schools are influenced by multiple factors, including historical legacies, societal language ideologies, and government directives. The dominance of Urdu and English as the preferred languages of instruction reflects a broader language hierarchy in which regional languages are often marginalized. This language ideology perpetuates a sense of linguistic inferiority among students who primarily speak regional languages. The impact of language choices on students' learning experiences and outcomes is a central focus of the research. It became evident that while policies advocate for specific language practices, implementation often diverges due to multifarious socio-cultural, economic, and institutional factors. This disparity significantly affects the effectiveness of educational processes, influencing pedagogical approaches, student engagement, academic outcomes, social mobility, and language choices. Based on the findings, the study concluded that, owing to the gap between policy and practice, rural people hold complex perceptions and make complex language choices. They perceived Urdu as a national, lingua franca, cultural, easy, or low-status language. They perceived English as an international, lingua franca, modern, difficult, or high-status language. They perceived other languages as mother tongues, local, religious, or irrelevant languages. This research provided insights that are crucial for theory, policy, and practice in addressing educational inequities and shaping inclusive language policies. It set the stage for further research and advocacy efforts in the realm of language policies in diverse educational settings.
Keywords: language-in-education policy, language ideology, educational language choice, Pakistan
Procedia PDF Downloads 74
1350 Characterization, Replication and Testing of Designed Micro-Textures, Inspired by the Brill Fish, Scophthalmus rhombus, for the Development of Bioinspired Antifouling Materials
Authors: Chloe Richards, Adrian Delgado Ollero, Yan Delaure, Fiona Regan
Abstract:
Growing concern about the natural environment has accelerated the search for non-toxic, yet economically reasonable, antifouling materials. Bioinspired surfaces, thanks to their nano- and micro-topographical antifouling capabilities, provide a promising approach to the design of novel antifouling surfaces. Biological organisms are known to have highly evolved and complex topographies demonstrating antifouling potential, e.g., shark skin. Previous studies have examined the antifouling ability of topographic patterns, textures, and roughness scales found on natural organisms. One of the mechanisms used to explain the adhesion of cells to a substrate is called attachment point theory. Here, the fouling organism experiences increased attachment where there are multiple attachment points and reduced attachment where the number of attachment points is decreased. In this study, an attempt was made to characterize the microtopography of the common brill fish, Scophthalmus rhombus. Scophthalmus rhombus is a small flatfish of the family Scophthalmidae, inhabiting regions from Norway to the Mediterranean and the Black Sea. They reside in shallow sandy and muddy coastal areas, down to depths of around 70 – 80 meters. Six engineered surfaces (inspired by the brill fish scale) produced by a two-photon polymerization (2PP) process were evaluated for their potential as an antifouling solution for incorporation onto tidal energy blades. The micro-textures were analyzed for their AF potential under both static and dynamic laboratory conditions using two laboratory-grown diatom species, Amphora coffeaeformis and Nitzschia ovalis. The incorporation of a surface topography was observed to disrupt the growth of A. coffeaeformis and N. ovalis cells on the surface in comparison to control surfaces. This work has demonstrated the importance of understanding the cell-surface interaction, in particular topography, for the design of novel antifouling technology. The study concluded that biofouling can be controlled by physical modification, and it contributes significant knowledge to the use of a successful novel bioinspired AF technology based on the brill for the first time.
Keywords: attachment point theory, biofouling, Scophthalmus rhombus, topography
Procedia PDF Downloads 109
1349 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning
Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga
Abstract:
Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (e.g., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industry, and these produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and both are notoriously produced by high-temperature combustion processes (e.g., car engines or thermal power stations). We can see the same process for industrial plants as well. What has to be investigated – and is the topic of this paper – is whether or not there really is a correlation between noise pollution and air pollution (taking NO₂ into account) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise App will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board to which the sensors and all the other components are connected. After assembling the sensors, they will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time, covering both weekdays and weekend days, so that changes during the week can be observed. The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained from the sensors. To do so, the data will be converted to a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help in choosing the right mitigation solutions to be applied in the analysed area, because it makes it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper describes in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, and the noise and pollution mapping and analysis.
Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter
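A minimal sketch of the planned comparison step, assuming the paired sensor logs are exported as CSV: each series is min-max rescaled to a 0-100% scale, and the two rescaled signals are then correlated and split into weekday/weekend averages. Column names, the file name, and the normalisation choice are assumptions, not details taken from the paper.

# Illustrative sketch only: rescaling co-located noise/NO2 series to 0-100%
# and checking their correlation. File and column names are assumed.
import pandas as pd

def to_percent(series: pd.Series) -> pd.Series:
    """Min-max rescale a sensor series to 0-100%."""
    return 100.0 * (series - series.min()) / (series.max() - series.min())

# Hypothetical hourly data from one co-located noise/air sensor pair
df = pd.read_csv("mestre_site_01.csv", parse_dates=["timestamp"])
df["noise_pct"] = to_percent(df["noise_db"])
df["no2_pct"] = to_percent(df["no2_ppb"])

# Pearson correlation between the two rescaled pollutant signals
print(df["noise_pct"].corr(df["no2_pct"]))

# Weekday vs. weekend comparison, since the sensors log both
df["is_weekend"] = df["timestamp"].dt.dayofweek >= 5
print(df.groupby("is_weekend")[["noise_pct", "no2_pct"]].mean())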
Procedia PDF Downloads 213
1348 Increasing Access to Upper Limb Reconstruction in Cervical Spinal Cord Injury
Authors: Michelle Jennett, Jana Dengler, Maytal Perlman
Abstract:
Background: Cervical spinal cord injury (SCI) is a devastating event that results in upper limb paralysis, loss of independence, and disability. People living with cervical SCI have identified improvement of upper limb function as a top priority. Nerve and tendon transfer surgery has successfully restored upper limb function in cervical SCI but is not universally used or available to all eligible individuals. This exploratory mixed-methods study used an implementation science approach to better understand the factors that influence access to upper limb reconstruction in the Canadian context and to design an intervention to increase access to care. Methods: Data from the Canadian Institute for Health Information's Discharge Abstract Database (CIHI-DAD) and the National Ambulatory Care Reporting System (NACRS) were used to determine the annual rate of nerve transfer and tendon transfer surgeries performed in cervical SCI in Canada over the last 15 years. Semi-structured interviews informed by the Consolidated Framework for Implementation Research (CFIR) were used to explore Ontario healthcare providers' knowledge and practices around upper limb reconstruction. An inductive, iterative constant comparative process involving descriptive and interpretive analyses was used to identify themes that emerged from the data. Results: Healthcare providers (n = 10 upper extremity surgeons, n = 10 SCI physiatrists, n = 12 physical and occupational therapists working with individuals with SCI) were interviewed about their knowledge and perceptions of upper limb reconstruction and their current practices and discussions around upper limb reconstruction. Data analysis is currently underway and will be presented. Regional variation in rates of upper limb reconstruction and trends over time are also currently being analyzed. Conclusions: Utilization of nerve and tendon transfer surgery for upper limb reconstruction in Canada remains low. There is a complex array of interrelated individual-, provider- and system-level barriers that prevent individuals with cervical SCI from accessing upper limb reconstruction. In order to offer equitable access to care, a multi-modal approach addressing current barriers is required.
Keywords: cervical spinal cord injury, nerve and tendon transfer surgery, spinal cord injury, upper extremity reconstruction
Procedia PDF Downloads 99
1347 Bionaut™: A Minimally Invasive Microsurgical Platform to Treat Non-Communicating Hydrocephalus in Dandy-Walker Malformation
Authors: Suehyun Cho, Darrell Harrington, Florent Cros, Olin Palmer, John Caputo, Michael Kardosh, Eran Oren, William Loudon, Alex Kiselyov, Michael Shpigelmacher
Abstract:
The Dandy-Walker malformation (DWM) represents a clinical syndrome manifesting as a combination of a posterior fossa cyst, hypoplasia of the cerebellar vermis, and obstructive hydrocephalus. Anatomic hallmarks include hypoplasia of the cerebellar vermis, enlargement of the posterior fossa, and cystic dilatation of the fourth ventricle. Current treatments of DWM, including shunting of the cerebrospinal fluid ventricular system and endoscopic third ventriculostomy (ETV), are frequently clinically insufficient, require additional surgical interventions, and carry risks of infections and neurological deficits. Bionaut Labs is developing an alternative way to treat Dandy-Walker malformation (DWM) associated with non-communicating hydrocephalus. We utilize our discrete microsurgical Bionaut™ particles, which are controlled externally and remotely, to perform safe, accurate, and effective fenestration of the Dandy-Walker cyst, specifically in the posterior fossa of the brain, in order to directly normalize intracranial pressure. Bionaut™ allows for complex non-linear trajectories not feasible with any conventional surgical technique. The microsurgical particle safely reaches targets in the lower occipital section of the brain. Bionaut™ offers a minimally invasive surgical alternative to a highly involved posterior craniotomy or shunts, via direct fenestration of the fourth ventricular cyst at the locus defined by the individual anatomy. Our approach offers significant advantages over the current standards of care in patients exhibiting anatomical challenges as a manifestation of DWM and is therefore intended to replace conventional therapeutic strategies. Current progress, including platform optimization, Bionaut™ control, real-time imaging, and in vivo safety studies of the Bionauts™ in large animals, specifically the spine and the brain of ovine models, will be discussed.
Keywords: Bionaut™, cerebrospinal fluid, CSF, cyst, Dandy-Walker, fenestration, hydrocephalus, micro-robot
Procedia PDF Downloads 223
1346 On the Right to an Effective Administrative Justice in the Republic of Macedonia: Challenges and Problems
Authors: Arlinda Memetaj
Abstract:
A sound system of administrative justice represents a vital element of democratic governance. The proper control of public administration consists not only of a sound civil service framework and legislative oversight, but also of the empowerment of the public and the courts to hold public officials accountable for their decision-making through the application of fair administrative procedural rules and the use of appropriate administrative appeals processes and judicial review. The establishment of an effective public administration has been, since the 1990s, among the most 'important and urgent' final strategic objectives of the Republic of Macedonia. To this end, the country has so far adopted a broad series of legislative and strategic documents covering all aspects of the administrative justice system. The latter is designed to strengthen the legal position of citizens, businesses, civic organizations, and other societal subjects. 'Changes and reforms' in this field have thus been the most frequently used terms in the country for more than 20 years. Several years ago, the country established Administrative Courts, while repeatedly amending the Law on the General Administrative Procedure (LGAP). The new LGAP was adopted in 2015, and it introduced considerable innovations. The most recent inputs in this regard include the National Public Administration Reform Strategy 2017 – 2022, one of the key expected results of which is the effective protection of citizens' rights. Notwithstanding the above, there is still a series of interrelated shortcomings in this regard, such as (to mention just a few) the complex appeal procedure and delays in enforcing court rulings. Against this background, the paper first describes the Macedonian institutional and legislative framework in the above field and then illustrates its shortcomings. It finally claims that the current status quo may be overcome only through proper implementation of administrative court decisions and a far stricter international monitoring process. A new approach and strong political commitment from the highest political leadership are thus absolutely needed to ensure the principles of transparency, accountability, and merit in public administration. The main methods used in this paper are descriptive, analytical, and comparative, in keeping with the character of the paper itself.
Keywords: administrative justice, administrative procedure, administrative courts/disputes, European Human Rights Court, human rights, monitoring, reform, benefit
Procedia PDF Downloads 158
1345 Deep Injection Wells for Flood Prevention and Groundwater Management
Authors: Mohammad R. Jafari, Francois G. Bernardeau
Abstract:
With its arid climate, Qatar experiences low annual rainfall, intense storms, and high evaporation rates. However, the fast-paced rate of infrastructure development in the capital city of Doha has led to recurring instances of surface water flooding as well as rising groundwater levels. The Public Works Authority (PWA/ASHGHAL) has implemented an approach to collect and discharge the flood water into a) positive gravity systems; b) an Emergency Flooding Area (EFA) – evaporation, infiltration, or off-site storage using tankers; and c) deep injection wells. As part of the flood prevention scheme, 21 deep injection wells have been constructed to discharge the collected surface water and groundwater in Doha city. These injection wells function as an alternative in localities that possess neither positive gravity systems nor downstream networks that can accommodate additional loads. The wells are 400 m deep and are constructed in complex karstic subsurface conditions with large cavities. The injection well system discharges the collected groundwater and storm surface runoff into the permeable Umm Er Radhuma Formation, an aquifer present throughout the Persian Gulf region. The Umm Er Radhuma Formation contains saline water that is not used for water supply. The injection zone is separated from the shallow aquifer by an impervious gypsum formation, which acts as a barrier between the upper and lower aquifers. State-of-the-art drilling, grouting, and geophysical techniques have been implemented in the construction of the wells to ensure that the shallow aquifer is not contaminated or otherwise impacted by the injected water. Injection and pumping tests were performed to evaluate the injection well functionality (injectability). The results of these tests indicated that the majority of the wells can accept injection rates of 200 to 300 m³/h (56 to 83 l/s) under gravity, with an average value of 250 m³/h (70 l/s), compared to the design value of 50 l/s. This paper presents the design and construction process and the issues associated with these injection wells; the injection/pumping tests performed to determine the capacity and effectiveness of the wells; the detailed design of the collection and conveyance systems feeding the injection wells; and the operation and maintenance process. The system is now complete and in operation, demonstrating that the construction of injection wells is an effective option for flood control.
Keywords: deep injection well, flood prevention scheme, geophysical tests, pumping and injection tests, wellhead assembly
Procedia PDF Downloads 120
1344 Posterior Acetabular Fractures-Optimizing the Treatment by Enhancing Practical Skills
Authors: Olivera Lupescu, Taina Elena Avramescu, Mihail Nagea, Alexandru Dimitriu
Abstract:
Acetabular fractures represent a real challenge due to their impact upon the long-term function of the hip joint and due to the risk of intra- and peri-operative complications, especially since they affect young, active people. That is why treating these fractures requires certain skills which must be practiced, regarding the pre-operative planning as well as the execution of surgery. The authors retrospectively analyse 38 cases of acetabular fractures operated on using the posterior approach in our hospital between 01.01.2013 and 01.01.2015, for which complete medical records ensure a follow-up of 24 months, in order to establish the main causes of potential errors and to underline the methods for preventing them. This target is included in the Erasmus+ project 'Collaborative learning for enhancing practical skills for patient-focused interventions in gait rehabilitation after orthopedic surgery' (COR-skills). This paper analyses the pitfalls revealed by these cases, as well as the measures necessary to enhance the practical skills of the surgeons who perform acetabular surgery. Pre-operative planning matched the intra- and post-operative outcome in 88% of the analyzed points, rising from 72% at the beginning to 94% in the last case, meaning that experience is very important in treating this injury. The main problems detected for the posterior approach were: nerve complications in 3 cases, 1 of them a complete paralysis of the sciatic nerve, which recovered 6 months after surgery; in 2 other cases, an intra-articular position of the screws was demonstrated by post-operative CT scans, so secondary screw removal was necessary. We analysed this incident as well, due to the lack of information about the relationship between the screws and the joint following this approach. Septic complications appeared in 3 cases, 2 superficial and 1 deep (requiring implant removal). The most important problems were the reduction of the fractures and the positioning of the screws so as not to interfere with the articular space. In posterior acetabular fractures, complex pre-operative planning is important in order to achieve maximum treatment efficacy with minimum risk; optimal training of the surgeons, insisting on the main points of potential mistakes, ensures the success of the procedure as well as a favorable outcome for the patient.
Keywords: acetabular fractures, articular congruency, surgical skills, vocational training
Procedia PDF Downloads 207
1343 Optimized Scheduling of Domestic Load Based on User Defined Constraints in a Real-Time Tariff Scenario
Authors: Madia Safdar, G. Amjad Hussain, Mashhood Ahmad
Abstract:
One of the major challenges of today's era is peak demand, which causes stress on the transmission lines, raises the cost of energy generation, and ultimately leads to higher electricity bills for end users; it used to be managed through supply-side management. Nowadays, however, this approach has been giving way to demand side management (DSM), which has both economic and environmental advantages. DSM of domestic loads can play a vital role in reducing the peak load demand on the network and provides significant cost savings. In this paper, the potential of demand response (DR) in reducing peak load demand and the electricity bills of end users is elaborated. For this purpose, the domestic appliances are modeled in MATLAB Simulink and controlled by a module called the energy management controller. The devices are categorized into controllable and uncontrollable loads and are operated according to a real-time tariff pricing pattern instead of fixed-time or variable pricing. The energy management controller decides the switching instants of the controllable appliances based on the results from the optimization algorithms. In the GAMS software, the MILP (mixed integer linear programming) approach is used for optimization. In different cases, different constraints are used for the optimization, considering the comfort, needs, and priorities of the end users. The results are compared, and the savings in electricity bills are discussed considering real-time pricing and fixed tariff pricing, which demonstrates the potential of demand side management to reduce electricity bills and peak loads. It is seen that using a real-time pricing tariff instead of fixed tariff pricing helps to reduce electricity bills. Moreover, the simulation results of the proposed energy management system show that the achieved power savings are substantial. It is anticipated that the results of this research will prove highly useful to utility companies as well as for the improvement of domestic DR.
Keywords: controllable and uncontrollable domestic loads, demand response, demand side management, optimization, MILP (mixed integer linear programming)
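A toy version of the scheduling problem, written as a MILP with the PuLP library for readability (the paper itself formulates the model in GAMS). The tariff profile, appliance rating, required run-time, and comfort window are invented values standing in for the user-defined constraints.

# Illustrative sketch only: shifting one controllable appliance under a
# real-time tariff. All numbers below are assumptions for the example.
import pulp

hours = range(24)
tariff = [0.08]*7 + [0.15]*4 + [0.22]*6 + [0.15]*5 + [0.08]*2   # $/kWh, assumed
power_kw = 2.0          # appliance rating (e.g., a washing machine), assumed
required_hours = 3      # how long the appliance must run in the day, assumed
allowed = set(range(6, 23))  # user-comfort window, assumed

prob = pulp.LpProblem("appliance_scheduling", pulp.LpMinimize)
x = {h: pulp.LpVariable(f"on_{h}", cat="Binary") for h in hours}

# Objective: minimise the energy cost over the day
prob += pulp.lpSum(tariff[h] * power_kw * x[h] for h in hours)

# The appliance must run for its required duration, only inside the comfort window
prob += pulp.lpSum(x[h] for h in hours) == required_hours
for h in hours:
    if h not in allowed:
        prob += x[h] == 0

prob.solve()
print("scheduled hours:", [h for h in hours if x[h].value() == 1])
print("daily cost:", pulp.value(prob.objective))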
Procedia PDF Downloads 304
1342 Characterization of the Blood Microbiome in Rheumatoid Arthritis Patients Compared to Healthy Control Subjects Using V4 Region 16S rRNA Sequencing
Authors: D. Hammad, D. P. Tonge
Abstract:
Rheumatoid arthritis (RA) is a disabling and common autoimmune disease in which the body's immune system attacks healthy tissues. This results in the immune system mounting the kind of complicated and long-lasting response that typically occurs only when it encounters a foreign object. RA affects millions of people and causes joint inflammation, ultimately leading to the destruction of cartilage and bone. Interestingly, the disease mechanism still remains unclear. It is likely that RA occurs as a result of a complex interplay of genetic and environmental factors, including an imbalance in the microorganism population inside our body. The human microbiome or microbiota is an extensive community of microorganisms in and on the bodies of animals, which comprises bacteria, fungi, viruses, and protozoa. Recently, the development of molecular techniques to characterize entire bacterial communities has renewed interest in the involvement of the microbiome in the development and progression of RA. We believe that an imbalance in specific bacterial species in the gut, mouth, and other sites may lead to atopobiosis, the translocation of these organisms into the blood, and that this may in turn alter immune system status. The aim of this study was, therefore, to characterize the microbiome of RA serum samples in comparison to healthy control subjects using 16S rRNA gene amplification and sequencing. Serum samples were obtained from healthy control volunteers and from patients with RA, both prior to and following treatment. The bacterial community present in each sample was identified using V4 region 16S rRNA amplification and sequencing. Bacterial identification, to the lowest taxonomic rank, was performed using a range of bioinformatics tools. Notably, the proportions of the Lachnospiraceae, Ruminococcaceae, and Halomonadaceae families were significantly increased in the serum of RA patients compared with healthy control serum. Furthermore, the abundances of Bacteroides, Lachnospiraceae NK4A136 group, Lachnospiraceae UCG-001, Ruminococcaceae UCG-014, Ruminococcus 1, and Shewanella were also raised in the serum of RA patients relative to healthy control serum. These data support the notion of a blood microbiome and reveal RA-associated changes that may have significant implications for biomarker development and may present much-needed opportunities for novel therapeutic development.
Keywords: blood microbiome, gut and oral bacteria, rheumatoid arthritis, 16S rRNA gene sequencing
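As an illustration of how such family-level differences could be tested, the sketch below converts 16S counts to relative abundances and compares RA versus control samples with a Mann-Whitney U test. The file name, column layout, and the choice of test are assumptions for the example, not the paper's stated pipeline.

# Illustrative sketch only: family-level relative abundances compared between
# RA and control serum samples. Data layout and the statistical test are assumed.
import pandas as pd
from scipy.stats import mannwhitneyu

# Rows = samples, columns = bacterial families, plus a 'group' column (RA / control)
counts = pd.read_csv("family_counts.csv", index_col=0)
group = counts.pop("group")

# Convert raw counts to per-sample relative abundances
rel = counts.div(counts.sum(axis=1), axis=0)

for family in ["Lachnospiraceae", "Ruminococcaceae", "Halomonadaceae"]:
    ra = rel.loc[group == "RA", family]
    ctrl = rel.loc[group == "control", family]
    stat, p = mannwhitneyu(ra, ctrl, alternative="two-sided")
    print(f"{family}: RA median={ra.median():.4f}, control median={ctrl.median():.4f}, p={p:.3g}")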
Procedia PDF Downloads 134
1341 Hope in the Ruins of 'Ozymandias': Reimagining Temporal Horizons in Felicia Hemans' 'The Image in Lava'
Authors: Lauren Schuldt Wilson
Abstract:
Felicia Hemans' memorializing of the unwritten lives of women, and the consequent allowance for marginalized voices to remember and be remembered, has been considered by many critics in terms of ekphrasis and elegy, terms which privilege the question of whether Hemans' poeticizing can represent lost voices of history or only her own poetic expression. Amy Gates, Brian Elliott, and others point out Hemans' acknowledgement of the self-projection necessary for imaginatively filling the absences of unrecorded histories. Yet few have examined the complex temporal positioning Hemans inscribes in these moments of self-projection and imaginative historicizing. In poems like 'The Image in Lava,' Hemans maps not only a lost past but also a lost potential future onto the image of a dead infant in its mother's arms, the discovery and consideration of which moves the imagined viewer to recover and incorporate the 'hope' encapsulated in the figure of the infant into a reevaluation of national time embodied by the 'relics / Left by the pomps of old.' By examining Hemans' acknowledgement of and response to Percy Bysshe Shelley's 'Ozymandias,' this essay explores how Hemans' depictions of imaginative historicizing open new horizons of possibility and reevaluate temporal value structures by imagining previously undiscovered or unexplored potentialities of the past. Where Shelley's poem mocks the futility of national power and time, this essay outlines Hemans' suggestion of alternative threads of identity and temporal meaning-making which, regardless of historical veracity, exist outside of and against the structures Shelley challenges. Counter to previous readings of Hemans' poem as a celebration of either recovered or poetically constructed maternal love, this essay argues that Hemans offers a meditation on sites of reproduction, both of personal reproductive futurity and of national reproduction of power. This meditation culminates in Hemans' gesturing towards a method of historicism by which the imagined viewer reinvigorates the sterile, 'shattered visage' of national time by forming temporal identity through the imagining of trans-historical hope inscribed on the infant body of the universal, individual subject rather than the broken monument of the king.
Keywords: futurity, national temporalities, reproduction, revisionary histories
Procedia PDF Downloads 169
1340 Configuring Resilience and Environmental Sustainability to Achieve Superior Performance under Differing Conditions of Transportation Disruptions
Authors: Henry Ataburo, Dominic Essuman, Emmanuel Kwabena Anin
Abstract:
Recent catastrophic events, such as the Covid-19 pandemic, the Suez Canal blockage, the Russia-Ukraine conflict, the Israel-Hamas conflict, and the climate change crisis, continue to devastate supply chains and the broader society. Prior authors have advocated the simultaneous pursuit of resilience and sustainability as crucial for navigating these challenges. Nevertheless, the relationship between resilience and sustainability is a rather complex one: they have variously been considered unrelated, substitutes, or complements. Scholars also suggest that different firms prioritize resilience and sustainability differently, for varied strategic reasons. However, we know little about whether, how, and when these choices produce different typologies of firms that explain differences in financial and market performance outcomes. This research draws on the systems configuration approach to organizational fit to contend that a taxonomy of firms may emerge based on how firms configure resilience and environmental sustainability. The study further examines the effects of these taxonomies on financial and market performance under differing transportation disruption conditions. Resilience is operationalized as a firm's ability to adjust current operations, structure, knowledge, and resources in response to disruptions, whereas environmental sustainability is operationalized as the extent to which a firm deploys resources judiciously and keeps the ecological impact of its operations to the barest minimum. Using primary data from 199 firms in Ghana and cluster analysis as an analytical tool, the study identifies four clusters of firms based on how they prioritize resilience and sustainability: Cluster 1 - "strong, moderate resilience, high sustainability firms," Cluster 2 - "high resilience, high sustainability firms," Cluster 3 - "high resilience, strong, moderate sustainability firms," and Cluster 4 - "weak, moderate resilience, strong, moderate sustainability firms". In addition, ANOVA and regression analysis revealed the following findings: only clusters 1 and 2 were significantly associated with both market and financial performance. Under high transportation disruption conditions, cluster 1 firms excel in market performance, whereas cluster 2 firms excel in financial performance. Conversely, under low transportation disruption conditions, cluster 1 firms excel in financial performance, whereas cluster 2 firms excel in market performance. The study provides theoretical and empirical evidence of how resilience and environmental sustainability can be configured to achieve specific performance objectives under different disruption conditions.
Keywords: resilience, environmental sustainability, developing economy, transportation disruption
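A minimal sketch of the clustering-plus-ANOVA workflow described above, assuming a k-means clustering on standardized resilience and sustainability scores; the abstract does not state which clustering algorithm or software was used, and the variable names and data file are invented.

# Illustrative sketch only: cluster firms on resilience/sustainability and test
# performance differences across clusters. Names and scales are assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from scipy.stats import f_oneway

firms = pd.read_csv("ghana_firms.csv")   # 199 firms: resilience, sustainability, performance measures

X = StandardScaler().fit_transform(firms[["resilience", "env_sustainability"]])
firms["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Does market performance differ across the four configuration clusters?
groups = [g["market_performance"].values for _, g in firms.groupby("cluster")]
f_stat, p_value = f_oneway(*groups)
print(f"ANOVA on market performance: F={f_stat:.2f}, p={p_value:.3g}")
print(firms.groupby("cluster")[["resilience", "env_sustainability"]].mean())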
Procedia PDF Downloads 72
1339 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning
Authors: Umamaheswari Shanmugam, Silvia Ronchi, Radu Vornicu
Abstract:
Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies that are able to use the large amount and variety of data generated during healthcare services every day. According to recent reports, over 500 machine learning or other artificial intelligence medical devices have now received FDA clearance or approval, the first ones even preceding the year 2000. One of the big advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to continuously improve their performance. Healthcare systems and institutions can benefit greatly because the use of advanced technologies improves both the efficiency and the efficacy of healthcare. Software defined as a medical device is stand-alone software that is intended to be used for patients for one or more of these specific medical intended uses: diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of a disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or managing the information received from in vitro specimens derived from the human body; and that does not achieve its principal intended action by pharmacological, immunological, or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and also to protect patients' safety. The evolution and continuous improvement of software used in healthcare must take into consideration the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers. Regulatory requirements can be considered a market barrier, as they can delay or obstruct device approval, but they are necessary to ensure performance, quality, and safety; at the same time, they can be a business opportunity if the manufacturer is able to define the appropriate regulatory strategy in advance. The abstract will provide an overview of the current regulatory framework, the evolution of international requirements, and the standards applicable to medical device software in potential markets all over the world.
Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510(k), PMA, IMDRF, cyber security, health care systems
Procedia PDF Downloads 93
1338 In silico Statistical Prediction Models for Identifying the Microbial Diversity and Interactions Due to Fixed Periodontal Appliances
Authors: Suganya Chandrababu, Dhundy Bastola
Abstract:
As in the gut, the subgingival microbiota plays a crucial role in oral hygiene, health, and cariogenic diseases. Human activities like diet, antibiotics, and periodontal treatments alter the bacterial communities, metabolism, and functions in the oral cavity, leading to a dysbiotic state and changes in the plaques of orthodontic patients. Fixed periodontal appliances hinder oral hygiene and cause changes in the dental plaques, influencing the subgingival microbiota. However, the diversity and complexity of the microbial species pose a great challenge to understanding the taxa's community distribution patterns and their role in oral health. In this research, we analyze subgingival microbial samples from individuals with fixed dental appliances (metal/clear) using an in silico approach. We employ exploratory, hypothesis-driven multivariate and regression analyses to shed light on the microbial community and its functional fluctuations due to the dental appliances used, and to identify risks associated with complex disease phenotypes. Our findings confirm changes in oral microbiota composition due to the presence and type of fixed orthodontic devices. We identified seven main periodontal pathogen groups, including Bacteroidetes, Actinobacteria, Proteobacteria, Fusobacteria, and Firmicutes, whose abundances were significantly altered due to the presence and type of fixed appliances used. In the case of metal braces, the abundances of Bacteroidetes, Proteobacteria, Fusobacteria, Candidatus Saccharibacteria, and Spirochaetes significantly increased, while the abundances of Firmicutes and Actinobacteria decreased. In individuals with clear braces, however, the abundances of Bacteroidetes and Candidatus Saccharibacteria increased. The highest abundance value (P-value = 0.004 < 0.05) was observed for Bacteroidetes in individuals with the metal appliance; this group is associated with gingivitis, periodontitis, endodontic infections, and odontogenic abscesses. Overall, bacterial abundances decrease with the clear type and increase with the metal type of braces. Regression analysis further validated the multivariate analysis of variance (MANOVA) results, supporting the hypothesis that the presence and type of fixed oral appliances significantly alter bacterial abundance and composition.
Keywords: oral microbiota, statistical analysis, fixed orthodontal appliances, bacterial abundance, multivariate analysis, regression analysis
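A minimal sketch of how a MANOVA of phylum-level abundances against appliance type could be set up with statsmodels; the column names and data file are assumptions, and the paper does not specify the software it used.

# Illustrative sketch only: MANOVA of assumed per-sample phylum abundances
# against appliance type (none / metal / clear).
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("subgingival_abundances.csv")
# Assumed columns: appliance ('none'/'metal'/'clear') and relative abundances
# of the main phyla per sample
formula = ("Bacteroidetes + Actinobacteria + Proteobacteria + Fusobacteria + "
           "Firmicutes + Spirochaetes ~ appliance")
fit = MANOVA.from_formula(formula, data=df)
print(fit.mv_test())   # Wilks' lambda, Pillai's trace, etc. per term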
Procedia PDF Downloads 197
1337 Investigation of Deep Eutectic Solvents for Microwave Assisted Extraction and Headspace Gas Chromatographic Determination of Hexanal in Fat-Rich Food
Authors: Birute Bugelyte, Ingrida Jurkute, Vida Vickackaite
Abstract:
The most complicated step in the determination of volatile compounds in complex matrices is the separation of the analytes from the matrix. Traditional analyte separation methods (liquid extraction, Soxhlet extraction) require a lot of time and labour; moreover, there is a risk of losing the volatile analytes. In recent years, headspace gas chromatography has been used to determine volatile compounds. To date, traditional extraction solvents have been used in headspace gas chromatography. As a rule, such solvents are rather volatile; therefore, a large amount of solvent vapour enters the headspace together with the analyte. Because of that, the determination sensitivity for the analyte is reduced, and a huge solvent peak in the chromatogram can overlap with the peaks of the analytes. The sensitivity is also limited by the fact that the sample cannot be heated above the boiling point of the solvent. In 2018, it was suggested that traditional headspace gas chromatographic solvents be replaced with non-volatile, eco-friendly, biodegradable, inexpensive, and easy-to-prepare deep eutectic solvents (DESs). Generally, deep eutectic solvents have low vapour pressure, a relatively wide liquid range, and a much lower melting point than any of their individual components. Those features make DESs very attractive as matrix media for application in headspace gas chromatography. Also, DESs are polar compounds, so they can be applied for microwave assisted extraction. The aim of this work was to investigate the possibility of applying deep eutectic solvents for microwave assisted extraction and headspace gas chromatographic determination of hexanal in fat-rich food. Hexanal is considered one of the most suitable indicators of the degree of lipid oxidation, as it is the main secondary oxidation product of linoleic acid, which is one of the principal fatty acids of many edible oils. Eight hydrophilic and hydrophobic deep eutectic solvents were synthesized, and the influence of temperature and microwaves on their headspace gas chromatographic behaviour was investigated. Using the most suitable DES, the microwave assisted extraction and headspace gas chromatographic conditions were optimized for the determination of hexanal in potato chips. Under the optimized conditions, the quality parameters of the developed technique were determined. The proposed technique was applied to the determination of hexanal in potato chips and other fat-rich food.
Keywords: deep eutectic solvents, headspace gas chromatography, hexanal, microwave assisted extraction
Procedia PDF Downloads 196
1336 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis
Authors: H. Jung, N. Kim, B. Kang, J. Choe
Abstract:
History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to the uncertainties of the initial reservoir models. Therefore, it is important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to identify the main geological characteristics of the models. In this procedure, the permeability values of each model are transformed into new parameters by the principal components, which have eigenvalues of large magnitude. Secondly, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models which show the most similar or dissimilar well oil production rates (WOPR) relative to the true values (10% for each). Then, the other 80% of models are classified by the trained SVM, and we select the models on the side of low WOPR errors. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models which have a geological trend similar to that of the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. Newly generated models preserve the correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results, as it fails to identify the correct geological features of the true model. However, history matching with the regenerated ensemble offers reliable characterization results by capturing the proper channel trend. Furthermore, it gives dependable predictions of future performance with reduced uncertainties. We propose a novel classification scheme which integrates PCA, MDS, and SVM for regenerating reservoir models. The scheme can easily sort out reliable models which have a channel trend similar to the reference in the reduced-dimension space.
Keywords: history matching, principal component analysis, reservoir modelling, support vector machine
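A compact sketch of the PCA, MDS, and SVM selection loop described above, written with scikit-learn. The array shapes, the WOPR error metric, and the 10%/10% labelling rule are simplified assumptions rather than the authors' exact implementation.

# Illustrative sketch only: select channel-reservoir models whose WOPR behaviour
# resembles the reference. Inputs and the error metric are assumed.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.svm import SVC

# perm: (n_models, n_cells) flattened permeability fields of the ensemble
# wopr_err: (n_models,) mismatch between each model's WOPR and the observed WOPR
perm = np.load("ensemble_permeability.npy")
wopr_err = np.load("wopr_error.npy")

# 1) Reduce each model to its dominant geological features
scores = PCA(n_components=10).fit_transform(perm)

# 2) Project the PCA scores onto a 2-D plane via Euclidean-distance MDS
xy = MDS(n_components=2, random_state=0).fit_transform(scores)

# 3) Label the 10% best and 10% worst models by WOPR error and train an SVM
order = np.argsort(wopr_err)
n10 = len(order) // 10
train_idx = np.concatenate([order[:n10], order[-n10:]])
labels = np.array([1] * n10 + [0] * n10)          # 1 = similar to the truth
clf = SVC(kernel="rbf").fit(xy[train_idx], labels)

# 4) Classify the remaining 80% and keep those predicted "similar"
rest = order[n10:-n10]
selected = rest[clf.predict(xy[rest]) == 1]
prob_map = perm[selected].mean(axis=0)            # probability map for regeneration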
Procedia PDF Downloads 161
1335 Second Time's a Charm: The Intervention of the European Patent Office on the Strategic Use of Divisional Applications
Authors: Alissa Lefebre
Abstract:
It might seem intuitive to hope for a fast decision on the patent grant. After all, a granted patent provides you with a monopoly position, which allows you to stop others from using your technology. However, this does not take into account the strategic advantages one can obtain from keeping patent applications pending. First, you have the financial advantage of postponing certain fees, although many applicants would probably agree that this is not the main benefit. As the scope of the patent protection is only decided upon at grant, the pendency period introduces uncertainty amongst rivals. This uncertainty entails not knowing whether the patent will actually be granted and what the scope of protection will be. Consequently, rivals can only rely upon limited and uncertain information when deciding what technology is worth pursuing. One way to keep patent applications pending is the use of divisional applications. These applications can be filed out of a parent application as long as that parent application is still pending. This allows the applicant to pursue (part of) the content of the parent application in another application, as the divisional application cannot exceed the scope of the parent application. In a fast-moving and complex market such as tele- and digital communications, this might allow applicants to obtain an actual monopoly position, as competitors are discouraged from pursuing a certain technology. Nevertheless, this practice also has downsides. First of all, it has an impact on the workload of the examiners at the patent office. As the number of patent filings has been increasing over the last decades, using strategies that increase this number even more is not desirable from the patent examiners' point of view. Secondly, a pending patent does not provide the protection of a granted patent, thus creating uncertainty not only for rivals but also for the applicant. Consequently, the European Patent Office (EPO) has come up with a 'raising the bar' initiative in which it decided to tackle the strategic use of divisional applications. Over the past years, two rules have been implemented. The first rule, introduced in 2010, imposed a time limit under which divisional applications could only be filed within 24 months of the first communication from the patent office. However, after carrying out a user feedback survey, the EPO abolished the rule again in 2014 and replaced it with a fee mechanism. The fee mechanism is still in place today, which might be an indication of a better result compared to the first rule change. This study tests the impact of these rules on the strategic use of divisional applications in the tele- and digital communication industry and provides empirical evidence on their success. Using three different survival models, we find overall evidence that divisional applications prolong the pendency time and that only the second rule is able to curb the strategic patenting and thus decrease the pendency time.
Keywords: divisional applications, regulatory changes, strategic patenting, EPO
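For readers unfamiliar with survival modelling of pendency, the sketch below fits a Cox proportional-hazards model of time-to-grant with the divisional flag and the two EPO rule regimes as covariates. The abstract does not name the three survival models actually used; lifelines and these column names are assumptions for illustration.

# Illustrative sketch only: a Cox model of pendency. Data layout is assumed.
import pandas as pd
from lifelines import CoxPHFitter

apps = pd.read_csv("epo_applications.csv")
# Assumed columns:
#   pendency_months  - time from filing to grant (or to censoring)
#   granted          - 1 if granted, 0 if still pending / withdrawn (censored)
#   is_divisional    - 1 if filed as a divisional application
#   rule_2010, rule_2014 - 1 if the application falls under that regime

cph = CoxPHFitter()
cph.fit(
    apps[["pendency_months", "granted", "is_divisional", "rule_2010", "rule_2014"]],
    duration_col="pendency_months",
    event_col="granted",
)
cph.print_summary()   # hazard ratio < 1 for is_divisional would indicate longer pendency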
Procedia PDF Downloads 133
1334 The Influence of the Soil in the Vegetation of the Luki Biosphere Reserve in the Democratic Republic of Congo
Authors: Sarah Okende
Abstract:
It is universally recognized that the forests of the Congo Basin remain a common good and a complex, yet still insufficiently known, ecosystem. Historically and throughout the world, forests have been valued for the multiple products and benefits they provide. In addition to their major role in the conservation of global biodiversity and in the fight against climate change, these forests also play an essential role in regional and global ecology. This is particularly the case of the Luki Biosphere Reserve, a highly diversified evergreen Guinean-Congolese rainforest. Despite efforts toward the sustainable management of the reserve, the role played by the soil in shaping its vegetation does not seem to attract much interest from the general public or even from scientists. The Luki Biosphere Reserve is located in the west of the DRC, more precisely in the south-east of Mayombe Congolais, in the province of Bas-Congo. The vegetation of the Luki Biosphere Reserve is very heterogeneous and diversified. It ranges from grassy formations to semi-evergreen dense humid forests, passing through edaphic formations on hydromorphic soils (aquatic and semi-aquatic vegetation; messicole and segetal vegetation; gascaricole vegetation; young secondary forests with Musanga cecropioides, Xylopia aethiopica, Corynanthe paniculata; mature secondary forests with Terminalia superba and Hymenostegia floribunda; primary forest with Prioria balsamifera; climax forests with Gilbertiodendron dewevrei and Gilletiodendron kisantuense). Field observations and a review of previous and up-to-date work carried out in the Luki Biosphere Reserve are the methodological approaches of this study, whose aim is to show the impact of soil types in determining the varieties of vegetation. The results obtained show that the four soil types present (purplish red soils, developing on amphibolites; red soils, developed on gneisses; yellow soils occurring on gneisses and quartzites; and alluvial soils, developed on recent alluvium) have, alongside other environmental factors, a major influence on the determination of the different facies of the vegetation of the Luki Biosphere Reserve. In conclusion, the Luki Biosphere Reserve is characterized by a wide variety of biotopes determined by the nature of the soil, the relief, the microclimates, the action of man, and the hydrography. Overall management (soil, biodiversity) of the Luki Biosphere Reserve is important for maintaining the ecological balance. Keywords: soil, biodiversity, forest, Luki, rainforest
Procedia PDF Downloads 84
1333 Analysis of Lift Force in Hydrodynamic Transport of a Finite Sized Particle in Inertial Microfluidics with a Rectangular Microchannel
Authors: Xinghui Wu, Chun Yang
Abstract:
Inertial microfluidics is a competitive fluidic method with applications in the separation of particles, cells, and bacteria. In contrast to traditional microfluidic devices operating at low Reynolds number, inertial microfluidics works in the intermediate Reynolds number range, which brings about several intriguing inertial effects on particle separation/focusing and helps meet real-world throughput requirements. Geometric modifications that give channels irregular shapes can leverage fluid inertia to create complex secondary flows that adjust the particle equilibrium positions and thus enhance the separation resolution and throughput. Although inertial microfluidics has been extensively studied by experiments, our current understanding of its mechanisms is poor, making it extremely difficult to build rational design guidelines for the particle focusing locations, especially for irregularly shaped microfluidic channels. Inertial particle microfluidics in irregularly shaped channels was investigated in our group. There are several fundamental issues that we need to address. One of them is the balance between the inertial lift forces and the secondary-flow drag forces. Also, it is critical to quantitatively describe the dependence of the lift forces on particle-particle interactions in irregularly shaped channels, such as a rectangular one. To provide physical insights into inertial microfluidics in channels of irregular shapes, in this work the immersed boundary-lattice Boltzmann method (IB-LBM) was introduced and validated to explore the transport characteristics and the underlying mechanisms of a single inertially focusing particle in a rectangular microchannel. The transport dynamics of a finite-sized particle were investigated over wide ranges of Reynolds number (20 < Re < 500) and particle size. The results show that the inner equilibrium positions are less likely to occur in the rectangular channel, which can be explained by the secondary flow caused by the presence of a finite-sized particle. Furthermore, force decoupling analysis was utilized to study the effect of each type of lift force on the inertial migration, and a theoretical model for the lateral lift force of a finite-sized particle in the rectangular channel was established. Such a theoretical model can be used to provide theoretical guidance for the design and operation of inertial microfluidic devices. Keywords: inertial microfluidics, particle focusing, lift force, IB-LBM
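As a rough, illustrative complement to the IB-LBM results, the sketch below compares the classic inertial-lift scaling with the Stokes drag exerted by a secondary flow; the lift coefficient, channel dimensions, and velocity scales are assumed order-of-magnitude values, not outputs of the simulations or the theoretical model reported here.

```python
# Back-of-the-envelope scaling (not the IB-LBM simulation) comparing the net inertial lift
# with the Stokes drag from a secondary flow on a particle in a rectangular channel.
# All numerical values are assumptions for illustration.
import math

rho   = 1000.0        # fluid density, kg/m^3
mu    = 1.0e-3        # dynamic viscosity, Pa.s
H     = 80e-6         # characteristic channel dimension, m
a     = 10e-6         # particle diameter, m
U_m   = 0.5           # maximum channel velocity, m/s
f_L   = 0.05          # dimensionless lift coefficient (order-of-magnitude assumption)
U_sec = 0.01 * U_m    # assumed secondary-flow velocity scale

Re = rho * U_m * H / mu                      # channel Reynolds number
F_lift = f_L * rho * U_m**2 * a**4 / H**2    # classic inertial-lift scaling
F_drag = 3 * math.pi * mu * a * U_sec        # Stokes drag from the secondary flow

print(f"Re = {Re:.0f}")
print(f"inertial lift ~ {F_lift:.2e} N, secondary-flow drag ~ {F_drag:.2e} N")
# Particle equilibrium positions emerge where these competing contributions balance.
```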
Procedia PDF Downloads 73
1332 An Experimental Study of Scalar Implicature Processing in Chinese
Authors: Liu Si, Wang Chunmei, Liu Huangmei
Abstract:
A prominent component of the semantic versus pragmatic debate, scalar implicature (SI) has been gaining great attention ever since it was proposed by Horn. The constant debate is between the structural and the pragmatic approach. The former claims that the generation of SI is costless, automatic, and dependent mostly on the structural properties of sentences, whereas the latter holds both that such generation is largely dependent upon context and that the process is costly. Many experiments, among which Katsos’s text comprehension experiments are influential, have been designed and conducted in order to verify these views, but the results are not conclusive. Besides, most of the experiments were conducted with English-language materials. Katsos conducted one off-line and three on-line text comprehension experiments, in which the previous shortcomings were addressed to a certain extent and the conclusion was in favor of the pragmatic approach. We intend to test the results of Katsos’s experiments on scalar implicature in Chinese. Four experiments in both off-line and on-line conditions will be conducted to examine the generation and response time of SI for Chinese "yixie" (some) and "quanbu (dou)" (all), in order to find out whether the structural or the pragmatic approach can be sustained. The study mainly aims to answer the following questions: (1) Can SI be generated in the upper- and lower-bound contexts, as Katsos confirmed, when Chinese language materials are used in the experiment? (2) Can SI be first generated and then cancelled, as the default view claims, or is it not generated at all in a neutral context when Chinese language materials are used in the experiment? (3) Is SI generation costless or costly in terms of processing resources? (4) In line with the SI generation process, what conclusion can be made about the cognitive processing model of language meaning? Is it a parallel model or a linear model? Or is it a dynamic and hierarchical model? According to previous theoretical debates and experimental conflicts, it can be presumed that SI in Chinese might be generated in the upper-bound contexts. Besides, the response time might be shorter in the upper-bound context than in the lower-bound context, and SI generation in the neutral context might be the slowest. Finally, the conclusion would be that the processing of SI cannot be captured by either a purely structural or a purely pragmatic approach. It is, rather, a dynamic and complex processing mechanism, in which language forms, ad hoc context, mental context, background knowledge, speakers’ interaction, etc. all interact. Keywords: cognitive linguistics, pragmatics, scalar implicature, experimental study, Chinese language
Procedia PDF Downloads 364
1331 The Removal of Common Used Pesticides from Wastewater Using Golden Activated Charcoal
Authors: Saad Mohamed Elsaid Onaizah
Abstract:
One of the reasons for the intensive use of pesticides is to protect agricultural crops and orchards from pests and agricultural worms. The period of time that pesticides stay in the soil is estimated at about 2 to 12 weeks. Perhaps the most important cause of groundwater pollution is the easy leakage of these harmful pesticides from the soil into the aquifers. This research aims to find the best ways to use activated charcoal treated with gold nitrate solution for the purpose of removing deadly pesticides from aqueous solution by adsorption. The pesticides most used in Egypt were selected, namely Malathion, Methomyl, Abamectin, and Thiamethoxam. Activated charcoal doped with gold ions was prepared by applying chemical and thermal treatments to activated charcoal using gold nitrate solution. Adsorption of the studied pesticides onto the activated carbon/Au was mainly chemical adsorption, forming complexes with the gold immobilised on the activated carbon surface. The gold atom was also considered a catalyst for cracking the pesticide molecule. Gold-activated charcoal is a low-cost material because very low concentrations of gold nitrate solution are used. A great ability of the activated charcoal to remove the selected pesticides was observed, owing to the positive charge of the gold ion in addition to other active groups such as oxygen functional groups and lignocellulose. The presence of pores of different sizes on the surface of the activated charcoal is the driving force for the good adsorption efficiency in removing the pesticides under study. The surface area of the prepared charcoal as well as its active groups were characterized using infrared spectroscopy and scanning electron microscopy. Several factors affecting the adsorption capacity of the activated charcoal were examined, such as the weight of the charcoal, the concentration of the pesticide solution, the contact time, and the pH. Batch adsorption experiments showed that maximum adsorption of the selected insecticides was reached at a contact time of 80 minutes and a pH of 7.70. These promising results were supported by equilibrium, kinetic, and thermodynamic studies of the effects of the various operating factors; applying the Langmuir model to the adsorbent showed adsorption capacities higher than those of most other adsorbents. Keywords: wastewater, pesticides pollution, adsorption, activated carbon
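For illustration, batch equilibrium data of this kind are commonly fitted to the Langmuir isotherm, q_e = q_max·K_L·C_e / (1 + K_L·C_e), as sketched below; the concentration and uptake values are invented placeholders, not the measurements reported in this study.

```python
# Minimal sketch of fitting batch-adsorption equilibrium data to the Langmuir isotherm.
# The data points below are placeholders, not the values measured in this work.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, q_max, K_L):
    """Langmuir isotherm: adsorbed amount as a function of equilibrium concentration."""
    return q_max * K_L * Ce / (1.0 + K_L * Ce)

Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])    # equilibrium pesticide concentration, mg/L (illustrative)
qe = np.array([8.1, 15.2, 22.0, 27.5, 30.3])   # adsorbed amount, mg/g (illustrative)

(q_max, K_L), _ = curve_fit(langmuir, Ce, qe, p0=[30.0, 0.1])
ss_res = np.sum((qe - langmuir(Ce, q_max, K_L)) ** 2)
ss_tot = np.sum((qe - qe.mean()) ** 2)
print(f"q_max = {q_max:.1f} mg/g, K_L = {K_L:.3f} L/mg, R^2 = {1 - ss_res/ss_tot:.3f}")
```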
Procedia PDF Downloads 83
1330 The Impact of Task Type and Group Size on Dialogue Argumentation between Students
Authors: Nadia Soledad Peralta
Abstract:
Within the framework of socio-cognitive interaction, argumentation is understood as a psychological process that supports and induces reasoning and learning. Most authors emphasize the great potential of argumentation for negotiating contradictions and complex decisions, so argumentation is a focus for researchers who highlight the importance of social and cognitive processes in learning. In the context of social interaction among university students, different types of arguments are analyzed according to group size (dyads and triads) and type of task (reading of frequency tables, causal explanation of physical phenomena, decision-making regarding moral dilemma situations, and causal explanation of social phenomena). Eighty-nine first-year social sciences students of the National University of Rosario participated. Two groups were formed from the results of a pre-test that ensured the heterogeneity of points of view between participants. Group 1 consisted of 56 participants (working in dyads, 28 in total), and Group 2 consisted of 33 participants (working in triads, 11 in total). A quasi-experimental design was used in which the effects of the two variables (group size and type of task) on argumentation were analyzed. Three types of argumentation are described: authentic dialogical argumentative resolutions, individualistic argumentative resolutions, and non-argumentative resolutions. The results indicate that individualistic arguments prevail in dyads; that is, although people express their own arguments, there is no authentic argumentative interaction, and there are consequently few reciprocal evaluations and counter-arguments in dyads. By contrast, authentically dialogical argumentation prevails in triads, showing constant feedback between participants’ points of view. It was observed that, in general, the type of task generates specific types of argumentative interactions. In particular, authentically dialogical arguments predominate in the logical tasks, whereas individualistic or pseudo-dialogical ones are more frequent in opinion tasks. Nevertheless, these relationships between task type and argumentative mode are best clarified in an interactive analysis based on group size. Finally, it is important to stress the value of dialogical argumentation in educational domains: the argumentative function not only allows metacognitive reflection on one’s own point of view but also allows people to benefit from exchanging points of view in interactive contexts. Keywords: socio-cognitive interaction, argumentation, university students, group size
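A minimal sketch of how the dependence of argumentation type on group size could be tested is given below, using a chi-square test of independence on the frequency of each resolution type; the counts are hypothetical and do not come from this study.

```python
# Illustrative sketch of testing whether argumentation type depends on group size with a
# chi-square test of independence. The counts are made up for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

# rows: dyads, triads; columns: dialogical, individualistic, non-argumentative resolutions
counts = np.array([[30, 80, 25],
                   [55, 20, 12]])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p-value would indicate that the distribution of argumentation types differs
# between dyads and triads; the same test could be run per task type.
```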
Procedia PDF Downloads 85
1329 Laser Powder Bed Fusion Awareness for Engineering Students in France and Qatar
Authors: Hiba Naccache, Rima Hleiss
Abstract:
Additive manufacturing (AM), or 3D printing, is one of the pillars of Industry 4.0. Compared to traditional manufacturing, AM provides a prototype before production in order to optimize the design and avoid holding stock, and it uses only the strictly necessary material, which can be recyclable, favouring local production and saving money, time, and resources. Different types of AM exist, and it has a broad range of applications across several industries such as aerospace, automotive, medicine, and education. Laser Powder Bed Fusion (LPBF) is a metal AM technique that uses a laser to melt metal powder, layer by layer, to build a three-dimensional (3D) object. In Industry 4.0, and in line with Goals 9 (Industry, Innovation and Infrastructure) and 12 (Responsible Consumption and Production) of the Sustainable Development Goals of the UN 2030 Agenda, AM manufacturers are committed to minimizing the environmental impact by producing sustainably. LPBF has several environmental advantages, such as reduced waste production, lower energy consumption, and greater flexibility in creating lightweight components with complex geometries. However, LPBF also has environmental drawbacks, such as energy consumption, gas consumption, and emissions. It is critical to recognize the environmental impacts of LPBF in order to mitigate them. To increase awareness and promote sustainable practices regarding LPBF, the researchers draw on the Elaboration Likelihood Model (ELM), according to which people, in this case from multiple universities in France and Qatar, process information in two ways: peripherally and centrally. Peripheral campaigns use superficial cues to get attention, while central campaigns provide clear and concise information. The authors created a seminar including a video showing LPBF production and a website with educational resources. Data are collected using a questionnaire that tests attitudes and public awareness before and after the seminar. The results reflected a great shift in awareness of LPBF and its impact on the environment. To the best of our knowledge, in the absence of similar research, this study will add to the literature on the sustainability of the LPBF production technique. Keywords: additive manufacturing, laser powder bed fusion, elaboration likelihood model theory, sustainable development goals, education-awareness, France, Qatar, specific energy consumption, environmental impact, lightweight components
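As a hedged illustration, a before/after shift in questionnaire scores of the kind reported here could be tested with a paired t-test as sketched below; the sample size, response scale, and scores are synthetic assumptions, not the data collected in France and Qatar.

```python
# Sketch (not the study's analysis) of comparing pre- and post-seminar awareness scores
# with a paired t-test. The scores below are synthetic placeholders.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
pre  = rng.normal(3.0, 0.6, size=60)          # pre-seminar awareness scores (assumed 1-5 Likert scale)
post = pre + rng.normal(0.8, 0.5, size=60)    # post-seminar scores, shifted upward for illustration

t, p = ttest_rel(post, pre)
print(f"mean change = {np.mean(post - pre):.2f}, t = {t:.2f}, p = {p:.4f}")
# A significant positive shift would be consistent with increased LPBF awareness after the seminar.
```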
Procedia PDF Downloads 91
1328 Adequacy of Antenatal Care and Its Relationship with Low Birth Weight in Botucatu, São Paulo, Brazil: A Case-Control Study
Authors: Cátia Regina Branco da Fonseca, Maria Wany Louzada Strufaldi, Lídia Raquel de Carvalho, Rosana Fiorini Puccini
Abstract:
Background: Birth weight reflects gestational conditions and development during the fetal period. Low birth weight (LBW) may be associated with antenatal care (ANC) adequacy and quality. The purpose of this study was to analyze ANC adequacy and its relationship with LBW in the Unified Health System in Brazil. Methods: A case-control study was conducted in Botucatu, São Paulo, Brazil, 2004 to 2008. Data were collected from secondary sources (the Live Birth Certificate), and primary sources (the official medical records of pregnant women). The study population consisted of two groups, each with 860 newborns. The case group comprised newborns weighing less than 2,500 grams, while the control group comprised live newborns weighing greater than or equal to 2,500 grams. Adequacy of ANC was evaluated according to three measurements: 1. Adequacy of the number of ANC visits adjusted to gestational age; 2. Modified Kessner Index; and 3. Adequacy of ANC laboratory studies and exams summary measure according to parameters defined by the Ministry of Health in the Program for Prenatal and Birth Care Humanization. Results: Analyses revealed that LBW was associated with the number of ANC visits adjusted to gestational age (OR = 1.78, 95% CI 1.32-2.34) and the ANC laboratory studies and exams summary measure (OR = 4.13, 95% CI 1.36-12.51). According to the modified Kessner Index, 64.4% of antenatal visits in the LBW group were adequate, with no differences between groups. Conclusions: Our data corroborate the association between inadequate number of ANC visits, laboratory studies and exams, and increased risk of LBW newborns. No association was found between the modified Kessner Index as a measure of adequacy of ANC and LBW. This finding reveals the low indices of coverage for basic actions already well regulated in the Health System in Brazil. Despite the association found in the study, we cannot conclude that LBW would be prevented only by an adequate ANC, as LBW is associated with factors of complex and multifactorial etiology. The results could be used to plan monitoring measures and evaluate programs of health care assistance during pregnancy, at delivery and to newborns, focusing on reduced LBW rates.Keywords: low birth weight, antenatal care, prenatal care, adequacy of health care, health evaluation, public health system
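For illustration, odds ratios with 95% confidence intervals of the kind reported above can be computed from a 2x2 exposure-outcome table as sketched below; the counts are invented for the example and are not the study's data, and the study's estimates may additionally come from adjusted models.

```python
# Minimal sketch of an unadjusted odds ratio and Wald 95% confidence interval from a 2x2
# table in a case-control design. All counts are illustrative assumptions.
import math

# rows: LBW cases / controls; columns: inadequate ANC visits, adequate ANC visits
a, b = 240, 620    # cases: inadequate, adequate (assumed counts)
c, d = 150, 710    # controls: inadequate, adequate (assumed counts)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```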
Procedia PDF Downloads 433
1327 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)
Authors: Ahmed E. Hodaib, Mohamed A. Hashem
Abstract:
In engineering applications, a design has to be as close to perfect as possible for a defined case. The designer has to overcome many challenges in order to reach the optimal solution to a specific problem. This process is called optimization. Generally, there is always a function called the “objective function” that is required to be maximized or minimized by choosing input parameters called “degrees of freedom” within an allowed domain called the “search space” and computing the values of the objective function for these input values. The problem becomes more complex when we have more than one objective for our design. As an example of a Multi-Objective Optimization Problem (MOP), consider a structural design that aims to minimize weight and maximize strength. In such a case, the Pareto Optimal Frontier (POF) is used, which is a curve plotting the two objective functions for the best cases. At this point, the designer has to make a decision and choose a point on the curve. Engineers use algorithms or iterative methods for optimization. In this paper, we discuss Evolutionary Algorithms (EA), which are widely used for multi-objective optimization problems due to their robustness, simplicity, and suitability for coupling and parallelization. Evolutionary algorithms are developed to guarantee convergence to an optimal solution. An EA uses mechanisms inspired by Darwinian evolution principles. Technically, they belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a stochastic character. The optimization is initialized by picking random solutions from the search space, and the solutions then progress towards the optimal point by using operators such as selection, combination, crossover, and/or mutation. These operators are applied to the old solutions (“parents”) so that new sets of design variables (“children”) appear. The process is repeated until the optimal solution to the problem is reached. Reliable and robust computational fluid dynamics solvers are nowadays commonly utilized in the design and analysis of various engineering systems, such as aircraft, turbo-machinery, and automobiles. Coupling Computational Fluid Dynamics (CFD) and Multi-Objective Evolutionary Algorithms (MOEA) has become substantial in aerospace engineering applications, such as aerodynamic shape optimization and advanced turbo-machinery design. Keywords: mathematical optimization, multi-objective evolutionary algorithms "MOEA", computational fluid dynamics "CFD", aerodynamic shape optimization
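A minimal, illustrative sketch of such an evolutionary loop is given below for a cheap two-objective toy problem standing in for an expensive CFD evaluation; the objective functions, operators, and parameters are assumptions for demonstration and do not represent any particular MOEA such as NSGA-II.

```python
# Toy sketch of a multi-objective evolutionary loop: evaluate two conflicting objectives,
# keep the Pareto-nondominated designs, and produce children by crossover and mutation.
import numpy as np

rng = np.random.default_rng(0)

def objectives(x):
    """Two conflicting objectives to minimize (a cheap stand-in for a CFD solver)."""
    f1 = x[0]
    f2 = 1.0 + x[1] - np.sqrt(x[0])   # simple trade-off between f1 and f2
    return np.array([f1, f2])

def nondominated(F):
    """Return indices of Pareto-nondominated rows of the objective matrix F (minimization)."""
    keep = []
    for i, fi in enumerate(F):
        dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                        for j, fj in enumerate(F) if j != i)
        if not dominated:
            keep.append(i)
    return np.array(keep)

pop = rng.random((40, 2))                             # 40 designs, 2 degrees of freedom in [0, 1]
for gen in range(50):
    F = np.array([objectives(x) for x in pop])
    elite = pop[nondominated(F)]                      # selection: keep the current Pareto front
    parents = elite[rng.integers(len(elite), size=(len(pop), 2))]
    children = parents.mean(axis=1)                   # crossover: arithmetic blend of two parents
    children += rng.normal(0, 0.05, children.shape)   # mutation: small random perturbation
    pop = np.clip(children, 0.0, 1.0)

F = np.array([objectives(x) for x in pop])
front = F[nondominated(F)]
print("approximate Pareto front (f1, f2):")
print(np.round(front[np.argsort(front[:, 0])], 3))
```

The printed front approximates the trade-off curve from which, as noted above, the designer must still pick a single operating point.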
Procedia PDF Downloads 258
1326 Emoji, the Language of the Future: An Analysis of the Usage and Understanding of Emoji across User-Groups
Authors: Sakshi Bhalla
Abstract:
On the one hand, given their seemingly simplistic, near-universal usage and understanding, emoji are dismissed as a potential step back in the evolution of communication. On the other, their effectiveness, pervasiveness, and adaptability across and within contexts are undeniable. In this study, the responses of 40 people (categorized by age) were recorded through a uniform two-part questionnaire in which they were required to a) identify the meaning of 15 emoji placed in isolation, and b) interpret the meaning of the same 15 emoji placed in a context-defining posting on Twitter. Their responses were studied on the basis of deviation both from their own readings of the emoji in isolation and from the originally intended meaning ascribed to the emoji. Based on an analysis of these results, it was found that each of the five age categories uses, understands, and perceives emoji differently, which could be attributed to the degree of exposure each has undergone. For example, in the case of the youngest category (aged < 20), it was observed that they were the least accurate at correctly identifying emoji in isolation (~55%). Further, their proclivity to change their response with respect to the context was also the lowest (~31%). However, an analysis of their individual responses showed that these first-borns of social media seem to have reached a point where emoji no longer evoke only their most literal meanings. The meaning and implication of these emoji have evolved to carry their context-derived meanings even when placed in isolation. These trends carry forward meaningfully for the other four groups as well. In the case of the oldest category (aged > 35), however, the trends indicated lower accuracy and, therefore, a greater proclivity to change their responses. When studied as a continuum, the responses indicate that, slowly and steadily, emoji are evolving from pictograms to ideograms. That is to suggest that they do not just encode a one-to-one relation between a singular form and a singular meaning; in fact, they communicate increasingly complicated ideas. This is much like the evolution of ancient hieroglyphics on papyrus reed or cuneiform on Sumerian clay tablets, which evolved from simple pictograms to progressively more complex ideograms. This evolution is parallel to, and contingent on, the simultaneous evolution of communication itself, and what is astounding is the capacity of humans to leverage different platforms to facilitate such changes. Twitterese, as it is now called, is one of the instances where language is adapting to the demands of the digital world. That it has no spoken component or ostensible grammar and lacks standardization of use and meaning, as some might suggest, may seem like an impediment to qualifying it as the ‘language’ of the digital world. However, that kind of declaration remains a function of time, and time alone. Keywords: communication, emoji, language, Twitter
Procedia PDF Downloads 97