Search results for: Lucas Nascimento
Paper Count: 129


39 A Retrospective Analysis of the Impact of the Choosing Wisely Canada Campaign on Emergency Department Imaging Utilization for Head Injuries

Authors: Sameer Masood, Lucas Chartier

Abstract:

Head injuries are a commonly encountered presentation in emergency departments (ED), and the Choosing Wisely Canada (CWC) campaign was released in June 2015 in an attempt to decrease imaging utilization for patients with minor head injuries. The impact of the CWC campaign on imaging utilization for head injuries has not been explored in the ED setting. In our study, we describe the characteristics of patients with head injuries presenting to a tertiary care academic ED and the impact of the CWC campaign on CT head utilization. This retrospective cohort study used linked databases from the province of Ontario, Canada to assess emergency department visits with a primary diagnosis of head injury made between June 1, 2014 and Aug 31, 2016 at the University Health Network in Toronto, Canada. We examined the number of visits during the study period, the proportion of patients that had a CT head performed before and after the release of the CWC campaign, as well as mode of arrival and disposition. There were 4,322 qualifying visits at our site during the study period. The median presenting age was 44.12 years (IQR 27.83, 67.45), the median GCS was 15 (IQR 15, 15), and the majority of patients presenting had intermediate acuity (CTAS 3). Overall, 43.17% of patients arrived via ambulance, 49.24% of patients received a CT head, and 10.46% of patients were admitted. Compared to patients presenting before the CWC campaign release, there was no significant difference in the rate of CT head imaging after the CWC release (50.41% vs 47.68%, P = 0.07). There were also no significant differences between the two groups in mode of arrival (ambulance vs ambulatory) (42.94% vs 43.48%, P = 0.72) or admission rates (9.85% vs 11.26%, P = 0.15). However, more patients belonged to the high acuity groups (CTAS 1 or 2) in the post-CWC campaign release group (12.98% vs 8.11%, P < 0.001). Visits for head injuries make up a significant proportion of total ED visits, and approximately half of these patients receive CT imaging in the ED. The CWC campaign did not seem to impact imaging utilization for head injuries in the 14 months following its launch. Further efforts, including local quality improvement initiatives, are likely needed to increase adherence to its recommendation and reduce imaging utilization for head injuries.
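As an illustration of the kind of pre/post comparison of proportions reported above, a minimal sketch is given below. The per-period counts are hypothetical placeholders (the abstract reports only the pooled sample size and the two CT rates), and scipy's chi-square test of independence stands in for whatever test the authors actually used.

```python
# Minimal sketch of the pre/post proportion comparison reported above.
# The per-period counts are hypothetical; the abstract reports only the pooled
# sample size (4,322 visits) and the two CT rates (50.41% vs 47.68%).
from scipy.stats import chi2_contingency

observed = [[1100, 1082],   # hypothetical pre-CWC counts: CT done, no CT (~50.4%)
            [1020, 1120]]   # hypothetical post-CWC counts: CT done, no CT (~47.7%)

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")   # p > 0.05: no significant change
```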

Keywords: choosing wisely, emergency department, head injury, quality improvement

Procedia PDF Downloads 225
38 Changing Colours and Odours: Exploring Cues Used by Insect Pollinators in Two Brassicaceous Plants

Authors: Katherine Y. Barragan-Fonseca, Joop J. A. Van Loon, Marcel Dicke, Dani Lucas-Barbosa

Abstract:

Flowering plants use different traits to attract pollinators, which indicate flower location and reward quality. Visual and olfactory cues are among the most important floral traits exploited by pollinating insects. Pollination can alter physical and chemical cues of flowers, which can subsequently influence the behaviour of flower visitors. We investigated the main cues exploited by the syrphid fly Episyrphus balteatus and the butterfly Pieris brassicae when visiting flowers of Brassica nigra and Raphanus sativus plants. We studied post-pollination changes and their effects on the behaviour of flower visitors and flower volatile emission. Preference of pollinators was investigated by offering visual and olfactory cues simultaneously as well as separately in two-choice bioassays. We also assessed whether pollen is used as a cue by pollinating insects. In addition, we studied whether behavioural responses could be correlated with changes in plant volatile emission, by collecting volatiles from the flower headspace. P. brassicae and E. balteatus did not use pollen as a cue in either of the two plant species studied. Interestingly, pollinators showed a strong bias for visual cues over olfactory cues when exposed to B. nigra plants. Flower visits by pollinators were influenced by post-pollination changes in B. nigra. In contrast, plant responses to pollination did not influence pollinator preference for R. sativus flowers. These results correlate well with the floral volatile emission of B. nigra and R. sativus; pollination influenced the volatile profile of B. nigra flowers but not that of R. sativus. Collectively, our data show that different pollinators exploit different visual and olfactory traits when searching for nectar or pollen of flowers of two closely related plant species. Although the syrphid fly consumes mostly pollen from brassicaceous flowers, it cannot detect pollen from a distance and likely associates other flower traits with the quantity and quality of pollen.

Keywords: plant volatiles, pollinators, post-pollination changes, visual and odour cues

Procedia PDF Downloads 161
37 Jointly Optimal Statistical Process Control and Maintenance Policy for Deteriorating Processes

Authors: Lucas Paganin, Viliam Makis

Abstract:

With the advent of globalization, market competition has become a major issue for most companies. One of the main strategies to overcome this situation is the quality improvement of the product at a lower cost to meet customers’ expectations. In order to achieve the desired quality of products, it is important to control the process to meet the specifications, and to implement the optimal maintenance policy for the machines and the production lines. Thus, the overall objective is to reduce process variation and the production and maintenance costs. In this paper, an integrated model involving Statistical Process Control (SPC) and maintenance is developed to achieve this goal. Therefore, the main focus of this paper is to develop the jointly optimal maintenance and statistical process control policy minimizing the total long-run expected average cost per unit time. In our model, the production process can go out of control due to either the deterioration of equipment or other assignable causes. The equipment is also subject to failures in any of the operating states due to deterioration and aging. Hence, the process mean is controlled by an Xbar control chart using equidistant sampling epochs. We assume that the machine inspection epochs are the times when the control chart signals an out-of-control condition, considering both true and false alarms. At these times, the production process will be stopped, and an investigation will be conducted not only to determine whether it is a true or false alarm, but also to identify the causes of the true alarm, whether it was caused by the change in the machine setting, by other assignable causes, or by both. If the system is out of control, the proper actions will be taken to bring it back to the in-control state. At these epochs, a maintenance action can be taken, which can be no action or preventive replacement of the unit. When the equipment is in the failure state, a corrective maintenance action is performed, which can be minimal repair or replacement of the machine, and the process is brought to the in-control state. A semi-Markov decision process (SMDP) framework is used to formulate and solve the joint control problem. A numerical example is developed to demonstrate the effectiveness of the control policy.
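As a minimal illustration of the Xbar-chart monitoring that the joint policy builds on, the sketch below computes Shewhart control limits and checks one sampling epoch; the in-control parameters and subgroup values are assumed for illustration and are not taken from the paper, and the SMDP optimization itself is not reproduced.

```python
import numpy as np

# Illustrative Shewhart Xbar chart limits for monitoring the process mean.
# mu0, sigma, and n are assumed in-control parameters, not values from the paper.
mu0, sigma, n = 10.0, 0.5, 5          # in-control mean, process std dev, subgroup size
L = 3.0                                # control-limit width in standard errors
ucl = mu0 + L * sigma / np.sqrt(n)
lcl = mu0 - L * sigma / np.sqrt(n)

subgroup = np.array([10.2, 10.7, 10.4, 10.9, 10.5])   # one equidistant sample epoch
xbar = subgroup.mean()
out_of_control = xbar > ucl or xbar < lcl              # chart signal -> inspect machine
print(f"xbar = {xbar:.2f}, limits = [{lcl:.2f}, {ucl:.2f}], signal = {out_of_control}")
```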

Keywords: maintenance, semi-Markov decision process, statistical process control, Xbar control chart

Procedia PDF Downloads 91
36 Structural Correlates of Reduced Malicious Pleasure in Huntington's Disease

Authors: Sandra Baez, Mariana Pino, Mildred Berrio, Hernando Santamaria-Garcia, Lucas Sedeno, Adolfo Garcia, Sol Fittipaldi, Agustin Ibanez

Abstract:

Schadenfreude refers to the perceiver’s experience of pleasure at another’s misfortune. This is a multidetermined emotion which can be evoked by hostile feelings and envy. The experience of Schadenfreude engages mechanisms implicated in diverse social cognitive processes. For instance, Schadenfreude involves heightened reward processing, accompanied by increased striatal engagement and it interacts with mentalizing and perspective-taking abilities. Patients with Huntington's disease (HD) exhibit reductions of Schadenfreude experience, suggesting a role of striatal degeneration in such an impairment. However, no study has directly assessed the relationship between regional brain atrophy in HD and reduced Schadenfreude. This study investigated whether gray matter (GM) atrophy in HD patients correlates with ratings of Schadenfreude. First, we compared the performance of 20 HD patients and 23 controls on an experimental task designed to trigger Schadenfreude and envy (another social emotion acting as a control condition). Second, we compared GM volume between groups. Third, we examined brain regions where atrophy might be associated with specific impairments in the patients. Results showed that while both groups showed similar ratings of envy, HD patients reported lower Schadenfreude. The latter pattern was related to atrophy in regions of the reward system (ventral striatum) and the mentalizing network (precuneus and superior parietal lobule). Our results shed light on the intertwining of reward and socioemotional processes in Schadenfreude, while offering novel evidence about their neural correlates. In addition, our results open the door to future studies investigating social emotion processing in other clinical populations characterized by striatal or mentalizing network impairments (e.g., Parkinson’s disease, schizophrenia, autism spectrum disorders).

Keywords: envy, gray matter atrophy, Huntington's disease, Schadenfreude, social emotions

Procedia PDF Downloads 335
35 Modified Polysaccharide as Emulsifier in Oil-in-Water Emulsions

Authors: Tatiana Marques Pessanha, Aurora Perez-Gramatges, Regina Sandra Veiga Nascimento

Abstract:

Emulsions are commonly used in applications involving oil/water dispersions, where handling of interfaces becomes a crucial aspect. The use of emulsion technology has greatly evolved in the last decades to suit the most diverse uses, ranging from cosmetic products and biomedical adjuvants to complex industrial fluids. The stability of these emulsions is influenced by factors such as the amount of oil, the size of the droplets and the emulsifiers used. While commercial surfactants are typically used as emulsifiers to reduce interfacial tension, and therefore increase emulsion stability, these organic amphiphilic compounds are often toxic and expensive. A suitable alternative for emulsifiers can be obtained from the chemical modification of polysaccharides. Our group has been working on the modification of polysaccharides to be used as additives in a variety of fluid formulations. In particular, we have obtained promising results using chitosan, a natural and biodegradable polymer that can be easily modified due to the presence of amine groups in its chemical structure. In this way, it is possible to increase both the hydrophobic and hydrophilic character, which renders a water-soluble, amphiphilic polymer that can behave as an emulsifier. The aim of this work was the synthesis of chitosan derivatives structurally modified to act as surfactants in stable oil-in-water emulsions. The synthesis of the chitosan derivatives occurred in two steps, the first being the hydrophobic modification with the insertion of long hydrocarbon chains, while the second step consisted of the cationization of the amino groups. All products were characterized by infrared spectroscopy (FTIR) and carbon-13 nuclear magnetic resonance (13C-NMR) to evaluate the cationization and hydrophobization degrees. These modified polysaccharides were used to formulate oil-in-water (O:W) emulsions with different oil/water ratios (i.e., 25:75, 35:65, 60:40) using mineral paraffinic oil. The formulations were characterized according to the type of emulsion, density and rheology measurements, as well as emulsion stability at high temperatures. All emulsion formulations were stable for at least 30 days at room temperature (25°C), and in the case of the high oil content emulsion (60:40), the formulation was also stable at temperatures up to 100°C. The emulsion specific gravity was in the range of 0.87-0.90. The rheological study showed a viscoelastic behaviour in all formulations at room temperature, which is in agreement with the high stability shown by the emulsions, since the polymer acts not only by reducing interfacial tension but also by forming an elastic membrane at the oil/water interface that guarantees its integrity. The results obtained in this work are strong evidence of the possibility of using chemically modified polysaccharides as environmentally friendly alternatives to commercial surfactants in the stabilization of oil-in-water formulations.

Keywords: emulsion, polymer, polysaccharide, stability, chemical modification

Procedia PDF Downloads 353
34 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering

Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott

Abstract:

Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, to better understand the tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. An in-depth analysis of the state-of-the-art filtering methods for single cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduce an algorithm that filters and clusters single cell data simultaneously without relying on certain genes or thresholds chosen by eye. It detects communities in a Shared Nearest Neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with weak cluster membership. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data with interesting marker genes that are potentially relevant for medical research.
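A minimal sketch of the shared-nearest-neighbor graph construction and modularity-based community detection described above is given below; the expression matrix, the choice of k, and the use of scikit-learn and NetworkX are assumptions, and the iterative removal of weakly assigned vertices is only indicated by a comment.

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical expression matrix: rows = cells, columns = features (e.g. after PCA).
rng = np.random.default_rng(0)
cells = rng.normal(size=(300, 20))

# k-nearest-neighbour lists for every cell.
k = 15
knn = NearestNeighbors(n_neighbors=k).fit(cells)
_, idx = knn.kneighbors(cells)
neighbour_sets = [set(row) for row in idx]

# Shared Nearest Neighbor graph: edge weight = number of common neighbours.
G = nx.Graph()
G.add_nodes_from(range(len(cells)))
for i in range(len(cells)):
    for j in idx[i]:
        if j == i:
            continue
        shared = len(neighbour_sets[i] & neighbour_sets[int(j)])
        if shared > 0:
            G.add_edge(i, int(j), weight=shared)

# Modularity-optimising community detection; weakly attached vertices (noise
# candidates) would then be scored and removed before re-clustering.
communities = greedy_modularity_communities(G, weight="weight")
print(f"{len(communities)} candidate clusters found")
```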

Keywords: cancer research, graph theory, machine learning, single cell analysis

Procedia PDF Downloads 112
33 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images

Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir

Abstract:

The landing phase of a UAV is very critical, as there are many uncertainties in this phase which can easily entail a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, as one of the most important processes during the landing phase, is studied. Using accurate measurement sensors as an alternative approach can be very expensive, as for LIDAR, or limited in operational range, as for ultrasonic sensors. Additionally, absolute positioning systems like GPS or IMU cannot provide distance to the ground independently. The focus of this paper is to determine whether we can measure the relative distance and velocity between the UAV and the ground in the landing phase using just low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable feature in a series of images taken during the UAV landing. Two different approaches based on Extended Kalman Filters (EKF) have been proposed, and their performance in the estimation of the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process and the calculated optical flow as the measurement; on the other hand, the second approach uses the feature’s projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of variation of the projected point as the process, to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed has been used to compare the performance of the proposed algorithms. The case studies show that the low quality of the images results in considerable noise, which reduces the performance of the first approach. On the other hand, using the projected feature position is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
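As a simplified illustration of the filtering idea behind the first approach (UAV kinematics as the process model, an image-derived height measurement), the sketch below runs a linear constant-velocity Kalman filter; the EKF with optical-flow or pixel-position measurements used in the paper is not reproduced, and all numbers are illustrative.

```python
import numpy as np

# Simplified constant-velocity filter for vertical distance h and velocity v.
# A direct (noisy) height measurement is assumed purely for illustration.
dt = 0.05                                   # frame interval [s]
F = np.array([[1.0, dt], [0.0, 1.0]])       # process model: h += v*dt
H = np.array([[1.0, 0.0]])                  # measurement: height only
Q = np.diag([1e-4, 1e-3])                   # process noise (assumed)
R = np.array([[0.25]])                      # measurement noise (assumed)

x = np.array([[10.0], [0.0]])               # initial state: 10 m, 0 m/s
P = np.eye(2)

for z in [9.8, 9.4, 9.1, 8.7, 8.2]:         # fake height measurements [m]
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(f"estimated distance = {x[0,0]:.2f} m, velocity = {x[1,0]:.2f} m/s")
```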

Keywords: altitude estimation, drone, image processing, trajectory planning

Procedia PDF Downloads 113
32 Strategies for Management of Massive Intraoperative Airway Haemorrhage Complicating Surgical Pulmonary Embolectomy

Authors: Nicholas Bayfield, Liam Bibo, Kaushelandra Rathore, Lucas Sanders, Mark Newman

Abstract:

INTRODUCTION: Surgical pulmonary embolectomy is an established therapy for acute pulmonary embolism causing right heart dysfunction and haemodynamic instability. Massive intraoperative airway haemorrhage is a rare complication of pulmonary embolectomy. We present our institutional experience with massive airway haemorrhage complicating pulmonary embolectomy and discuss optimal therapeutic strategies. METHODS: A retrospective review of emergent surgical pulmonary embolectomy patients was undertaken. Cases complicated by massive intra-operative airway haemorrhage were identified. Intra- and peri-operative management strategies were analysed and discussed. RESULTS: Of 76 patients undergoing emergent or salvage pulmonary embolectomy, three cases (3.9%) of massive intraoperative airway haemorrhage were identified. Haemorrhage always began on weaning from cardiopulmonary bypass. Successful management strategies involved intraoperative isolation of the side of bleeding, occlusion of the affected airway with an endobronchial blocker, institution of veno-arterial (VA) extracorporeal membrane oxygenation (ECMO) and reversal of anticoagulation. Running the ECMO without heparinisation allows coagulation to occur. Airway haemorrhage was controlled within 24 hours of operation in all patients, allowing re-institution of dual lung ventilation and decannulation from ECMO. One case, in which positive end-expiratory airway pressure was trialled initially, was complicated by air embolism. Although airway haemorrhage was controlled successfully in all cases, all patients died in-hospital for reasons unrelated to the airway haemorrhage. CONCLUSION: Massive intraoperative airway haemorrhage during pulmonary embolectomy is a rare complication with potentially catastrophic outcomes. Reperfusion alveolar and capillary injury is the likely aetiology. With a systematic approach to management, airway haemorrhage can be well controlled intra-operatively and often resolves within 24 hours. Stopping blood flow to the pulmonary arteries and supporting oxygenation by instituting VA ECMO are important. This management was successful in our three cases.

Keywords: pulmonary embolectomy, cardiopulmonary bypass, cardiac surgery, pulmonary embolism

Procedia PDF Downloads 176
31 Rethinking the Residential Paradigm: Regenerative Design and the Contemporary Housing Industry

Authors: Gabriela Lucas Sanchez

Abstract:

The contemporary housing industry is dominated by tract houses, which prioritize uniformity and cost-efficiency over environmental and ecological considerations. However, as the world faces the growing challenges of climate change and resource depletion, there is an urgent need to rethink the residential paradigm. This essay explores how regenerative practices can be integrated into standard residential designs to create a shift that reduces the environmental impact of housing and actively contributes to ecological health. Passive sustainable practices, such as passive solar design, natural ventilation, and the use of energy-efficient materials, aim to maximize resource use efficiency, minimize waste, and create healthy living environments. Regenerative practices, on the other hand, go beyond sustainability to work in harmony with natural systems, actively restoring and enriching the environment. Integrating these two approaches can redefine the residential paradigm, creating homes that reduce harm and positively impact the local ecosystem. The essay begins by exploring the principles and benefits of passive sustainable practices, discussing how they can reduce energy consumption and improve indoor environmental quality in standardized housing. Passive sustainability minimizes energy consumption through strategic design choices, such as optimizing building orientation, utilizing natural ventilation, and incorporating high-performance insulation and glazing. However, while sustainability efforts have been important steps in the right direction, a more holistic, regenerative approach is needed to address the root causes of environmental degradation. Regenerative development and design seek to go beyond simply reducing negative impacts, instead aiming to create built environments that actively contribute to restoring and enhancing natural systems. This shift in perspective is critical, as it recognizes the interdependence between human settlements and the natural world and the potential for buildings to serve as catalysts for positive change.

Keywords: passive sustainability, regenerative architecture, residential architecture, community

Procedia PDF Downloads 35
30 Syntheses of Anionic Poly(urethanes) with Imidazolium, Phosphonium, and Ammonium as Counter-cations and Their Evaluation for CO2 Separation

Authors: Franciele L. Bernard, Felipe Dalla Vecchia, Barbara B. Polesso, Jose A. Donato, Marcus Seferin, Rosane Ligabue, Jailton F. do Nascimento, Sandra Einloft

Abstract:

The increasing level of carbon dioxide concentration in the atmosphere, related to fossil fuel processing and utilization, is contributing considerably to global warming phenomena. Carbon capture and storage (CCS) technologies appear as one of the key technologies to reduce CO2 emissions, mitigating the effects of climate change. Absorption using amine solutions as solvents has been extensively studied and used in industry for decades. However, solvent degradation and equipment corrosion are two of the main problems in this process. Poly (ionic liquid) (PIL) is considered a promising material for CCS technology, potentially more environmentally friendly and less energy demanding than traditional materials. PILs possess a unique combination of ionic liquid (IL) features, such as affinity for CO2, thermal and chemical stability and adjustable properties, coupled with the intrinsic properties of the polymer. This study investigated new poly (ionic liquids) (PILs) based on polyurethanes with different ionic liquid cations and their potential for CO2 capture. The PILs were synthesized by the addition of a diisocyanate to a difunctional polyol, followed by an exchange reaction with the ionic liquids 1-butyl-3-methylimidazolium chloride (BMIM Cl), tetrabutylammonium bromide (TBAB) and tetrabutylphosphonium bromide (TBPB). These materials were characterized by Fourier transform infrared spectroscopy (FTIR), proton nuclear magnetic resonance (1H-NMR), atomic force microscopy (AFM), tensile strength analysis, field emission scanning electron microscopy (FESEM), thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC). The CO2 sorption capacities of the PILs were gravimetrically assessed in a Magnetic Suspension Balance (MSB). It was found that the ionic liquid cation influences the compounds' properties as well as their CO2 sorption. The best result for CO2 sorption (123 mgCO2/g at 30 bar) was obtained for the PIL PUPT-TBA. The higher CO2 sorption in PUPT-TBA is probably linked to the fact that the tetraalkylammonium cation, having a higher positive charge density, can interact more strongly with CO2, while the imidazolium charge is delocalized. The comparative CO2 sorption values of PUPT-TBA with different ionic liquids showed that this material has a greater capacity for capturing CO2 when compared to the ILs, even at higher temperatures. This behavior highlights the importance of this study, as the poly(urethane)-based PILs are cheap and versatile materials.
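A small sketch of how a gravimetric sorption capacity such as the 123 mgCO2/g quoted above is obtained from magnetic suspension balance readings is shown below; the masses are hypothetical and the buoyancy correction is omitted.

```python
# Hypothetical MSB readings (buoyancy correction omitted for simplicity).
m_dry = 0.5000          # degassed sample mass [g]
m_loaded = 0.5615       # sample mass at equilibrium under CO2 pressure [g]

capacity_mg_per_g = (m_loaded - m_dry) / m_dry * 1000.0
print(f"CO2 sorption capacity ≈ {capacity_mg_per_g:.0f} mg CO2 / g PIL")  # ≈ 123 mg/g
```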

Keywords: capture, CO2, ionic liquids, ionic poly(urethane)

Procedia PDF Downloads 234
29 Comparative Analysis of Costs and Well Drilling Techniques for Water, Geothermal Energy, Oil and Gas Production

Authors: Thales Maluf, Nazem Nascimento

Abstract:

The development of society relies heavily on the total amount of energy obtained and its consumption. Over the years, there has been an advancement in energy attainment, which is directly related to some natural resources and developing systems. Some of these resources should be highlighted for their remarkable presence in the world's energy grid, such as water, petroleum, and gas, while others deserve attention for representing an alternative to diversify the energy grid, like geothermal sources. Therefore, because all these resources can be extracted from underground, drilling wells is a mandatory activity in terms of exploration, and it involves a previous geological study and an adequate preparation. It also involves a cleaning process and an extraction process that can be executed by different procedures. For that reason, this research aims at the enhancement of exploration processes through a comparative analysis of drilling costs and the techniques used to produce wells. The analysis itself is a bibliographical review based on books, scientific papers and academic works, and mainly explores drilling methods and technologies, equipment used, well measurements, extraction methods, and production costs. Besides techniques and costs regarding the drilling processes, some properties and general characteristics of these sources are also compared. Preliminary studies show that there are some major differences regarding the exploration processes, mostly because these resources are naturally distinct. Water wells, for instance, have hundreds of meters of length because water is stored close to the surface, while oil, gas, and geothermal production wells can reach thousands of meters, which makes them more expensive to be drilled. The drilling methods present some general similarities, especially regarding the main mechanism of perforation, but since water is a resource stored closer to the surface than the other ones, there is a wider variety of methods. Water wells can be drilled by rotary mechanisms, percussion mechanisms, rotary-percussion mechanisms, and some other simpler methods. Oil and gas production wells, on the other hand, require rotary or rotary-percussion drilling with a proper structure called a drill rig and resistant materials for the drill bits and the other components, mostly because they are stored in sedimentary basins that can be located thousands of meters under the ground. Geothermal production wells also require rotary or rotary-percussion drilling and require the existence of an injection well and an extraction well. The exploration efficiency also depends on the permeability of the soil, which is why Enhanced Geothermal Systems (EGS) have been developed. Throughout this review study, it can be verified that the analysis of the extraction processes of energy resources is essential, since these resources are responsible for society's development. Furthermore, the comparative analysis of costs and well drilling techniques for water, geothermal energy, oil, and gas production, which is the main goal of this research, can enable the growth of the energy generation field through the emergence of ideas that improve the efficiency of energy generation processes.

Keywords: drilling, water, oil, gas, geothermal energy

Procedia PDF Downloads 145
28 Library Outreach After COVID: Making the Case for In-Person Library Visits

Authors: Lucas Berrini

Abstract:

Academic libraries have always struggled with engaging students and faculty. Striking the balance between what the community needs and what the library can afford has also been a point of contention for libraries. As academia begins to return to a new normal after COVID, library staff are rethinking how to remind patrons that the library is open and ready for business. NC Wesleyan, a small liberal arts school in eastern North Carolina, decided to be proactive and reach out to the academic community. After shutting down in 2020 for COVID, the campus library saw a marked decrease in in-person attendance. For a small school whose operational budget was tied directly to tuition payments, it was imperative for the library to remind faculty and staff that it was open for business. At the beginning of the Summer 2022 term and continuing into the fall, the reference team created a marketing plan using email, physical meetings, and virtual events targeted at students and faculty, as well as community members who utilized the facilities prior to COVID. The email blasts were gentle reminders that the building was open and available for use. The target audience was the community at large. Several of the emails contained reminders of previous events in the library that were student centered. The next phase of the email campaign centers on reminding the community about the library's physical and electronic resources, including the makerspace lab. Language will indicate that student voices are needed, and a QR code is included for students to leave feedback as to what they want to see in the library. The final phase of the email blasts was faculty focused and invited faculty to connect with library reference staff for an in-person consultation on their research needs. While this phase is ongoing, the response has been positive, and staff are compiling data in hopes of working with administration to implement some of the requested services and materials. These email blasts will be followed up by in-person meetings with faculty and students who responded to the QR codes. This research is ongoing. This type of targeted outreach is new for Wesleyan. It is the hope of the library that by the end of Fall 2022, there will be a plan in place to address the needs and concerns of the students and faculty. Furthermore, the staff hopes to create a new sense of community for the students and staff of the university.

Keywords: academic, education, libraries, outreach

Procedia PDF Downloads 94
27 Microscale Observations of Gas Cell Wall Rupture in Bread Dough during Baking and Comparison with 2D/3D Finite Element Simulations of Stress Concentration

Authors: Kossigan Bernard Dedey, David Grenier, Tiphaine Lucas

Abstract:

Bread dough is often described as a dispersion of gas cells in a continuous gluten/starch matrix. The final bread crumb structure is strongly related to gas cell wall (GCW) rupture during baking. At the end of proofing and during baking, part of the thinnest GCWs between expanding gas cells is reduced to a gluten film of about the size of a starch granule. When such a size is reached, gluten and starch granules must be considered as interacting phases in order to account for heterogeneities and appropriately describe GCW rupture. Among the experimental investigations carried out to assess GCW rupture, none observed GCW rupture under baking conditions at the GCW scale. In addition, attempts to numerically understand GCW rupture are usually not performed at the GCW scale and often consider GCWs as continuous. The most relevant paper that accounted for heterogeneities dealt with the gluten/starch interactions and their impact on the mechanical behavior of dough film. However, stress concentration in the GCW was not discussed. In this study, both experimental and numerical approaches were used to better understand GCW rupture in bread dough during baking. Experimentally, a macro-scope placed in front of a two-chamber device was used to observe the rupture of a real GCW of 200 micrometers in thickness. Special attention was paid to mimicking baking conditions as far as possible (temperature, gas pressure and moisture). Various differences in pressure between the two sides of the GCW were applied, and different modes of fracture initiation and propagation in GCWs were observed. Numerically, the impact of gluten/starch interactions (cohesion or non-cohesion) and of the rheological moduli ratio on the mechanical behavior of the GCW under unidirectional extension was assessed in 2D/3D. A non-linear viscoelastic and hyperelastic approach was used to capture the finite strain involved in the GCW during baking. Stress concentration within the GCW was identified. The simulated stress concentrations were discussed in the light of the GCW failures observed in the device. The gluten/starch granule interactions and the rheological modulus ratio were found to have a great effect on the amount of stress possibly reached in the GCW.
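The constitutive law used in the study is not reproduced here; as a minimal illustration of finite-strain stress in a stretched film, the sketch below evaluates the incompressible neo-Hookean uniaxial formula σ = μ(λ² − 1/λ) with an assumed shear modulus.

```python
import numpy as np

# Incompressible neo-Hookean uniaxial true stress: sigma = mu * (lam**2 - 1/lam).
# mu (shear modulus) is an assumed value; the paper's viscoelastic/hyperelastic
# law and gluten/starch interactions are not reproduced here.
mu = 2.0e3                                  # Pa, assumed shear modulus of the film
lam = np.linspace(1.0, 3.0, 5)              # stretch ratios of the gas cell wall
sigma = mu * (lam**2 - 1.0 / lam)
for l, s in zip(lam, sigma):
    print(f"stretch {l:.1f} -> true stress {s:8.1f} Pa")
```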

Keywords: dough, experimental, numerical, rupture

Procedia PDF Downloads 122
26 Recovering Copper from Tailings and E-Waste to Create Copper Nanoparticles with Antimicrobial Properties

Authors: Erico R. Carmona, Lucas Hernandez-Saravia, Aliro Villacorta, Felipe Carevic

Abstract:

Tailings and electronic waste (e-waste) are an important source of global contamination. Chile is one of the Organisation for Economic Co-operation and Development (OECD) member countries that recycles the least of this kind of industrial waste, reaching only 3% of the total. Tailings and e-waste recycling offers a valuable tool to minimize the increasing accumulation of waste, supplement the scarcity of some raw materials and obtain economic benefits through their commercialization. It should be noted that this type of industrial waste is an important source of valuable metals, such as copper, which allows generating new businesses and added value through its transformation into new materials with advanced physical and biological properties. In this sense, the development of nanotechnology has led to the creation of nanomaterials with multiple applications given their unique physicochemical properties. Among others, copper nanoparticles (CuNPs) have gained great interest due to their optical, catalytic and conductive properties, and particularly because of their broad-spectrum antimicrobial activity. There are different synthesis methods for copper nanoparticles; however, green synthesis is one of the most promising methodologies, since it is simple, low-cost, ecological, and generates stable nanoparticles, which makes it a suitable approach for scaling up. Currently, there are few initiatives that involve the development of methods for the recovery and transformation of copper from waste to produce nanoparticles with new properties and better technological benefits. Thus, the objective of this work is to show preliminary data about the development of a sustainable transformation process for tailings and e-waste that allows obtaining a copper-based nanotechnological product with potential antimicrobial applications. For this, samples of tailings and e-waste collected from the Tarapacá and Antofagasta regions of northern Chile were used to recover copper through efficient, ecological, and low-cost alkaline hydrometallurgical treatments, which allow obtaining copper with a high degree of purity. On the other hand, the transformation process from recycled copper to a nanomaterial was carried out through a green synthesis approach using vegetal organic residue extracts that allows obtaining CuNPs, following methodologies previously reported by the authors. Initial physical characterization with UV-Vis, FTIR, AFM, and TEM methodologies will be reported for the synthesized CuNPs.

Keywords: nanomaterials, industrial waste, Chile, recycling

Procedia PDF Downloads 96
25 Inertial Motion Capture System for Biomechanical Analysis in Rehabilitation and Sports

Authors: Mario Sandro F. Rocha, Carlos S. Ande, Anderson A. Oliveira, Felipe M. Bersotti, Lucas O. Venzel

Abstract:

The inertial motion capture systems (mocap) are among the most suitable tools for quantitative clinical analysis in rehabilitation and sports medicine. The inertial measuring units (IMUs), composed of accelerometers, gyroscopes, and magnetometers, are able to measure spatial orientations and calculate displacements with sufficient precision for applications in biomechanical analysis of movement. Furthermore, this type of system is relatively affordable and has the advantages of portability and independence from external references. In this work, we present the latest version of our inertial motion capture system, based on the foregoing technology, with a Unity interface designed for rehabilitation and sports. In our hardware architecture, only one serial port is required. First, the board client must be connected to the computer by a USB cable. Next, an available serial port is configured and opened to establish the communication between the client and the application, and then the client starts scanning for the active MOCAP_S servers around. The servers play the role of the inertial measuring units that capture the movements of the body and send the data to the client, which in turn creates a package composed of the ID of the server, the current timestamp, and the motion capture data defined in the client's pre-configuration of the capture session. In the current version, we can measure the game rotation vector (grv) and linear acceleration (lacc), and we also have a step detector that can be enabled or disabled. The grv data are processed and directly linked to the bones of the 3D model, and, along with the lacc and step detector data, they are also used to perform the calculations of displacements and other variables shown on the graphical user interface. Our user interface was designed to calculate and present variables that are important for rehabilitation and sports, such as cadence, speed, total gait cycle, gait cycle length, obliquity and rotation, and center of gravity displacement. Our goal is to present a low-cost, portable and wearable system with a friendly interface for application in biomechanics and sports, which also performs as a product of high precision and low energy consumption.
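As a rough sketch of how quantities such as cadence and a displacement estimate can be derived from the step-detector timestamps and the lacc stream described above, the example below uses an assumed sampling rate and synthetic signals; drift compensation and the actual packet format are not reproduced.

```python
import numpy as np

fs = 100.0                                   # assumed IMU sampling rate [Hz]
dt = 1.0 / fs

# Hypothetical vertical linear-acceleration stream (gravity already removed by lacc).
t = np.arange(0, 10, dt)
lacc_z = 0.8 * np.sin(2 * np.pi * 1.8 * t)   # ~1.8 steps per second

# Cadence from step-detector timestamps (here: hypothetical detected steps).
step_times = np.arange(0.28, 10, 1 / 1.8)
cadence_spm = 60.0 * len(step_times) / (t[-1] - t[0])   # steps per minute

# Naive displacement by double integration of linear acceleration (drift ignored).
velocity = np.cumsum(lacc_z) * dt
displacement = np.cumsum(velocity) * dt

print(f"cadence ≈ {cadence_spm:.0f} steps/min, "
      f"vertical excursion ≈ {displacement.max() - displacement.min():.2f} m")
```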

Keywords: biomechanics, inertial sensors, motion capture, rehabilitation

Procedia PDF Downloads 140
24 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting

Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade

Abstract:

The recent and fast development of the internet, wireless and telecommunication technologies and low-power electronic devices has led to an expressive amount of electromagnetic energy available in the environment and to the expansion of smart applications. These applications have been used in Internet of Things devices and 4G and 5G solutions. The main feature of this technology is the use of wireless sensors. Although these sensors are low-power loads, their use imposes huge challenges in terms of an efficient and reliable way of supplying power in order to avoid the traditional battery. Radio frequency based energy harvesting technology is especially suitable for powering wireless sensors by using a rectenna, since it can be completely integrated into the distributed hosting sensors structure, reducing its cost, maintenance and environmental impact. A rectenna is a device composed of an antenna and a rectifier circuit. The antenna function is to collect as much radio frequency radiation as possible and transfer it to the rectifier, which is a nonlinear circuit that converts the very low input radio frequency energy into direct current voltage. In this work, a set of rectennas, mounted on a paper substrate, which can be used for the inner coating of buildings and simultaneously harvest electromagnetic energy from the environment, is proposed. Each proposed individual rectenna is composed of a 2.45 GHz patch antenna and a voltage doubler rectifier circuit, built on the same paper substrate. The antenna contains a rectangular radiator element and a microstrip transmission line that was designed and optimized using CST (Computer Simulation Technology) software in order to obtain values of the S11 parameter below -10 dB at 2.45 GHz. In order to increase the amount of harvested power, eight individual rectennas, incorporating metamaterial cells, were connected in parallel, forming a system denominated Electromagnetic Wall (EW). In order to evaluate the EW performance, it was positioned at a variable distance from the internet router, and a 27 kΩ resistive load was fed. The results obtained showed that if more than one rectenna is associated in parallel, a power level sufficient to feed very low consumption sensors can be achieved. The 0.12 m2 EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides an expressive increase in the amount of electromagnetic energy harvested, which rose from 0.2 mW to 0.6 mW.
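A short worked check of the figures quoted above (0.6 mW harvested by the 0.12 m² wall feeding a 27 kΩ load) is given below, assuming the rectified output is a DC voltage across the resistive load.

```python
import math

P_load = 0.6e-3        # harvested power from the abstract [W]
R_load = 27e3          # resistive load [ohm]
area = 0.12            # electromagnetic wall area [m^2]

V_dc = math.sqrt(P_load * R_load)          # V = sqrt(P*R) for a DC load
power_density = P_load / area              # harvested power per unit wall area

print(f"DC voltage across load ≈ {V_dc:.1f} V")                   # ≈ 4.0 V
print(f"areal power density ≈ {power_density*1e3:.1f} mW/m^2")    # ≈ 5.0 mW/m^2
```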

Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit

Procedia PDF Downloads 166
23 Use of Shipping Containers as Office Buildings in Brazil: Thermal and Energy Performance for Different Constructive Options and Climate Zones

Authors: Lucas Caldas, Pablo Paulse, Karla Hora

Abstract:

Shipping containers are present in different Brazilian cities, first used for transportation purposes but becoming waste materials and an environmental burden at their end-of-life cycle. In the last decade, in Brazil, some buildings made partly or totally from shipping containers started to appear, most of them for commercial and office uses. Although the use of a reused container for buildings seems a sustainable solution, it is very important to measure the thermal and energy aspects when they are used as such. In this context, this study aims to evaluate the thermal and energy performance of an office building made entirely from a 12-meter-long High Cube 40’ shipping container in different Brazilian bioclimatic zones. Four constructive solutions, among those most used in Brazil, were chosen: (1) container without any covering; (2) with internally insulated drywall; (3) with external fiber cement boards; (4) with both drywall and fiber cement boards. For this, DesignBuilder with EnergyPlus was used for the computational simulation over 8760 hours. The EnergyPlus Weather File (EPW) data of six Brazilian capital cities were considered: Curitiba, Sao Paulo, Brasilia, Campo Grande, Teresina and Rio de Janeiro. A split-type air conditioning appliance was adopted for the conditioned area, and the cooling setpoint was fixed at 25°C. The coefficient of performance (CoP) of the air conditioning equipment was set to 3.3. Three values of solar absorptance of the exterior layer were evaluated: 0.3, 0.6 and 0.9. The building in Teresina presented the highest level of energy consumption, while the one in Curitiba presented the lowest, with a wide range of differences in the results. The constructive option with external fiber cement and drywall presented the best results, although the differences were not significant compared to the solution using just drywall. The choice of absorptance showed a great impact on energy consumption, mainly compared to the case of containers without any covering and for use in the hottest cities: Teresina, Rio de Janeiro, and Campo Grande. This study brings as its main contribution the discussion of constructive aspects for design guidelines for more energy-efficient container buildings, considering local climate differences, and helps the dissemination of this cleaner constructive practice in the Brazilian building sector.
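As a small illustration of how the CoP of 3.3 quoted above links the simulated cooling demand to the electricity consumption compared across cities, the sketch below uses a hypothetical annual cooling load; only the CoP value comes from the study description.

```python
# Hypothetical annual sensible cooling load for one container office [kWh/year];
# only the CoP value (3.3) comes from the study description.
annual_cooling_load_kwh = 4200.0
cop = 3.3

electricity_kwh = annual_cooling_load_kwh / cop     # energy drawn by the split unit
print(f"air-conditioning electricity ≈ {electricity_kwh:.0f} kWh/year")
```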

Keywords: bioclimatic zones, Brazil, shipping containers, thermal and energy performance

Procedia PDF Downloads 174
22 N-Glycosylation in the Green Microalgae Chlamydomonas reinhardtii

Authors: Pierre-Louis Lucas, Corinne Loutelier-Bourhis, Narimane Mati-Baouche, Philippe Chan Tchi-Song, Patrice Lerouge, Elodie Mathieu-Rivet, Muriel Bardor

Abstract:

N-glycosylation is a post-translational modification taking place in the Endoplasmic Reticulum and the Golgi apparatus, where defined glycan features are added to proteins at a very specific sequence, Asn-X-Thr/Ser/Cys, where X can be any amino acid except proline. Because it is well established that those N-glycans play a critical role in protein biological activity and protein half-life, and that a different N-glycan structure may induce an immune response, they are very important for biopharmaceuticals, which are mainly glycoproteins bearing N-glycans. Currently, most biopharmaceuticals are produced by mammalian cells like Chinese Hamster Ovary (CHO) cells for their N-glycosylation similar to the human one, but due to the high production costs, several other species are being investigated as possible alternative systems. For this purpose, the green microalga Chlamydomonas reinhardtii was investigated as a potential production system for biopharmaceuticals. This choice was influenced by the fact that C. reinhardtii is a well-studied microalga that grows fast, with many molecular biology tools available. This organism also produces N-glycans on its endogenous proteins. However, the analysis of the N-glycan structures of this microalga has revealed some differences as compared to humans. Unlike humans, where the glycans are processed by key enzymes called N-acetylglucosaminyltransferase I and II (GnTI and GnTII), which add GlcNAc residues to form a GlcNAc₂Man₃GlcNAc₂ core N-glycan, C. reinhardtii lacks those two enzymes and possesses a GnTI-independent glycosylation pathway. Moreover, some enzymes like xylosyltransferases and methyltransferases, not present in humans, are thought to act on the glycans of C. reinhardtii. Furthermore, a recent structural study by mass spectrometry showed that the N-glycosylation precursor, supposed to be conserved in almost all eukaryotic cells, results in a linear Man₅GlcNAc₂ rather than a branched one in C. reinhardtii. In this work, we will discuss the newly released MS information on the C. reinhardtii N-glycan structures and its impact on our attempt to modify the glycans in a human manner. Two strategies will be discussed. The first one consists of the study of xylosyltransferase insertional mutants from the CLIP library in order to remove xyloses from the N-glycans. The second goes further in the humanization by transforming the microalga with exogenous genes from Toxoplasma gondii having activities similar to GnTI and GnTII, with the aim of synthesizing GlcNAc₂Man₃GlcNAc₂ in C. reinhardtii.
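A minimal sketch of locating the N-glycosylation sequon described above (Asn-X-Thr/Ser/Cys, with X any residue except proline) in a protein sequence is given below; the example sequence is made up.

```python
import re

# N-glycosylation sequon: Asn, any residue except Pro, then Thr/Ser/Cys.
SEQUON = re.compile(r"N[^P][TSC]")

protein = "MKNLTAAPNGSWQNPTCRLLNATK"   # made-up example sequence
for match in SEQUON.finditer(protein):
    print(f"potential N-glycosylation site at position {match.start() + 1}: {match.group()}")
```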

Keywords: Chlamydomonas reinhardtii, N-glycosylation, glycosyltransferase, mass spectrometry, humanization

Procedia PDF Downloads 177
21 Preliminary Study of Water-Oil Separation Process in Three-Phase Separators Using Factorial Experimental Designs and Simulation

Authors: Caroline M. B. De Araujo, Helenise A. Do Nascimento, Claudia J. Da S. Cavalcanti, Mauricio A. Da Motta Sobrinho, Maria F. Pimentel

Abstract:

Oil production is often accompanied by the joint production of water and gas. During the journey up to the surface, due to severe conditions of temperature and pressure, mixing of these three components normally occurs. Thus, the three-phase separation process must be one of the first steps performed after crude oil extraction, and the water-oil separation is the most complex and important step, since the presence of water in the process line can increase corrosion and hydrate formation. A wide range of methods can be applied for oil-water separation, the most commonly used being flotation, hydrocyclones, and three-phase separator vessels. In this context, the aim of this paper is to study a system consisting of a three-phase separator, evaluating the influence of three variables: temperature, working pressure and separator type, for two types of oil (light and heavy), by performing two 2³ factorial designs, in order to find the best operating condition. In this case, the purpose is to obtain the greatest oil flow rate in the product stream (m3/h) as well as the lowest percentage of water in the oil stream. The simulation of the three-phase separator was performed using the Aspen Hysys® 2006 simulation software in stationary mode, and the evaluation of the factorial experimental designs was performed using the Statistica® software. From the general analysis of the four normal probability plots of effects obtained, it was observed that the two- and three-factor interaction effects did not show statistical significance at 95% confidence, since all the values were very close to zero. Similarly, the main effect "separator type" did not show significant statistical influence in any situation. As, in this case, the volumetric flows of water, oil and gas were assumed to be equal in the inlet stream, the separator type effect may, in fact, not be significant for the proposed system. Nevertheless, the main effect "temperature" was significant for both responses (oil flow rate and mass fraction of water in the oil stream), considering both light and heavy oil, so that the best operating condition occurs with the temperature at its lowest level (30°C), since the higher the temperature, the more the lighter oil components pass into the vapor phase, going to the gas stream. Furthermore, the higher the temperature, the more water vapor is formed, which ends up going into the lighter stream (the oil stream), making the separation process more difficult. Regarding the "working pressure", this effect showed to be significant only for the oil flow rate, so that the best operating condition occurs with the pressure at its highest level (9 bar), since a higher operating pressure, in this case, indicated a lower pressure drop inside the vessel, generating a lower level of turbulence inside the separator. In conclusion, the best operating condition obtained for the proposed system, within the studied range, occurs when the temperature is at its lowest level and the working pressure is at its highest level.
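As a minimal illustration of how main effects are estimated from a coded 2³ factorial design such as the one above, the sketch below uses hypothetical oil flow-rate responses, not results from the study; the signs were chosen merely to echo the reported trends (lower temperature and higher pressure favouring oil flow, separator type negligible).

```python
import itertools
import numpy as np

# Coded 2^3 design: temperature, working pressure, separator type at levels -1/+1.
factors = ["temperature", "pressure", "separator_type"]
design = np.array(list(itertools.product([-1, 1], repeat=3)))

# Hypothetical oil flow-rate responses [m3/h] for the 8 runs (not from the study).
y = np.array([50.4, 50.6, 53.4, 53.6, 46.4, 46.6, 49.4, 49.6])

# Main effect of each factor = mean(y at +1) - mean(y at -1).
for j, name in enumerate(factors):
    effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
    print(f"main effect of {name}: {effect:+.2f} m3/h")
```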

Keywords: factorial experimental design, oil production, simulation, three-phase separator

Procedia PDF Downloads 286
20 Building Information Modelling: A Solution to the Limitations of Prefabricated Construction

Authors: Lucas Peries, Rolla Monib

Abstract:

The construction industry plays a vital role in the global economy, contributing billions of dollars annually. However, the industry has been struggling with persistently low productivity levels for years, unlike other sectors that have shown significant improvements. Modular and prefabricated construction methods have been identified as potential solutions to boost productivity in the construction industry. These methods offer time advantages over traditional construction methods. Despite their potential benefits, modular and prefabricated construction face hindrances and limitations that are not present in traditional building systems. Building information modelling (BIM) has the potential to address some of these hindrances, but barriers are preventing its widespread adoption in the construction industry. This research aims to enhance understanding of the shortcomings of modular and prefabricated building systems and develop BIM-based solutions to alleviate or eliminate these hindrances. The research objectives include identifying and analysing key issues hindering the use of modular and prefabricated building systems, investigating the current state of BIM adoption in the construction industry and factors affecting its successful implementation, proposing BIM-based solutions to address the issues associated with modular and prefabricated building systems, and assessing the effectiveness of the developed solutions in removing barriers to their use. The research methodology involves conducting a critical literature review to identify the key issues and challenges in modular and prefabricated construction and BIM adoption. Additionally, an online questionnaire will be used to collect primary data from construction industry professionals, allowing for feedback and evaluation of the proposed BIM-based solutions. The data collected will be analysed to evaluate the effectiveness of the solutions and their potential impact on the adoption of modular and prefabricated building systems. The main findings of the research indicate that the identified issues from the literature review align with the opinions of industry professionals, and the proposed BIM-based solutions are considered effective in addressing the challenges associated with modular and prefabricated construction. However, the research has limitations, such as a small sample size and the need to assess the feasibility of implementing the proposed solutions. In conclusion, this research contributes to enhancing the understanding of modular and prefabricated building systems' limitations and proposes BIM-based solutions to overcome these limitations. The findings are valuable to construction industry professionals and BIM software developers, providing insights into the challenges and potential solutions for implementing modular and prefabricated construction systems in future projects. Further research should focus on addressing the limitations and assessing the feasibility of implementing the proposed solutions from technical and legal perspectives.

Keywords: building information modelling, modularisation, prefabrication, technology

Procedia PDF Downloads 98
19 Photovoltaic-Driven Thermochemical Storage for Cooling Applications to Be Integrated in Polynesian Microgrids: Concept and Efficiency Study

Authors: Franco Ferrucci, Driss Stitou, Pascal Ortega, Franck Lucas

Abstract:

The energy situation in tropical insular regions, as found in the French Polynesian islands, presents a number of challenges, such as high dependence on imported fuel, high transport costs from the mainland and weak electricity grids. Alternatively, these regions have a variety of renewable energy resources, which favor the exploitation of smart microgrids and energy storage technologies. With regard to the electrical energy demand, the high temperatures in these regions during the entire year imply that a large proportion of consumption is used for cooling buildings, even during the evening hours. In this context, this paper presents an air conditioning system driven by photovoltaic (PV) electricity that combines a refrigeration system and a thermochemical storage process. Thermochemical processes are able to store energy in the form of chemical potential with virtually no losses, and this energy can be used to produce cooling during the evening hours without the need to run a compressor (thus no electricity is required). Such storage processes implement thermochemical reactors in which a reversible chemical reaction between a solid compound and a gas takes place. The solid/gas pair used in this study is BaCl2 reacting with ammonia (NH3), which is also the coolant fluid in the refrigeration circuit. In the proposed system, the PV-driven electric compressor is used during the daytime either to run the refrigeration circuit when a cooling demand occurs or to decompose the ammonia-charged salt and remove the gas from the thermochemical reactor when no cooling is needed. During the evening, when there is no electricity from the solar source, the system changes its configuration and the reactor reabsorbs the ammonia gas from the evaporator and produces the cooling effect. In comparison to classical PV-driven air conditioning units equipped with electrochemical batteries (e.g., Pb, Li-ion), the proposed system has the advantage of a novel storage technology with a much longer charge/discharge life cycle and no self-discharge. It also allows continuous operation of the electric compressor during the daytime, thus avoiding the problems associated with on-off cycling. This work focuses on the system concept and on the efficiency study of its main components. It also compares thermochemical storage with electrochemical storage as well as with other forms of thermal storage, such as latent (ice) and sensible heat (chilled water) storage. The preliminary results show that the system seems to be a promising alternative to simultaneously fulfill cooling and energy storage needs in tropical insular regions.

Keywords: microgrid, solar air-conditioning, solid/gas sorption, thermochemical storage, tropical and insular regions

Procedia PDF Downloads 241
18 A World Map of Seabed Sediment Based on 50 Years of Knowledge

Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès

Abstract:

The production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches of aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This original approach had already been initiated a century earlier, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts and then sediment maps of the continental shelves of Europe and North America. The current world ocean sediment map was initiated from UNESCO's general map of the deep ocean floor. This map was adapted using a unique sediment classification to represent all types of sediments: from beaches to the deep seabed and from glacial deposits to tropical sediments. In order to allow good visualization and to suit the different applications, only the granularity of the sediments is represented. Published seabed maps are reviewed; if they are of interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large deep-ocean hydrographic surveys, which allow very high-quality mapping of areas that until then were represented as homogeneous. The third and principal source of data comes from the integration of regional maps produced specifically for this project. These regional maps are produced using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, with generalizations applied where the data are over-precise. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is ongoing and yields a digital version every two years, with the integration of new maps. This article describes the choices made in terms of sediment classification, the scale of the source data, and the zoning of variability in data quality. This map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and areal data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress made in seabed characterization during the last decades. Thus, the arrival of new classification systems for the seafloor has improved the recent seabed maps, and the compilation of these new maps with those previously published allows a gradual enrichment of the world sedimentary map. However, there is still much work to be done to improve some regions, which are still based on data acquired more than half a century ago.
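
The integration step described above relies on transcribing each source map's legend into the single granularity-based classification; the sketch below illustrates that remapping in Python with a purely hypothetical legend and class codes (Shom's actual nomenclature is not reproduced here).

```python
# Illustrative sketch of the harmonisation step (class names are hypothetical,
# not Shom's actual nomenclature): sediment classes from a source map are
# remapped onto a single granularity-only classification before integration.
import numpy as np

# Unified granularity classes (coarse -> fine), purely illustrative codes
GRANULARITY = {"rock": 0, "gravel": 1, "sand": 2, "silt": 3, "clay": 4}

# Mapping from one source map's legend to the unified classes
SOURCE_LEGEND = {
    "coral debris": "gravel",
    "shelly sand": "sand",
    "terrigenous mud": "silt",
    "pelagic clay": "clay",
}

def harmonise(source_grid):
    """Convert a 2D grid of source-map labels into unified granularity codes."""
    out = np.full(source_grid.shape, -1, dtype=int)   # -1 = unmapped / unknown
    for src_label, target in SOURCE_LEGEND.items():
        out[source_grid == src_label] = GRANULARITY[target]
    return out

source = np.array([["shelly sand", "terrigenous mud"],
                   ["coral debris", "pelagic clay"]], dtype=object)
print(harmonise(source))
```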

Keywords: marine sedimentology, seabed map, sediment classification, world ocean

Procedia PDF Downloads 232
17 Estimation of Soil Nutrient Content Using Google Earth and Pleiades Satellite Imagery for Small Farms

Authors: Lucas Barbosa Da Silva, Jun Okamoto Jr.

Abstract:

Precision agriculture has long benefited from aerial imagery of crop fields. This important tool has allowed patterns in crop fields to be identified, generating useful information for production management. Reflectance intensity data in different ranges of the electromagnetic spectrum may indicate the presence or absence of nutrients in the soil of an area, and relations between the different spectral bands may yield even more detailed information. Knowledge of the nutrient content in the soil, or in the crop during its growth, is a valuable asset to the farmer who seeks to optimize yield. However, small farmers in Brazil often lack the resources to access this kind of information, and, even when they do, it is not presented in a comprehensive and/or objective way. The challenges of implementing this technology therefore range from the sampling of the imagery using aerial platforms, building a mosaic of the images to cover the entire crop field, extracting the reflectance information and analyzing its relationship with the parameters of interest, to displaying the results in a manner that allows the farmer to make the necessary decisions more objectively. This work proposes an analysis of soil nutrient content based on image processing of satellite imagery, comparing its outputs with chemical analyses from a commercial laboratory. Sources of satellite imagery are also compared, to assess the feasibility of using Google Earth data in this application and the impacts of doing so, versus imagery from satellites such as Landsat-8 and Pleiades. Furthermore, an algorithm for building mosaics is implemented using Google Earth imagery, and the possibility of using unmanned aerial vehicles is analyzed. From the data obtained, several soil parameters are estimated, namely the content of potassium, phosphorus, boron, and manganese, among others. The suitability of Google Earth imagery for this application is verified within a reasonable margin when compared to Pleiades satellite imagery and to the current commercial model. It is also verified that the mosaic construction method has little or no influence on the estimation results. Variability maps are created over the covered area, and the impacts of the image resolution and sampling time frame are discussed, allowing easy assessment of the results. The final results show that simple and cheaper remote sensing and analysis methods are feasible alternatives for the small farmer, with little access to technological and/or financial resources, to make more accurate decisions about soil nutrient management.
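
As an illustration of the comparison between imagery and laboratory data, the sketch below computes a simple band ratio (NDVI) at a few hypothetical sampling points and regresses it against equally hypothetical laboratory potassium values; the band relations and regression models actually used in the study may differ.

```python
# Minimal sketch of the imagery-vs-laboratory comparison (values are hypothetical):
# a band ratio is computed at each soil sampling point and regressed against the
# laboratory-measured nutrient content to check whether the imagery can predict it.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical reflectance at the sampling points (0-1 scale)
red = np.array([0.21, 0.18, 0.25, 0.30, 0.16, 0.22])
nir = np.array([0.55, 0.60, 0.48, 0.42, 0.63, 0.52])
ndvi = (nir - red) / (nir + red)          # one of many possible band relations

# Hypothetical laboratory potassium content (cmolc/dm3) at the same points
k_lab = np.array([0.42, 0.48, 0.35, 0.28, 0.51, 0.40])

model = LinearRegression().fit(ndvi.reshape(-1, 1), k_lab)
k_pred = model.predict(ndvi.reshape(-1, 1))
print(f"R^2 between NDVI and lab potassium: {r2_score(k_lab, k_pred):.2f}")
```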

Keywords: remote sensing, precision agriculture, mosaic, soil, nutrient content, satellite imagery, aerial imagery

Procedia PDF Downloads 175
16 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

SPICE-based simulators are quite robust and widely used for the simulation of electronic circuits: their algorithms support linear and non-linear lumped components, and they can handle a large number of encapsulated elements. Despite the great potential of SPICE-based simulators for the analysis of quasi-static electromagnetic field interaction, that is, at low frequency, they are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually, the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is performed according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum time step accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrog scheme of the Yee algorithm; however, it is known that the stability of the complete FDTD procedure depends on factors other than the stability of the Yee algorithm, because an FDTD program needs additional algorithms in order to be useful in engineering problems. Examples of such algorithms are absorbing boundary conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors, and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-wide frequencies. The models of the resistive source, the resistor, the capacitor, the inductor, and the diode are evaluated, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through a parametric analysis of the size of the Yee cells that discretize the lumped components. In this way, the aim is to find an ideal cell size so that the analysis in the FDTD environment agrees as closely as possible with the expected circuit behavior while maintaining the stability conditions of the method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the computational implementation of the models is carried out in the Matlab® environment. The Mur boundary condition is used as the absorbing boundary of the FDTD method. The model is validated by comparing the electric field values and the currents in the components obtained with the FDTD method against analytical results based on circuit parameters.
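
The sketch below illustrates the two ingredients discussed above: the Courant limit for a uniform Yee grid and the semi-implicit update coefficients for a cell loaded with a lumped resistor, swept over a few assumed cell sizes. It follows the standard LE-FDTD resistor formulation from the literature and is not the authors' Matlab implementation.

```python
# Sketch of (1) the Courant limit for a uniform 3D Yee grid and (2) the
# semi-implicit LE-FDTD update coefficients for an Ez edge loaded with a lumped
# resistor R, swept over assumed Yee cell sizes (illustrative values only).
import numpy as np

C0   = 299_792_458.0      # m/s
EPS0 = 8.8541878128e-12   # F/m

def courant_dt(dx, dy, dz, safety=0.99):
    """Maximum stable time step for a uniform 3D Yee grid."""
    return safety / (C0 * np.sqrt(1/dx**2 + 1/dy**2 + 1/dz**2))

def resistor_coeffs(R, dx, dy, dz, dt, eps=EPS0):
    """Coefficients of E^{n+1} = ca*E^n + cb*(curl H) for a lumped resistor
    filling one Ez edge, with beta = dt*dz / (2*R*eps*dx*dy)."""
    beta = dt * dz / (2.0 * R * eps * dx * dy)
    ca = (1.0 - beta) / (1.0 + beta)
    cb = (dt / eps) / (1.0 + beta)
    return ca, cb

for cell in (0.5e-3, 1e-3, 2e-3):          # parametric sweep of the Yee cell size
    dt = courant_dt(cell, cell, cell)
    ca, cb = resistor_coeffs(R=50.0, dx=cell, dy=cell, dz=cell, dt=dt)
    print(f"cell={cell*1e3:.1f} mm  dt={dt*1e12:.2f} ps  ca={ca:.4f}  cb={cb:.3e}")
```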

Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis

Procedia PDF Downloads 153
15 The Mental Health Policy in the State of Espírito Santo, Brazil: Judicialization

Authors: Fabiola Xavier Leal, Lara Campanharo, Sueli Aparecida Rodrigues Lucas

Abstract:

The phenomenon of judicialization in health policy raises many questions, but in general it means that issues previously resolved by traditional political bodies are now being decided by the Judiciary. It is, therefore, a controversial topic that has generated much reflection in both the academic and political fields, considering that not only a dispute over public funds is at stake, but also the debate on access to the social rights provided for in the Brazilian Federal Constitution of 1988 and in various public policies, such as healthcare. With regard to this phenomenon in the Mental Health Policy focused on people who use drugs, the disputes that permeate this scenario are evident: moral, cultural, sanitary, economic, and psychological, alongside the individual and collective dimensions of suffering. In this process, one question arises: what is the role of the Brazilian State in this matter? Another question that needs to be answered concerns the amount spent on this procedure in the state of Espírito Santo (ES), Brazil, where, in the last four years, around R$121,978,591.44 was paid for compulsory hospitalizations alone, and its impact on the field in question, namely the financing of the services of the Psychosocial Care Network (RAPS). Therefore, this article aims to problematize the phenomenon of judicialization in Mental Health Policy through the compulsory hospitalization of people who use drugs in Espírito Santo (ES). The study sought to understand how this has been occurring and how it affects the provision of RAPS services in the Espírito Santo scenario. The general objective is to analyze the expenses with compulsory hospitalizations for drug use carried out by the State Health Department (SESA) between 2014 and 2019, seeking to identify their destination and the impact of these actions on public health policy. For the purposes of this article, we present the preliminary data of this study, such as the amounts spent by the state and the receiving institutions. For data collection, the following sources were used: documents publicly available on the Transparency Portal (payments made per year, receiving institutions, hospitalized subjects, periods, and the daily rates paid), as well as the processes generated by SESA through its own system, ONBASE. Content analysis was used for the qualitative analysis, and descriptive statistics were used for the quantitative analysis. Thus, we seek to problematize the issue of the judicialization of compulsory hospitalizations, considering the current situation in which this resource has been widely requested to legitimize the war on drugs. This scenario highlights a moral-legal discourse whose strategies rely on the control of bodies and on faith-based treatment as an alternative.
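
As a minimal illustration of the descriptive statistics mentioned above, the sketch below aggregates hypothetical payment records by receiving institution; the figures are invented and do not come from the Transparency Portal or SESA data.

```python
# Descriptive-statistics sketch on hypothetical payment records (illustrative
# values only; the real data come from the Transparency Portal and SESA/ONBASE).
import pandas as pd

payments = pd.DataFrame({
    "year":        [2014, 2015, 2016, 2017, 2018, 2019],
    "institution": ["Clinic A", "Clinic A", "Clinic B", "Clinic B", "Clinic C", "Clinic A"],
    "amount_brl":  [1.2e6, 2.5e6, 3.1e6, 4.0e6, 2.2e6, 1.8e6],
})

# Total, mean, and number of payments per receiving institution
print(payments.groupby("institution")["amount_brl"].agg(["sum", "mean", "count"]))
print("Total paid over the period:", payments["amount_brl"].sum())
```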

Keywords: compulsory hospitalization, drugs, judicialization, mental health

Procedia PDF Downloads 170
14 Port Miami in the Caribbean and Mesoamerica: Data, Spatial Networks and Trends

Authors: Richard Grant, Landolf Rhode-Barbarigos, Shouraseni Sen Roy, Lucas Brittan, Change Li, Aiden Rowe

Abstract:

Ports are critical for the US economy, connecting farmers, manufacturers, retailers, consumers, and an array of transport and storage operators. Port facilities vary widely in terms of their productivity, footprint, specializations, and governance. In this context, Port Miami is one of the busiest ports providing both cargo and cruise services, connecting the wider region of the Caribbean and Mesoamerica to global networks. It is known as the "Cruise Capital of the World and Global Gateway of the Americas" and the "leading container port in Florida," and it has also been ranked as one of the top container ports in the world and the second most efficient port in North America. Port Miami has made significant investments of about US$1 billion in strategic and capital infrastructure, including increasing the channel depth and other onshore infrastructure enhancements. This study therefore involves a detailed analysis of Port Miami's network, using multiple years of publicly available data on marine vessel traffic, cargo, and connectivity and performance indices from 2015 to 2021. Through the analysis of cargo and cruise vessels to and from Port Miami and its relative performance at the global scale over this period, the study examines the port's long-term resilience and future growth potential. The main results indicate that the top category for both inbound and outbound cargo is manufactured products and textiles; inbound cargo also includes large volumes of fresh fruits, vegetables, and produce, while outbound cargo includes processed food. Furthermore, the top ten port connections for Port Miami are all located in the Caribbean region, the Gulf of Mexico, and the Southeast USA. About half of the inbound cargo comes from Savannah, Saint Thomas, and Puerto Plata, while outbound cargo goes mainly to Puerto Corte, Freeport, and Kingston. For cruise vessels, a large number of vessels originate from Nassau, followed by Freeport. The number of passenger vessels pre-COVID was almost 1,000 per year, which dropped substantially in 2020 and 2021 to around 300 vessels. Finally, the resilience and competitiveness of Port Miami were assessed in terms of its network connectivity by examining the inbound and outbound maritime vessel traffic. The most frequent port connections for Port Miami were Freeport and Savannah, followed by Kingston, Nassau, and New Orleans. However, several of these ports, Puerto Corte, Veracruz, Puerto Plata, and Santo Thomas, have low resilience and are highly vulnerable, which needs to be taken into consideration for the long-term resilience of Port Miami in the future.
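
The connectivity analysis can be illustrated as a directed, weighted graph of port-to-port voyages; the sketch below uses networkx with invented vessel counts (not the study's data) to show how inbound and outbound traffic per port can be summarised.

```python
# Network sketch of the connectivity analysis (edge weights are hypothetical
# vessel counts, not the study's data): ports are nodes, voyages are directed edges.
import networkx as nx

G = nx.DiGraph()
edges = [  # (origin, destination, vessel_count) -- illustrative only
    ("Savannah", "Port Miami", 320),
    ("Saint Thomas", "Port Miami", 210),
    ("Puerto Plata", "Port Miami", 180),
    ("Port Miami", "Puerto Corte", 260),
    ("Port Miami", "Freeport", 240),
    ("Port Miami", "Kingston", 200),
    ("Nassau", "Port Miami", 400),
]
for origin, dest, count in edges:
    G.add_edge(origin, dest, weight=count)

# Weighted in/out degree = total inbound/outbound traffic per port
print(dict(G.in_degree(weight="weight")))
print(dict(G.out_degree(weight="weight")))
print("Top inbound partners of Port Miami:",
      sorted(G.pred["Port Miami"].items(), key=lambda kv: -kv[1]["weight"])[:3])
```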

Keywords: port, Miami, network, cargo, cruise

Procedia PDF Downloads 79
13 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs

Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.

Abstract:

Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods such as XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL become the universal tool for data classification? Current solutions consist in repositioning the variables in a 2D matrix using their correlation proximity, thereby obtaining a single image whose pixels are the variables. We implement a technology, DeepNIC, that offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, it is possible to perform an adjustment on a theoretically unlimited number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in three dimensions: performance, complexity, and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on two hyperparameters used in the Neurops; by varying these two hyperparameters, we obtain a 2D matrix of probabilities for each NIC. The 10 NICs can be combined with the functions AND, OR, and XOR, giving more than 100,000 combinations in total. We thus obtain for each variable an image of at least 1166x1167 pixels, in which the intensity of the pixels is proportional to the probability of the associated NIC and the color depends on the associated NIC. This image contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. The comparison still needs to be generalized to several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format, which opens up great perspectives in the analysis of metadata.
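
Conceptually, the final classification stage takes one grey-level probability image per variable and feeds it to a basic CNN; the sketch below (PyTorch, with placeholder image sizes and random data standing in for the NIC images, since the NIC computation itself is not reproduced here) shows only that last step.

```python
# Conceptual sketch only (the actual DeepNIC/NIC computation is not public here):
# each tabular variable becomes its own grey-level "probability image", and a
# basic CNN classifies each image. Shapes, data, and the CNN are placeholders.
import torch
import torch.nn as nn

n_variables, img_size = 12, 64
# Placeholder stack of per-variable probability images in [0, 1]
nic_images = torch.rand(n_variables, 1, img_size, img_size)

class BasicCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * (img_size // 4) ** 2, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = BasicCNN()
logits = model(nic_images)   # one prediction per variable image
print(logits.shape)          # -> torch.Size([12, 2])
```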

Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification

Procedia PDF Downloads 125
12 Three Dimensional Computational Fluid Dynamics Simulation of Wall Condensation inside Inclined Tubes

Authors: Amirhosein Moonesi Shabestary, Eckhard Krepper, Dirk Lucas

Abstract:

The current PhD project comprises CFD modeling and simulation of condensation and heat transfer inside horizontal pipes. Condensation plays an important role in emergency cooling systems of reactors. The emergency cooling system consists of inclined horizontal pipes immersed in a tank of subcooled water. In the case of an accident, the water level in the core decreases, steam enters the emergency pipes, and, due to the subcooled water around the pipes, this steam starts to condense. These horizontal pipes act as a strong heat sink, responsible for a quick depressurization of the reactor core when an accident happens. This project was defined in order to model all the processes occurring in the emergency cooling systems. The main focus of the project is on the detection of different flow morphologies such as annular flow, stratified flow, slug flow, and plug flow. The project is ongoing and was started one year ago in the Fluid Dynamics department of Helmholtz-Zentrum Dresden-Rossendorf (HZDR). At HZDR, mostly in cooperation with ANSYS, different models have been developed for modeling multiphase flows. The inhomogeneous MUSIG model considers the bubble size distribution and is used for modeling the small-scale dispersed gas phase. The AIAD (Algebraic Interfacial Area Density) model was developed for the detection of the local morphology and the corresponding switch between models. The most recent model, GENTOP, combines both concepts and is able to simulate co-existing large-scale (continuous) and small-scale (polydispersed) structures. All these models have been validated for adiabatic cases without phase change. Therefore, the starting point of the current PhD project is to use the available models and integrate phase transition and wall condensation models into them. In order to simplify the problem of condensation inside horizontal tubes, three steps have been defined. The first step is the investigation of condensation inside a horizontal tube considering only direct contact condensation (DCC) and neglecting wall condensation; the inlet of the pipe is therefore assumed to be annular flow, and the AIAD model is used to detect the interface. The second step is the extension of the model to also consider wall condensation, which is closer to reality: the inlet is pure steam and, due to the wall condensation, a liquid film forms near the wall, which leads to annular flow. The last step will be the modeling of the different morphologies occurring inside the tube during condensation using the GENTOP model, which allows the dispersed phase to be considered and simulated. Finally, the results of the simulations will be validated against experimental data, which will also be available at HZDR.
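
For orientation, the strength of the wall-condensation heat sink can be sanity-checked with a textbook Chato-type correlation for film condensation inside a horizontal tube; the sketch below uses illustrative saturated-water properties and an assumed tube diameter and wall subcooling, and is in no way the CFD closure developed in the project.

```python
# Order-of-magnitude sanity check (not the CFD closure used in the project):
# a Chato-type correlation for film condensation inside a horizontal tube,
# with illustrative steam/water properties near 1 bar and assumed geometry.
g = 9.81
rho_l, rho_v = 958.0, 0.60        # kg/m3, saturated water / steam (assumed)
k_l, mu_l, cp_l = 0.68, 2.8e-4, 4217.0
h_fg = 2.257e6                    # J/kg, latent heat of vaporization
D = 0.05                          # m, tube inner diameter (assumed)
dT = 20.0                         # K, wall subcooling (assumed)

h_fg_mod = h_fg + 0.375 * cp_l * dT
h = 0.555 * (g * rho_l * (rho_l - rho_v) * k_l**3 * h_fg_mod
             / (mu_l * dT * D)) ** 0.25
print(f"Estimated condensation heat transfer coefficient: {h:,.0f} W/m2K")
```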

Keywords: wall condensation, direct contact condensation, AIAD model, morphology detection

Procedia PDF Downloads 304
11 Design of an Ultra High Frequency Rectifier for Wireless Power Systems by Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Ícaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

There is dispersed energy in radio frequencies (RF) that can be reused to power electronic circuits such as sensors, actuators, and identification devices, among other systems, without wired connections or a battery supply. In this context, there are different types of energy harvesting systems, including rectennas, coil systems, graphene, and new materials. A second stage of an energy harvesting system is the rectification of the collected signal, which may be carried out, for example, by a combination of one or more Schottky diodes connected in series or in shunt. In the case of a rectenna-based system, for instance, the diode used must be able to receive low-power signals at ultra-high frequencies; therefore, low values of series resistance, junction capacitance, and barrier potential are required. Due to this low-power condition, voltage multiplier configurations such as voltage doublers or modified bridge converters are used. A low-pass filter (LPF) at the input, a DC output filter, and a resistive load are also commonly used in the rectifier design. Electronic circuit designs are commonly analyzed through simulation in the SPICE (Simulation Program with Integrated Circuit Emphasis) environment. Despite the remarkable potential of SPICE-based simulators for complex circuit modeling and the analysis of quasi-static electromagnetic field interaction, i.e., at low frequency, these simulators are limited and cannot properly model microwave hybrid circuits in which there are both lumped and distributed elements. This work therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-high frequencies, with application in rectifiers coupled to antennas, as in energy harvesting systems, that is, in rectennas. For this purpose, the FDTD (Finite-Difference Time-Domain) numerical method is applied, and SPICE computational tools are used for comparison. In the present work, the Ampere-Maxwell equation is first applied to the current density and electric field equations within the FDTD method, together with its circuit relation to the voltage drop across the modeled component, using the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) formulations proposed in the literature for the passive components and for the diode. Next, a rectifier is built with the essential requirements for operating rectenna energy harvesting systems, and the FDTD results are compared with experimental measurements.
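
Before going to the full-wave LE-FDTD model, the rectifier behaviour can be illustrated at circuit level; the sketch below simulates a simple half-wave Schottky rectifier charging an RC load in the time domain, with assumed diode and component values, and is not the FDTD approach proposed in the paper.

```python
# Time-domain sketch of a half-wave Schottky rectifier charging an RC load
# (illustrative component values; the paper models the diode inside LE-FDTD,
# not with this circuit-level loop). The diode equation is solved per step by bisection.
import numpy as np

Is, n, Vt = 1e-6, 1.05, 0.02585     # assumed Schottky diode parameters
Rs, RL, CL = 50.0, 2e3, 100e-12     # source resistance, load resistor, output capacitor
f, Vamp = 900e6, 1.0                # assumed 900 MHz UHF source, 1 V amplitude
dt, steps = 1 / (f * 200), 4000     # 200 time samples per RF period

def diode_current(v_drive, v_out):
    """Solve I*Rs + n*Vt*ln(I/Is + 1) = v_drive - v_out for the branch current I."""
    lo = -0.999 * Is                                  # near the reverse saturation limit
    hi = max(v_drive - v_out, 0.0) / Rs + 1e-12       # upper bracket for forward bias
    for _ in range(60):                               # bisection on the current
        mid = 0.5 * (lo + hi)
        vd = n * Vt * np.log(mid / Is + 1.0)          # diode voltage at this current
        if mid * Rs + vd > v_drive - v_out:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

v_out, trace = 0.0, []
for k in range(steps):
    v_src = Vamp * np.sin(2 * np.pi * f * k * dt)
    i_d = diode_current(v_src, v_out)
    v_out += dt * (i_d - v_out / RL) / CL             # charge balance on the output capacitor
    trace.append(v_out)

print(f"Rectified output after {steps} steps: {trace[-1]:.3f} V")
```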

Keywords: energy harvesting system, LE-FDTD, rectenna, rectifier, wireless power systems

Procedia PDF Downloads 131
10 Stability in Slopes Related to Expansive Soils

Authors: Ivelise M. Strozberg, Lucas O. Vale, Maria V. V. Morais

Abstract:

Expansive soils are characterized by significant volumetric variations, tending to increase in volume when water is added to their voids and to decrease in volume when this water is removed. The strength parameters (especially the friction angle, cohesion, and unit weight) of expansive and non-expansive soils from the same site differ, as found in laboratory tests. This research seeks to demonstrate that this variation directly affects the calculated factors of safety for slope stability. Expansibility due to specific clay minerals such as montmorillonites and vermiculites is the most common form of expansion of soils or rocks, causing swelling pressures. These pressures can become an aggravating problem in regions across the globe and, when not studied in advance, may pose high risks to a project, such as cracks, fissures, movements in structures, failure of retaining walls, and damage to wells, among others. The study provides results based on analyses carried out in the Slide 2018 software from the Rocscience group, a two-dimensional limit equilibrium slope stability program that calculates the factor of safety or probability of failure of surfaces composed of soils or rocks (or both, depending on the situation) through the simplified Bishop, Fellenius, and corrected Janbu methods. The research compares the factors of safety of a homogeneous earthfill dam approximately 35 meters high, with a 1.5:1 downstream slope and a 2:1 upstream slope, analysed for the operating and end-of-construction conditions. The water level is 32.73 m and the water table is drawn automatically by the Slide program using the finite element method for the operating condition, considering two hypotheses for the materials: the first with soils showing expansive characteristics and the second with non-expansive soils. For this purpose, soil samples were collected in the region of São Bento do Una, Pernambuco, Brazil, and taken to the soil mechanics laboratory for characterization and determination of the percentage of expansibility. Two types of soil were found in the area: one site with expansive soils (8%) and another with non-expansive soils. Based on the results, the analysis of the factors of safety indicated, for both the upstream and downstream slopes, that the highest values were obtained in the case without expansive materials, resulting, for one of the situations, in values of 1.353 (Fellenius), 1.295 (corrected Janbu), and 1.409 (simplified Bishop). There is a considerable drop in the safety factors when the soils are potentially expansive, resulting, for the same situation, in values of 0.859 (Fellenius), 0.809 (corrected Janbu), and 0.842 (simplified Bishop) for the higher expansibility (8%). This shows that expansibility is a determining factor in the loss of soil strength, governed by the cohesion and friction angle.
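
To illustrate how strongly the factor of safety depends on cohesion and friction angle, the sketch below applies the Fellenius (ordinary method of slices) formula to a hypothetical slip circle with two invented parameter sets; neither the slice geometry nor the values correspond to the dam or laboratory data of the study.

```python
# Minimal Fellenius (ordinary method of slices) sketch to illustrate how the
# factor of safety reacts to the strength parameters; the slice geometry and the
# two parameter sets below are hypothetical, not the dam or laboratory data.
import math

def fellenius_fos(slices, c, phi_deg):
    """slices: list of (weight_kN, base_angle_deg, base_length_m) per unit width."""
    phi = math.radians(phi_deg)
    resisting = driving = 0.0
    for W, alpha_deg, L in slices:
        a = math.radians(alpha_deg)
        resisting += c * L + W * math.cos(a) * math.tan(phi)   # shear resistance on the base
        driving += W * math.sin(a)                             # driving moment term
    return resisting / driving

# Hypothetical slip-circle discretisation (per metre of dam length)
slices = [(150, -5, 2.1), (320, 5, 2.0), (450, 15, 2.0),
          (430, 25, 2.1), (300, 35, 2.3), (140, 45, 2.7)]

print("Non-expansive soil :", round(fellenius_fos(slices, c=25.0, phi_deg=28.0), 3))
print("Expansive soil (8%):", round(fellenius_fos(slices, c=12.0, phi_deg=20.0), 3))
```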

Keywords: dam, slope, software, swelling soil

Procedia PDF Downloads 122