Search results for: Computational Fluid Dynamics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5553

213 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites

Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy

Abstract:

The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one locus per site): regions consisting of points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which effective O(n log n) algorithms exist for n segments. The reduction also includes preprocessing, constructing segments from the polygons' sides, and postprocessing, constructing each polygon's locus by merging the loci of its sides. This approach does not take into account two specific properties of the resulting segment sites. First, all these segments are connected pairwise at the vertices of the polygons. Second, on one side of each segment lies the interior of a polygon, and the polygon is obviously included in its own locus. Using these properties in the VD construction algorithm is a resource for reducing computation. The article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be exploited effectively. The solution is performed by reduction. Preprocessing constructs a set of sites from the vertices and edges of the polygons; each site is oriented so that the interior of the polygon lies to its left. The proposed algorithm constructs the VD for this set of oriented sites with the sweepline paradigm. Postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm compared to the general Fortune algorithm is achieved through the following fundamental solutions: 1. The algorithm constructs only those VD edges that lie outside the polygons; the concept of oriented sites avoids constructing VD edges located inside the polygons. 2. The event list in the sweepline algorithm has a special property: the majority of events are connected with "medium" polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time rather than in logarithmic time, as in the general Fortune algorithm. The proposed algorithm is fully implemented and has been tested on a large number of examples. Its high reliability and efficiency are also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, a full-scale implementation of this algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case: a set of sites formed by polygons.
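The orientation convention used in preprocessing (polygon interior to the left of every directed edge) amounts to forcing counter-clockwise vertex order, which the shoelace formula detects. A minimal Python sketch of that step, with function names that are ours rather than the paper's:

```python
def signed_area(poly):
    """Shoelace formula: positive iff the polygon is counter-clockwise."""
    n = len(poly)
    return 0.5 * sum(poly[i][0] * poly[(i + 1) % n][1]
                     - poly[(i + 1) % n][0] * poly[i][1]
                     for i in range(n))

def oriented_sites(poly):
    """Return vertex and directed-edge sites with the polygon interior
    on the LEFT of every edge (i.e. force counter-clockwise order)."""
    if signed_area(poly) < 0:
        poly = poly[::-1]  # flip clockwise input
    n = len(poly)
    edges = [(poly[i], poly[(i + 1) % n]) for i in range(n)]
    return list(poly), edges
```

The sweepline machinery itself (beach line, event queue, constant-time handling of "medium" vertex events) is beyond a short sketch, but every site fed to it carries this orientation.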

Keywords: voronoi diagram, sweepline, polygon sites, Fortune's algorithm, segment sites

Procedia PDF Downloads 169
212 Severe Post Operative Gas Gangrene of the Liver: Off-Label Treatment by Percutaneous Radiofrequency Ablation

Authors: Luciano Tarantino

Abstract:

Gas gangrene is a rare, severe infection with a very high mortality rate, caused by Clostridium species. The infection produces a localized, non-suppurative, gas-producing lesion from which harmful toxins impair the inflammatory response, causing vessel damage and multiple organ failure. Gas gangrene of the liver is very rare and develops suddenly, often as a complication of abdominal surgery or liver transplantation. The present paper deals with a case of gas gangrene of the liver that occurred after percutaneous MW ablation of hepatocellular carcinoma, resulting in progressive liver necrosis and multi-organ failure in spite of specific antibiotic administration. The patient was successfully treated with percutaneous radiofrequency ablation (RFA). Case report: Female, 76 years old, Child A class cirrhosis, treated with synchronous insertion of 3 MW antennae for a large HCC (5.5 cm) in segment VIII. 24 hours after treatment, the patient was asymptomatic and left the hospital. 2 days later, she complained of fever, weakness, abdominal swelling, and pain. Abdominal US detected a 2.3 cm gas-containing area, eccentric within the large (7 cm) ablated area. The patient was promptly hospitalized with a diagnosis of anaerobic liver abscess and started antibiotic therapy with imipenem/cilastatin + metronidazole + teicoplanin. On the fourth day, the patient was moved to the ICU because of dyspnea, congestive heart failure, atrial fibrillation, right pleural effusion, ascites, and renal failure. Blood tests demonstrated severe leukopenia and neutropenia, anemia, increased creatinine and blood nitrogen, high-level FDP, and high INR. Blood cultures were negative. At US, unenhanced CT, and CEUS, progressive enlargement of the infected liver lesion was observed. Percutaneous drainage was attempted, but only drops of non-suppurative brownish material could be obtained. Pleural and peritoneal drainages gave serosanguineous muddy fluid. The surgeon and the anesthesiologist excluded any indication for surgical resection because of the high perioperative mortality risk. Therefore, we asked for the informed consent of the patient and her relatives to treat the gangrenous liver lesion by percutaneous ablation. Under conscious sedation, percutaneous RFA of the gas gangrene was performed by double insertion of 3 cool-tip needles (Covidien Ltd, USA) into the infected area. The procedure was well tolerated by the patient. A dramatic improvement in the patient's condition was observed in the subsequent 24 hours and thereafter. Fever and dyspnea disappeared. Normalization of blood tests, including creatinine, was observed within 4 days. Heart performance improved. 10 days after the RFA, the patient left the hospital and was followed up weekly as an outpatient for 2 months and every two months thereafter. At 18 months of follow-up, the patient is well compensated (Child-Pugh class B7), without any peritoneal or pleural effusion and without any HCC recurrence at imaging (US every 3 months, CT every 6 months). Percutaneous RFA could be a valuable therapy for focal gas gangrene of the liver in patients not responding to antibiotics and when surgery and liver transplantation are not feasible. A fast and early indication is needed in case of rapid worsening of the patient's condition.

Keywords: liver tumor ablation, interventional ultrasound, liver infection, gas gangrene, radiofrequency ablation

Procedia PDF Downloads 70
211 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties

Authors: E. Salem

Abstract:

Combustion has long been used as a means of energy extraction. However, in recent years there has been a further increase in air pollution through pollutants such as nitrogen oxides, acids, etc. In order to address this problem, there is a need to reduce carbon and nitrogen oxides through lean burning, modified combustors, and fuel dilution. A numerical investigation has been carried out to assess the effectiveness of several reduced mechanisms, in terms of computational time and accuracy, for the combustion of hydrocarbon/air mixtures, pure or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate the 1D premixed flame based on the temperature and composition of the burned and unburned gas mixtures. Numerical calculations were carried out for several hydrocarbons by changing the equivalence ratios and adding small amounts of hydrogen into the fuel blends, then analyzing the flammability limit and the reduction in NOx and CO emissions, and comparing them to experimental data. By solving the conservation equations, several global reduced mechanisms (2-9-12) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube 40 cm in length and 2.5 cm in diameter. The mesh of the model included a properly refined quad mesh within the first 7 cm of the tube and around the walls. After developing a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were then compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and changing the equivalence ratios from lean to rich in the fuel blends; the effects on flame temperature, shape, and velocity and on the concentrations of radicals and emissions were observed. It was determined that the reduced mechanisms provided results within an acceptable range. The variation of the inlet velocity and the geometry of the tube led to an increase in temperature and CO2 emissions; the highest temperatures were obtained in lean conditions (0.5-0.9 equivalence ratio). Addition of hydrogen into the combustor fuel blends resulted in a reduction in CO and NOx emissions and an expansion of the flammability limit, under the condition of the same laminar flow, with varying equivalence ratio and hydrogen additions. The production of NO is reduced because the combustion happens in a leaner state, which helps in solving environmental problems.
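The equivalence-ratio sweeps described above rest on simple stoichiometry. As a minimal illustration (ours, not the paper's solver setup), the air requirement of a methane/hydrogen/propane blend at a given equivalence ratio can be computed from the O2 demand of each fuel:

```python
# Moles of O2 required per mole of fuel for complete combustion
# (CH4 + 2 O2 -> CO2 + 2 H2O;  H2 + 0.5 O2 -> H2O;  C3H8 + 5 O2 -> ...).
O2_DEMAND = {"CH4": 2.0, "H2": 0.5, "C3H8": 5.0}
O2_IN_AIR = 0.21  # mole fraction of O2 in air

def air_moles(fuel_moles, phi):
    """Moles of air needed to burn a fuel blend at equivalence ratio phi
    (phi < 1 lean, phi = 1 stoichiometric, phi > 1 rich)."""
    o2_stoich = sum(O2_DEMAND[s] * n for s, n in fuel_moles.items())
    return o2_stoich / (phi * O2_IN_AIR)
```

For example, air_moles({"CH4": 0.9, "H2": 0.1}, 0.7) gives the air needed per mole of a 90/10 methane/hydrogen blend at a lean condition; halving phi doubles the air supplied.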

Keywords: combustor, equivalence-ratio, hydrogenation, premixed flames

Procedia PDF Downloads 109
210 A Development of a Simulation Tool for Production Planning with Capacity-Booking at Specialty Store Retailer of Private Label Apparel Firms

Authors: Erika Yamaguchi, Sirawadee Arunyanrt, Shunichi Ohmori, Kazuho Yoshimoto

Abstract:

In this paper, we propose a simulation tool to support monthly production planning decisions that maximize the profit of specialty store retailer of private label apparel (SPA) firms. Most SPA firms are fabless and outsource production to their subcontractors' factories. Every month, SPA firms book production lines and manpower in the factories. The booking is conducted a few months in advance, based on the demand prediction and monthly production plan available at that time. However, the demand prediction is updated month by month, and the monthly production plan changes to meet the latest prediction. SPA firms then have to adjust the capacities initially booked, within a certain range, to suit the updated monthly production plan. This booking system is called "capacity-booking". Although precise monthly production planning is an important issue for SPA firms, many firms still plan production by empirical rules. In addition, it is also a challenge for SPA firms to match their products and factories while considering demand predictability and adjustment ability. In this paper, we propose a model that addresses these two issues. The objective is to maximize the total profit over a certain period, which is sales minus the costs of production, inventory, and capacity-booking penalties. To make a better monthly production plan at SPA firms, these points should be considered: demand predictability under random trends, the production plans of the months before and after the target month, and the adjustment range of the capacity-booking. To match products and factories for outsourcing, it is important to consider the seasonality, volume, and predictability of each product, and the production capability, size, and adjustment ability of each factory. SPA firms have to consider these constraints and place orders with several factories per product. We modeled these issues as a linear program. To validate the model, several computational experiments with an SPA firm are presented. We suppose four typical product groups: basic, seasonal (spring/summer), seasonal (fall/winter), and spot products. As a result of the experiments, a monthly production plan was obtained. In the plan, demand uncertainty from random trends is reduced by producing products of different types, and priority is given to producing high-margin products. In conclusion, we developed a simulation tool for monthly production planning decisions, which is useful when the production plan is revised every month. We considered the features of capacity-booking and the matching of products and factories with different features and conditions.
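The objective, sales minus production, inventory, and capacity-booking penalty costs, can be illustrated for a single month and a single product by searching over the allowed adjustment band around the initial booking. This is a hedged sketch with invented parameter names, not the authors' linear-programming formulation:

```python
def plan_month(demand, booked, band, price, c_prod, c_inv, c_pen, inv0=0):
    """Pick the production quantity within the booked-capacity band that
    maximizes: sales - production cost - inventory cost - booking penalty.

    band: fraction by which the initial booking may be adjusted (e.g. 0.2
          allows 80%-120% of the booked capacity).
    """
    lo, hi = int(booked * (1 - band)), int(booked * (1 + band))
    best_q, best_profit = lo, float("-inf")
    for q in range(lo, hi + 1):
        sold = min(inv0 + q, demand)          # cannot sell more than demand
        leftover = inv0 + q - sold            # carried as inventory
        profit = (price * sold - c_prod * q
                  - c_inv * leftover - c_pen * abs(q - booked))
        if profit > best_profit:
            best_q, best_profit = q, profit
    return best_q, best_profit
```

The real model optimizes many products, factories, and months jointly as one linear program, but the trade-off each term encodes is the same as in this one-month toy.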

Keywords: capacity-booking, SPA, monthly production planning, linear programming

Procedia PDF Downloads 512
209 Radar Cross Section Modelling of Lossy Dielectrics

Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit

Abstract:

The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low-observability technology development, drone detection and monitoring, and coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar-absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement in-field measurements, as simulation is more cost-effective and a larger variety of targets can be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study extends previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets against measured data. The paper provides measured RCS data for a number of canonical dielectric targets exhibiting different material properties. As stated previously, these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique- and normal-incidence scattering predictions to material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated with measured data. A few dielectrics exhibiting different material properties were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth angle sweep. This study also investigated the effect of slight variations in the material properties on the calculated RCS results by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as with the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency- and angle-dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets are presented, and the validation thereof is discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results is shown, and the importance of accurate dielectric material properties for validation purposes is discussed.
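For orientation on the magnitudes involved, the textbook physical-optics result for a flat plate at normal incidence is sigma = 4*pi*A^2/lambda^2 for a perfectly conducting plate of area A; a lossy dielectric plate scatters less, roughly scaled by its reflection coefficient. A small sketch of that formula (not the paper's solvers):

```python
import math

C0 = 299_792_458.0  # speed of light, m/s

def plate_rcs_normal(a, b, freq_hz):
    """Normal-incidence physical-optics RCS of an a-by-b flat PEC plate:
    sigma = 4 * pi * (a*b)**2 / lambda**2, in square metres."""
    lam = C0 / freq_hz
    return 4.0 * math.pi * (a * b) ** 2 / lam ** 2

def to_dbsm(sigma_m2):
    """Convert an RCS in m^2 to dBsm."""
    return 10.0 * math.log10(sigma_m2)
```

A 10 cm x 10 cm plate at 10 GHz already has an RCS above 1 m^2 at broadside, which is why small tolerance changes in material properties can shift the comparison curves noticeably.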

Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation

Procedia PDF Downloads 228
208 Classical Music Unplugged: The Future of Classical Music Performance: Tradition, Technology, and Audience Engagement

Authors: Orit Wolf

Abstract:

Classical music performance is undergoing a profound transformation, marked by a confluence of technological advancements and evolving cultural dynamics. This paper explores the multifaceted changes and challenges faced by classical music performance, considering the impact of artificial intelligence (AI) along with other vital factors shaping this evolution. In the contemporary era, classical music is experiencing shifts in performance practices. This paper delves into these changes, emphasizing the need for adaptability within the classical music world. From repertoire selection and concert formats to artistic expression, performers and institutions navigate a delicate balance between tradition and innovation. We explore how these changes affect the authenticity and vitality of classical music performances. Furthermore, the influence of AI on the classical concert world should not be underestimated. AI technologies are making inroads into various aspects, from composition assistance to rehearsal and live performance. This paper examines the transformative effects of AI, considering how it enhances precision, adaptability, and creative exploration for musicians. We explore the implications for composers, performers, and the overall concert experience while addressing ethical concerns and creative opportunities. Beyond AI, cross-genre interactions within the classical music sphere are increasingly important. Mash-ups and collaborations with artists from diverse musical backgrounds are redefining the boundaries of classical music and creating works that resonate with a wider and more diverse audience. Such cross-pollination offers listeners a fresh perspective. As an active concert artist, Orit Wolf will share how the expectations of classical music audiences are evolving. Modern concertgoers seek not only exceptional musical performances but also immersive experiences that may involve technology, multimedia, and interactive elements. This paper examines how classical musicians and institutions are adapting to these changing expectations, using technology and innovative concert formats to deliver a unique and enriched experience to their audiences. As these changes and challenges reshape the classical music world, the need for a harmonious coexistence of tradition, technology, and innovation becomes evident. Musicians, composers, and institutions strive to find a balance that keeps classical music relevant in a rapidly changing cultural landscape while maintaining the value it brings to compositions and audiences. This paper therefore explores the evolving trends in classical music performance. It considers the influence of AI as one element within the broader context of change, highlighting the necessity of adaptability, cross-genre interactions, and a response to evolving audience expectations. In this way, the classical music world can navigate this transformative period while preserving its timeless traditions and adding value for both performers and listeners. Orit Wolf, an international concert pianist, pursues her vision of bringing this music to mass audiences in new ways and will share her personal and professional experience as an artist who takes the stage with disruptive concert formats.

Keywords: cross-culture collaboration, music performance and AI, classical music in the digital age, classical concerts, innovation and technology, performance innovation, audience engagement in classical concerts

Procedia PDF Downloads 51
207 Effectiveness of Simulation Resuscitation Training to Improve Self-Efficacy of Physicians and Nurses at Aga Khan University Hospital in Advanced Cardiac Life Support Courses Quasi-Experimental Study Design

Authors: Salima R. Rajwani, Tazeen Ali, Rubina Barolia, Yasmin Parpio, Nasreen Alwani, Salima B. Virani

Abstract:

Introduction: Nurses and physicians have a critical role in initiating lifesaving interventions during cardiac arrest. Timely delivery of high-quality cardiopulmonary resuscitation (CPR), together with advanced resuscitation skills and management of cardiac arrhythmias, is a key dimension of a code during cardiac arrest. The chances of patient survival decrease if healthcare professionals are unable to initiate CPR in a timely manner. Moreover, traditional training does not prepare physicians and nurses to a competent level, and their knowledge declines over time. In this regard, simulation training has been proven effective in promoting resuscitation skills. Simulation as a teaching-learning strategy improves knowledge and skill performance during resuscitation through experiential learning, without compromising patient safety in real clinical situations. The purpose of the study is to evaluate the effectiveness of simulation training in advanced cardiac life support courses by using a self-efficacy tool. Methods: This is a quantitative, non-randomized quasi-experimental study. It examined the effectiveness of simulation through self-efficacy under two instructional methods: medium-fidelity simulation (MFS) and the traditional training method (TTM). The planned sample size was 220. Data were analyzed using SPSS. Standardized simulation-based training is expected to increase self-efficacy, knowledge, and skills and to improve the management of patients in actual resuscitation. Results: 153 students participated in the study (CG: n = 77; EG: n = 77). The arms were compared in the pre- and post-test (F = 1.69, p = 0.195, df = 1); there was no significant difference between arms. The interaction between arms was also examined, and no significant interaction was found in the pre- and post-test (F = 0.298, p = 0.586, df = 1). However, post-test self-efficacy scores in the advanced cardiac life support resuscitation course were significantly higher in the experimental group than with the TTM (p < 0.0001, F = 143.316): experimental group post-test mean 45.01 (SD 9.29) versus pre-test mean 31.15 (SD 12.76), compared with the TTM post-test mean 29.68 (SD 14.12) versus pre-test mean 42.33 (SD 11.39). Conclusion: The standardized simulation-based training was conducted in a safe learning environment in advanced cardiac life support courses. Physicians and nurses benefited in terms of self-confidence, early identification of life-threatening scenarios, early initiation and delivery of high-quality CPR, timely administration of medication and defibrillation, appropriate airway management, rhythm analysis and interpretation, achieving return of spontaneous circulation (ROSC), team dynamics, debriefing, and teaching and learning strategies that will improve patient survival in actual resuscitation.

Keywords: advanced cardiac life support, cardiopulmonary resuscitation, return of spontaneous circulation, simulation

Procedia PDF Downloads 72
206 Modeling of Anode Catalyst against CO in Fuel Cell Using Material Informatics

Authors: M. Khorshed Alam, H. Takaba

Abstract:

The catalytic properties of a metal usually change upon intermixing with another metal in polymer electrolyte fuel cells. Pt-Ru alloy is one of the most widely studied alloys for enhancing CO oxidation. In this work, we investigated the CO coverage on Pt2Ru3 nanoparticles with different atomic configurations of Pt and Ru using a combination of materials informatics and computational chemistry. Density functional theory (DFT) calculations were used to describe the adsorption strength of CO and H for different Pt:Ru ratios on the Pt2Ru3 slab surface. Then, through Monte Carlo (MC) simulations, we examined the segregation behaviour of Pt as a function of the surface atom ratio, subsurface atom ratio, and particle size of the Pt2Ru3 nanoparticle. We constructed a regression equation that reproduces the results of DFT from structural descriptors alone. The descriptors selected for the regression equation are as follows: x(a-b) indicates the number of bonds between a targeted atom a and a neighboring atom b in the same layer (a, b = Pt or Ru); x(a-H2) and x(a-CO) represent the number of atoms a binding H2 and CO molecules, respectively; x(a-S) is the number of atoms a on the surface; x(a-b') is the number of bonds between atom a and a neighboring atom b located outside the layer. Surface segregation in alloy nanoparticles is influenced by their component elements, composition, crystal lattice, shape, size, and the nature, pressure, and temperature of the adsorbates. Simulations were performed on nanoparticles of different sizes (2.0 nm, 3.0 nm) mixing Pt and Ru atoms in different configurations, at a temperature of 333 K. In addition to the Pt2Ru3 alloy, we also considered pure Pt and Ru nanoparticles for comparison of surface coverage by adsorbates (H2, CO). We assumed that the pure and Pt-Ru alloy nanoparticles have an fcc crystal structure and a cubo-octahedral shape bounded by (111) and (100) facets. Simulations were performed for up to 50 million MC steps. The MC results show that, in the presence of gases (H2, CO), the surfaces are occupied by the gas molecules; in the equilibrium structure, the coverage of H and CO is a function of the nature of the surface atoms. In the initial structure, the Pt/Ru ratios on the surfaces for different cluster sizes were in the range of 0.50-0.95. MC simulation was employed with partial pressures of H2 (PH2) of 70 kPa and CO (PCO) of 100-500 ppm. The Pt/Ru ratio decreases as the CO concentration increases, with few exceptions, observed only for the small nanoparticle. The adsorption strength of CO on the Ru site is higher than on the Pt site, which would be one reason for the decreasing Pt/Ru ratio on the surface. Therefore, our study identifies that the nanoparticle size, composition, configuration of the alloying atoms, and the concentration and chemical potential of the adsorbates have an impact on the stability of nanoparticle alloys and, ultimately, on the overall catalytic performance during operation.
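The segregation mechanism described, CO binding Ru more strongly and pulling it toward the surface, can be caricatured with a toy two-layer Metropolis simulation. The energies, sizes, and initial composition below are illustrative placeholders, not the paper's DFT-derived values:

```python
import math
import random

def metropolis_segregation(n_sites=200, steps=20000, e_seg=-0.2,
                           e_co=-0.6, kT=0.0287, seed=0):
    """Toy two-layer Metropolis model of Pt/Ru surface segregation.

    e_seg: energy (eV) of placing Pt on the surface (negative: in vacuum
           Pt prefers the surface) -- illustrative value.
    e_co:  energy (eV) gained when a surface Ru binds CO; CO binds Ru
           more strongly than Pt, pulling Ru outward -- illustrative.
    kT:    0.0287 eV corresponds to roughly 333 K.
    Returns the final Ru fraction on the surface layer.
    """
    rng = random.Random(seed)
    surface = ["Pt" if rng.random() < 0.4 else "Ru" for _ in range(n_sites)]
    bulk = ["Pt" if rng.random() < 0.4 else "Ru" for _ in range(n_sites)]
    for _ in range(steps):
        i, j = rng.randrange(n_sites), rng.randrange(n_sites)
        if surface[i] == bulk[j]:
            continue  # swapping identical species changes nothing
        if bulk[j] == "Pt":           # Pt moves up, a CO-bound Ru moves down
            dE = e_seg - e_co
        else:                         # Ru moves up, Pt moves down
            dE = e_co - e_seg
        if dE <= 0 or rng.random() < math.exp(-dE / kT):
            surface[i], bulk[j] = bulk[j], surface[i]
    return surface.count("Ru") / n_sites
```

When the CO term dominates, the surface Ru fraction climbs toward saturation, qualitatively matching the abstract's observation that the surface Pt/Ru ratio falls as the CO concentration rises.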

Keywords: anode catalysts, fuel cells, material informatics, Monte Carlo

Procedia PDF Downloads 183
205 Quantitative Analysis of Contract Variations Impact on Infrastructure Project Performance

Authors: Soheila Sadeghi

Abstract:

Infrastructure projects often encounter contract variations that can significantly deviate from the original tender estimates, leading to cost overruns, schedule delays, and financial implications. This research aims to quantitatively assess the impact of changes in contract variations on project performance by conducting an in-depth analysis of a comprehensive dataset from the Regional Airport Car Park project. The dataset includes tender budget, contract quantities, rates, claims, and revenue data, providing a unique opportunity to investigate the effects of variations on project outcomes. The study focuses on 21 specific variations identified in the dataset, which represent changes or additions to the project scope. The research methodology involves establishing a baseline for the project's planned cost and scope by examining the tender budget and contract quantities. Each variation is then analyzed in detail, comparing the actual quantities and rates against the tender estimates to determine their impact on project cost and schedule. The claims data is utilized to track the progress of work and identify deviations from the planned schedule. The study employs statistical analysis using R to examine the dataset, including tender budget, contract quantities, rates, claims, and revenue data. Time series analysis is applied to the claims data to track progress and detect variations from the planned schedule. Regression analysis is utilized to investigate the relationship between variations and project performance indicators, such as cost overruns and schedule delays. The research findings highlight the significance of effective variation management in construction projects. The analysis reveals that variations can have a substantial impact on project cost, schedule, and financial outcomes. 
The study identifies specific variations that had the most significant influence on the Regional Airport Car Park project's performance, such as PV03 (additional fill, road base gravel, spray seal, and asphalt), PV06 (extension to the commercial car park), and PV07 (additional box out and general fill). These variations contributed to increased costs, schedule delays, and changes in the project's revenue profile. The study also examines the effectiveness of project management practices in managing variations and mitigating their impact. The research suggests that proactive risk management, thorough scope definition, and effective communication among project stakeholders can help minimize the negative consequences of variations. The findings emphasize the importance of establishing clear procedures for identifying, assessing, and managing variations throughout the project lifecycle. The outcomes of this research contribute to the body of knowledge in construction project management by demonstrating the value of analyzing tender, contract, claims, and revenue data in variation impact assessment. However, the research acknowledges the limitations imposed by the dataset, particularly the absence of detailed contract and tender documents. This constraint restricts the depth of analysis possible in investigating the root causes and full extent of variations' impact on the project. Future research could build upon this study by incorporating more comprehensive data sources to further explore the dynamics of variations in construction projects.
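The regression step relating variations to cost overruns can be illustrated with a minimal ordinary-least-squares fit. The study itself used R on real project data; the sketch below is plain Python with invented numbers:

```python
def ols(x, y):
    """Ordinary least squares for y ~ a + b*x (one predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx          # slope: overrun per unit of variation value
    return my - b * mx, b  # (intercept, slope)

# Hypothetical illustration: variation value ($k) vs. cost overrun ($k).
variation_value = [10, 25, 40, 80, 120]
overrun = [4, 11, 18, 33, 50]
intercept, slope = ols(variation_value, overrun)
```

A positive, well-determined slope would support the finding that larger variations (such as PV03, PV06, and PV07 in the case project) drive larger overruns; the real analysis adds time-series treatment of the claims data on top of this.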

Keywords: contract variation impact, quantitative analysis, project performance, claims analysis

Procedia PDF Downloads 22
204 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems

Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue

Abstract:

The technology advances have most lately made the millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum that is available in MmWave frequency bands, this promising candidate is considered as a key technology for the deployment of 5G cellular networks. In order to enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed at mmWave systems by exploiting array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation. Therefore, new features are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems for which only digital precoders are essential to accomplish precoding, MIMO technology seems to be different at mmWave because of digital precoding limitations. Moreover, precoding implements a greater number of radio frequency (RF) chains supporting more signal mixers and analog-to-digital converters. As RF chain cost and power consumption is increasing, we need to resort to another alternative. Although the hybrid precoding architecture has been regarded as the best solution based on a combination between a baseband precoder and an RF precoder, we still do not get the optimal design of hybrid precoders. According to the mapping strategies from RF chains to the different antenna elements, there are two main categories of hybrid precoding architecture. Given as a hybrid precoding sub-array architecture, the partially-connected structure reduces hardware complexity by using a less number of phase shifters, whereas it sacrifices some beamforming gain. In this paper, we treat the hybrid precoder design in mmWave MIMO systems as a problem of matrix factorization. Thus, we adopt the alternating minimization principle in order to solve the design problem. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method. 
Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms and significantly reduces computational complexity. Furthermore, valuable design insights are provided when the proposed algorithm is used to compare the partially-connected and fully-connected hybrid precoding structures in simulation.
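As a rough illustration of the matrix-factorization view of hybrid precoding (a generic alternating-minimization sketch, not the authors' iterative hard thresholding algorithm), the code below alternates between a least-squares baseband update and a phase-only analog update on the block-diagonal support of a partially-connected array. The array sizes and the plain alternating scheme are illustrative assumptions.

```python
import numpy as np

# Sketch: alternating minimization for a partially-connected hybrid precoder.
rng = np.random.default_rng(0)
Nt, Nrf, Ns = 16, 4, 2                     # antennas, RF chains, data streams
Fopt = rng.standard_normal((Nt, Ns)) + 1j * rng.standard_normal((Nt, Ns))
Fopt /= np.linalg.norm(Fopt)               # target fully digital precoder

# Each RF chain drives a disjoint sub-array of M antennas (block-diagonal F_rf).
M = Nt // Nrf
support = np.zeros((Nt, Nrf), dtype=bool)
for k in range(Nrf):
    support[k * M:(k + 1) * M, k] = True

F_rf = np.where(support, np.exp(1j * rng.uniform(0, 2 * np.pi, (Nt, Nrf))), 0)

for _ in range(50):
    # Baseband step: least-squares fit with the analog precoder fixed.
    F_bb = np.linalg.lstsq(F_rf, Fopt, rcond=None)[0]
    # Analog step: per-antenna optimal phase on the allowed support
    # (exact here because each row of F_rf has a single nonzero entry).
    G = Fopt @ F_bb.conj().T
    F_rf = np.where(support, np.exp(1j * np.angle(G)), 0)

residual = np.linalg.norm(Fopt - F_rf @ F_bb)
```

Both sub-steps decrease the Frobenius-norm objective monotonically, which is what makes the alternating scheme attractive before replacing the analog step with a more sophisticated update such as iterative hard thresholding.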

Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure

Procedia PDF Downloads 311
203 Against the Philosophical-Scientific Racial Project of Biologizing Race

Authors: Anthony F. Peressini

Abstract:

The concept of race has recently come prominently back into discussion in the context of medicine and medical science, along with renewed efforts to biologize racial concepts. This paper argues that these renewed efforts to biologize race by way of medicine and population genetics fail on their own terms, and, more importantly, that the philosophical project of biologizing race ought to be recognized for what it is—a retrograde racial project—and abandoned. There is clear agreement that standard racial categories and concepts cannot be grounded in the old way of racial naturalism, which understood race as a real, interest-independent biological/metaphysical category whose members share “physical, moral, intellectual, and cultural characteristics.” Equally clear, however, is the very real and pervasive presence of racial concepts in individual and collective consciousness and behavior, so race remains a pressing area in which to seek deeper understanding. Recent philosophical work has endeavored to reconcile these two observations by developing a “thin” conception of race, grounded in scientific concepts but without the moral and metaphysical content. Such “thin,” science-based analyses take the “commonsense” or “folk” sense of race as it functions in contemporary society as the starting point for their philosophic-scientific projects to biologize racial concepts. A “philosophic-scientific analysis” is a special case of the cornerstone of analytic philosophy, conceptual analysis: the rendering of a concept into the more perspicuous concepts that constitute it. A philosophic-scientific account of a concept is thus an attempt to work out an analysis that uses empirical science's insights to ground, legitimate, and explicate the target concept in terms of clearer concepts informed by empirical results.
The focus of this paper is on three recent philosophic-scientific cases for retaining “race” that all share this general analytic schema but that appeal, respectively, to “medical necessity,” population genetics, and human genetic clustering. After arguing that each of these three approaches suffers from internal difficulties, the paper considers the general analytic schema employed by such biologizations of race. While such endeavors are inevitably prefaced with the disclaimer that the theory to follow is non-essentialist and non-racialist, the case will be made that they are not neutral scientific or philosophical projects but rather what sociologists call a racial project: one of many competing efforts that conjoin a representation of what race means to specific efforts to determine social and institutional arrangements of power, resources, authority, etc. Accordingly, philosophic-scientific biologizations of race, since they begin from and condition their analyses on “folk” conceptions, cannot pretend to be “prior to” other disciplinary insights, nor to transcend the social-political dynamics involved in formulating theories of race. As a result, such traditional philosophical efforts can be seen to be disciplinarily parochial and to address only a caricature of a large and important human problem, thereby further contributing to the unfortunate isolation of philosophical thinking about race from other disciplines.

Keywords: population genetics, ontology of race, race-based medicine, racial formation theory, racial projects, racism, social construction

Procedia PDF Downloads 260
202 Designing Metal Organic Frameworks for Sustainable CO₂ Utilization

Authors: Matthew E. Potter, Daniel J. Stewart, Lindsay M. Armstrong, Pier J. A. Sazio, Robert R. Raja

Abstract:

Rising CO₂ levels in the atmosphere make CO₂ a highly desirable feedstock. Exploiting it requires catalysts specifically designed to activate this inert molecule, combining a catalytic site tailored for CO₂ transformations with a support that readily adsorbs CO₂. Metal organic frameworks (MOFs) are regularly used as CO₂ sorbents. The organic nature of the linker molecules connecting the metal nodes permits many post-synthesis modifications that introduce catalytic active sites into the frameworks. In addition, the metal nodes may be coordinatively unsaturated, allowing them to bind organic moieties. Imidazoles have shown promise in catalyzing the formation of cyclic carbonates from epoxides and CO₂. Conventionally, cyclic carbonate synthesis employs toxic reagents such as phosgene and liberates HCl, so an alternative route using CO₂ is highly appealing. In this work we design active sites for CO₂ activation by tethering substituted-imidazole organocatalytic species to the available Cr³⁺ metal nodes of a Cr-MIL-101 MOF, for the first time, creating a species tailored for carbon capture and utilization applications. Our design strategy, combining a CO₂ sorbent (Cr-MIL-101) with an anchored imidazole, results in a highly active and selective multifunctional catalyst, achieving turnover frequencies of over 750 hr⁻¹. These findings demonstrate the synergy between the MOF framework and imidazoles for CO₂ utilization. Further, the effect of substrate variation has been explored, yielding mechanistic insights into the process. Through characterization, we show that the structural and compositional integrity of Cr-MIL-101 is preserved on functionalization with the imidazoles, and that the imidazoles bind to the Cr³⁺ metal nodes. Our EPR study shows distortion of the Cr³⁺ on binding to the imidazole, indicating that the CO₂ binding site is close to the active imidazole.
This proximity has a synergistic effect, improving catalytic performance. We believe the combination of MOF support and organocatalyst opens many possibilities for generating new multifunctional catalysts for CO₂ utilization. In conclusion, we have validated our design procedure, combining a known CO₂ sorbent with an active imidazole species to create a unique tailored multifunctional catalyst for CO₂ utilization. This species achieves high activity and selectivity for the formation of cyclic carbonates and offers a sustainable alternative to traditional synthesis methods. The work represents a unique design strategy for CO₂ utilization while offering exciting possibilities for further characterization, computational modelling, and post-synthesis modification.

Keywords: carbonate, catalysis, MOF, utilisation

Procedia PDF Downloads 170
201 Folding of β-Structures via the Polarized Structure-Specific Backbone Charge (PSBC) Model

Authors: Yew Mun Yip, Dawei Zhang

Abstract:

Proteins are the biological machinery that executes specific vital functions in every cell of the human body by folding into their 3D structures. When a protein misfolds from its native structure, the machinery malfunctions, leading to misfolding diseases. Although in vitro experiments can establish that mutations of the amino acid sequence lead to incorrectly folded protein structures, they cannot decipher the folding process itself. Molecular dynamics (MD) simulations are therefore employed to simulate the folding process, so that an improved understanding of folding will enable us to contemplate better treatments for misfolding diseases. MD simulations use force fields to simulate the folding of peptides. Secondary structures are formed via hydrogen bonds between the backbone atoms (C, O, N, H), so the hydrogen bond energy computed during the simulation must be accurate in order to direct the folding process toward the native structure. Since the atoms involved in a hydrogen bond have very dissimilar electronegativities, the more electronegative atom attracts electron density from the less electronegative atom towards itself. This is known as the polarization effect. Because polarization changes the electron density of the two atoms in close proximity, their atomic charges should also vary with the strength of the effect. However, the fixed atomic charge scheme in standard force fields does not account for polarization. In this study, we introduce the polarized structure-specific backbone charge (PSBC) model. The PSBC model accounts for the polarization effect in MD simulation by updating the atomic charges of the backbone hydrogen-bond atoms according to equations, derived from quantum-mechanical calculations, that relate the amount of charge transferred to an atom to the length of the hydrogen bond.
Compared to other polarizable models, the PSBC model does not require quantum-mechanical calculations of the simulated peptide at every time-step, yet maintains a dynamic update of atomic charges, thereby reducing computational cost and time while still accounting for polarization dynamically. The PSBC model is applied to two different β-peptides: the Beta3s/GS peptide, a de novo designed three-stranded β-sheet whose folded structure has been studied in vitro by NMR, and the trpzip peptides, double-stranded β-sheets for which a correlation is found between the type of amino acids constituting the β-turn and the β-propensity.
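The charge-update idea can be caricatured as follows. The linear form, slope, and reference length below are placeholders chosen for illustration, not the fitted quantum-mechanical equations of the PSBC model; the one property the sketch does preserve is that a shorter hydrogen bond transfers more charge while total charge is conserved.

```python
def psbc_charge_shift(r_hb, slope=-0.1, r_ref=2.0):
    """Charge transferred across a backbone H-bond of length r_hb (Angstrom).
    Hypothetical linear law: the real PSBC equations are fitted to
    quantum-mechanical data; slope and r_ref here are placeholders."""
    return slope * (r_hb - r_ref)

# Fixed force-field charges for the donor H and acceptor O (illustrative).
q_H, q_O = 0.31, -0.51
r = 1.8                                   # current H...O distance
dq = psbc_charge_shift(r)                 # shorter bond -> larger transfer
q_H_pol, q_O_pol = q_H + dq, q_O - dq     # update while conserving charge
```

In an MD loop this update would be applied to the hydrogen-bonded backbone atoms at each (or every few) time steps, which is how the model stays dynamic without per-step quantum calculations.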

Keywords: hydrogen bond, polarization effect, protein folding, PSBC

Procedia PDF Downloads 257
200 An Acyclic Zincgermylene: Rapid H₂ Activation

Authors: Martin Juckel

Abstract:

Probably no other field of inorganic chemistry has undergone such rapid development in the past two decades as the low oxidation state chemistry of main group elements. This rapid development has only been possible through the development of new bulky ligands; in our research group, super-bulky monodentate amido ligands and β-diketiminate ligands have been used to great success. We first synthesized the unprecedented magnesium(I) dimer [ᴹᵉˢNacnacMg]₂ (ᴹᵉˢNacnac = [(ᴹᵉˢNCMe)₂CH]⁻; Mes = mesityl), which has since been used both as a reducing agent and for the synthesis of new metal-magnesium bonds. In the case of the zinc bromide precursor [L*ZnBr] (L* = N(Ar*)(SiPri₃); Ar* = C₆H₂{C(H)Ph₂}₂Me-2,6,4), reduction with [ᴹᵉˢNacnacMg]₂ led to such a metal-magnesium bond. The resulting [L*ZnMg(ᴹᵉˢNacnac)] compound can be seen as an ‘inorganic Grignard reagent’, which can transfer the metal fragment onto other functional groups or other metal centers, just like a conventional Grignard reagent. By simple addition of (TBoN)GeCl (TBoN = N(SiMe₃){B(DipNCH)₂}) to this compound, we were able to transfer the amido-zinc fragment to the Ge center of the germylene starting material and synthesize the first example of a germanium(II)-zinc bond: [:Ge(TBoN)(ZnL*)]. While such reactions typically lead to complex product mixtures, [:Ge(TBoN)(ZnL*)] could be isolated as dark blue crystals in good yield. This new compound shows interesting reactivity towards small molecules, especially dihydrogen gas. This is of special interest because dihydrogen is one of the more difficult small molecules to activate, owing to its strong (BDE = 108 kcal/mol) and non-polar bond. In this context, the interaction of the H₂ σ-bond with the tetrylene p-orbital (LUMO), with concomitant donation of the tetrylene lone pair (HOMO) into the H₂ σ* orbital, is responsible for the activation of dihydrogen gas.
Accordingly, the narrower the HOMO-LUMO gap of a tetrylene, the more reactive towards H₂ it typically is. A narrow HOMO-LUMO gap was achieved by introducing electropositive substituents, i.e., metal substituents with relatively low Pauling electronegativity (zinc: 1.65), onto the Ge center (here, the zinc-amido fragment). In view of the unprecedented reactivity of [:Ge(TBoN)(ZnL*)], a computational examination of its frontier orbital energies was undertaken. The energy separation between the HOMO, which has significant Ge lone pair character, and the LUMO, which has predominantly Ge p-orbital character, is narrow (40.8 kcal/mol; cf. ∆S-T = 24.8 kcal/mol) and comparable to the HOMO-LUMO gaps calculated for other complexes known in the literature. The calculated very narrow HOMO-LUMO gap of the [:Ge(TBoN)(ZnL*)] complex is consistent with its high reactivity and is remarkable considering that the complex incorporates a π-basic amide ligand, a class known to raise the LUMO of germylenes considerably.

Keywords: activation of dihydrogen gas, narrow HOMO-LUMO gap, first germanium(II)-zinc bond, inorganic Grignard reagent

Procedia PDF Downloads 176
199 Physiological Effects during Aerobatic Flights on Science Astronaut Candidates

Authors: Pedro Llanos, Diego García

Abstract:

Spaceflight is considered the last frontier in terms of science, technology, and engineering. It is also the next frontier in terms of human physiology and performance. Having evolved for more than 200,000 years under Earth's gravity and atmospheric conditions, humans are not physiologically adapted to the environmental stresses of spaceflight. Hypoxia, accelerations, and radiation are among such stressors; our research involves suborbital flights aiming to develop effective countermeasures to ensure a sustainable human presence in space. The physiologic baseline of spaceflight participants is subject to great variability driven by age, gender, fitness, and metabolic reserve. The objective of the present study is to characterize different physiologic variables in a population of STEM practitioners during an aerobatic flight. Cardiovascular and pulmonary responses were determined in Science Astronaut Candidates (SACs) during unusual-attitude aerobatic flight indoctrination. Physiologic recordings from 20 subjects participating in high-G flight training were analyzed. The recordings were registered by a wearable sensor vest that monitored electrocardiographic tracings (ECGs) and signs of dysrhythmias or other electrical disturbances throughout the flight. The same cardiovascular parameters were also collected approximately 10 min pre-flight, during each high-G/unusual-attitude maneuver, and 10 min after the flights. The ratio of the cardiovascular responses (pre-flight/in-flight/post-flight) was calculated for comparison of inter-individual differences. The resulting tracings of the subjects' cardiovascular responses were compared against the G-loads (Gs) during the aerobatic flights to analyze cardiovascular variability and fluid/pressure shifts due to the high Gs.
In-flight ECG revealed cardiac variability patterns associated with rapid G onset, in terms of reduced heart rate (HR) and some scattered dysrhythmic patterns (15% premature ventricular contraction-type), which were considered either triggered physiological responses to high-G/unusual-attitude training or instrument artifacts. Variation events were observed in subjects during the +Gz and -Gz maneuvers and may be due to sudden shifts in preload and afterload. Our data reveal that aerobatic flight influenced the subjects' breathing rate, due in part to the varying levels of energy expenditure from increased muscle work during the aerobatic maneuvers. Noteworthy was the high heterogeneity of the physiological responses among a relatively small group of SACs exposed to similar aerobatic flights with similar G exposures. The cardiovascular responses clearly demonstrated that SACs were subjected to significant flight stress. Routine ECG monitoring during high-G/unusual-attitude flight training is recommended to capture pathology underlying dysrhythmias that could endanger suborbital flight safety. More research is being conducted to facilitate the development of robust medical screening, medical risk assessment approaches, and suborbital flight training in the context of the evolving commercial human suborbital spaceflight industry. A more mature and integrative medical assessment method is required to understand the physiological state and response variability among highly diverse populations of prospective suborbital flight participants.
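The pre-/in-/post-flight ratio described above can be computed as in the sketch below. The beats-per-minute values are invented for illustration and are not study data; the lower in-flight values merely echo the reduced HR reported with rapid G onset.

```python
def hr_ratios(hr_pre, hr_inflight, hr_post):
    """In-flight and post-flight mean heart rate relative to the
    pre-flight baseline, mirroring the per-subject ratios described above."""
    base = sum(hr_pre) / len(hr_pre)
    r_in = (sum(hr_inflight) / len(hr_inflight)) / base
    r_post = (sum(hr_post) / len(hr_post)) / base
    return r_in, r_post

# Invented beats-per-minute samples for one subject (not study data).
pre, infl, post = [72, 74, 71, 73], [65, 60, 70, 62], [80, 78, 76, 75]
r_in, r_post = hr_ratios(pre, infl, post)
```

Normalizing each phase to the subject's own baseline is what makes the inter-individual comparison meaningful despite the variable physiologic baselines noted above.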

Keywords: g force, aerobatic maneuvers, suborbital flight, hypoxia, commercial astronauts

Procedia PDF Downloads 118
198 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes

Authors: Stefan Papastefanou

Abstract:

Artificial intelligence (AI) is an interdisciplinary field of computer science that aims to create intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments in which the behavior of the AI system was determined in advance by formal rules. Knowledge was represented as a set of rules, a structure of if-else statements that could be traversed to find the solution to a particular problem or question; such rule-based systems, however, have typically been unable to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming ever more significant. It must therefore be examined how the products of machine learning models can and should be protected by IP law, and for the purposes of this paper by patent law specifically, since it is the IP regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recursive neural network methods and deep learning, but this approach can be described more intuitively by analogy with the evolution of natural organisms, and with increasing computational power the genetic breeding method, as a subset of evolutionary algorithms, is expected to regain popularity. The research method focuses on the patentability of AI inventions and machine learning according to the world's most significant patent law regimes (China, Singapore, the European Union, and the United States). Questions of the technical nature of the problem to be solved, the inventive step as such, and the state of the art and the associated obviousness of the solution arise in current patenting processes.
Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. Under the current legal situation in most patent law regimes, the inventor of a patent application must be a natural person or a group of persons. To be considered an 'inventor', a person must actually have developed part of the inventive concept; the mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing to part of the inventive concept. However, when machine learning or an AI algorithm has contributed to part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean approaches use identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
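For readers unfamiliar with genetic breeding models, the toy genetic algorithm below shows the selection/crossover/mutation loop the abstract refers to. The OneMax objective and all parameters are illustrative assumptions; in practice the fitness function would encode the engineering objective whose optimized result raises the inventorship questions discussed above.

```python
import random

random.seed(42)

def fitness(bits):
    # Toy objective ("OneMax"): count of ones. A stand-in for whatever
    # engineering objective a genetic breeding model might optimise.
    return sum(bits)

def evolve(pop_size=30, n_bits=20, generations=60, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The legally interesting point is visible even in this toy: no human specifies `best` directly; it emerges from the stochastic breeding loop, which is exactly the contribution that current inventorship doctrine struggles to attribute.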

Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability

Procedia PDF Downloads 102
197 Balancing Biodiversity and Agriculture: A Broad-Scale Analysis of the Land Sparing/Land Sharing Trade-Off for South African Birds

Authors: Chevonne Reynolds, Res Altwegg, Andrew Balmford, Claire N. Spottiswoode

Abstract:

Modern agriculture has revolutionised the planet's capacity to support humans, yet it has had a greater negative impact on biodiversity than any other human activity. Balancing the demand for food with the conservation of biodiversity is one of the most pressing issues of our time. Biodiversity-friendly farming ('land sharing') or, alternatively, the separation of conservation and production activities ('land sparing') are proposed as two strategies for mediating the trade-off between agriculture and biodiversity. However, there is much debate regarding the efficacy of each strategy, as the trade-off has typically been addressed by short-term studies at fine spatial scales. Such studies ignore processes relevant to biodiversity at larger scales, such as meta-population dynamics and landscape connectivity. To better understand species' responses to agricultural land-use and provide evidence to underpin the planning of better production landscapes, we therefore need to assess the merits of each strategy at larger scales. In South Africa, a remarkable citizen science project, the South African Bird Atlas Project 2 (SABAP2), collates an extensive dataset describing the occurrence of birds at a 5-min by 5-min grid cell resolution. We use these data, along with fine-resolution data on agricultural land-use, to determine which strategy optimises the agriculture-biodiversity trade-off in a southern African context, at a spatial scale never considered before. To empirically test the trade-off, we model bird species population density, derived for each 5-min grid cell by Royle-Nichols single-species occupancy modelling, against both the amount and the configuration of different types of agricultural production in the same grid cell. By using both production amount and configuration, we can show not only how species population densities react to changes in yield, but also describe the production landscape patterns most conducive to conservation.
Furthermore, the extent of both the SABAP2 and land-cover datasets allows us to test the trade-off across multiple regions, to determine whether bird populations respond in a consistent way and whether results can be extrapolated to other landscapes. We tested the land sparing/sharing trade-off for 281 bird species across three different biomes in South Africa. Overall, a higher proportion of species are classified as losers and would benefit from land sparing. However, this proportion of loser-sparers is not consistent: it varies across biomes and across the different types of agricultural production, most likely because of differences in the intensity of agricultural land-use and the interactions between the differing types of natural vegetation and agriculture. Interestingly, we observe a higher number of species benefiting from agriculture than anticipated, suggesting that agriculture is a legitimate resource for certain bird species. Our results support those seen at smaller scales and in vastly different agricultural systems: land sparing benefits the most species. However, our analysis suggests that land sparing needs to be implemented at spatial scales much larger than previously considered. Species persistence in agricultural landscapes will require the conservation of large tracts of land, an important consideration in developing countries undergoing rapid agricultural development.
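The winner/loser and sparing/sharing logic can be illustrated with the standard density-yield framework (Green et al.); the quadratic fit below is a simplified stand-in for the study's occupancy-derived density models, and the curves are invented examples, not SABAP2 results.

```python
import numpy as np

def classify(yields, densities):
    """Winner/loser and sparing/sharing call from a density-yield curve.
    A quadratic fit stands in for the study's occupancy-derived densities."""
    c2, c1, c0 = np.polyfit(yields, densities, 2)
    d_zero, d_top = np.polyval([c2, c1, c0], [0.0, max(yields)])
    kind = "winner" if d_top > d_zero else "loser"
    # Convex decline (c2 > 0) favours sparing; concave favours sharing.
    strategy = "sparing" if c2 > 0 else "sharing"
    return kind, strategy

y = np.linspace(0.0, 1.0, 6)                   # yield, 0 = natural vegetation
loser_sparer = 1.0 - 1.8 * y + 0.9 * y ** 2    # density collapses at low yield
winner_sharer = 0.2 + 1.0 * y - 0.4 * y ** 2   # benefits from farmland
```

A species whose density collapses as soon as any yield appears (convex curve) is best served by concentrating production and sparing natural habitat, whereas one that tolerates or exploits low-yield farmland (concave curve) fares better under sharing; the abstract's "loser-sparer" majority corresponds to the first case.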

Keywords: agriculture, birds, land sharing, land sparing

Procedia PDF Downloads 202
196 Cultivating Concentration and Flow: Evaluation of a Strategy for Mitigating Digital Distractions in University Education

Authors: Vera G. Dianova, Lori P. Montross, Charles M. Burke

Abstract:

In the digital age, widespread and frequently excessive mobile phone use amongst university students is recognized as a significant distractor that interferes with their ability to enter a deep state of concentration during studies and diminishes their prospects of experiencing the enjoyable and instrumental state of flow, as defined and described by psychologist M. Csikszentmihalyi. This study targeted 50 university students with the aim of teaching them to cultivate the ability to engage in deep work and attain the state of flow, fostering more effective and enjoyable learning experiences. Prior to the start of the intervention, all participating students completed a comprehensive survey based on a variety of validated scales assessing their inclination toward lifelong learning, frequency of flow experiences during study, frustration tolerance, and sense of agency, as well as their love of learning and daily time devoted to non-academic mobile phone activities. Several days after this initial assessment, students received a 90-minute lecture on the principles of flow and deep work, accompanied by a critical discourse on the detrimental effects of excessive mobile phone usage, and were encouraged to practice deep work and strive for frequent flow states throughout the semester. Subsequently, students submitted weekly surveys, including the 10-item CORE Dispositional Flow Scale and a 3-item agency scale, and disclosed their average daily hours spent on non-academic mobile phone usage. As a final step, at the end of the semester students engaged in reflective report writing, sharing their experiences and evaluating the intervention's effectiveness. They considered alterations in their love of learning, reflected on the implications of their mobile phone usage, contemplated improvements in their tolerance for boredom and perseverance in complex tasks, and pondered the concept of lifelong learning.
Additionally, students assessed whether they actively took steps towards managing their recreational phone usage and improving their commitment to becoming lifelong learners. Employing a mixed-methods approach, our study offers insights into the dynamics of concentration, flow, mobile phone usage, and attitudes towards learning among undergraduate and graduate university students. The findings aim to prompt profound reflection, on the part of both students and instructors, on the rapidly evolving digital-age higher education environment. In an era defined by digital and AI advancements, the ability to concentrate, to experience the state of flow, and to love learning has never been more crucial. This study underscores the significance of addressing mobile phone distractions and providing strategies for cultivating deep concentration. The insights gained can guide educators in shaping effective learning strategies for the digital age. By nurturing a love of learning and encouraging lifelong learning, educational institutions can better prepare students for a rapidly changing labor market, where adaptability and continuous learning are paramount for success in a dynamic career landscape.

Keywords: deep work, flow, higher education, lifelong learning, love of learning

Procedia PDF Downloads 59
195 Unmasking Virtual Empathy: A Philosophical Examination of AI-Mediated Emotional Practices in Healthcare

Authors: Eliana Bergamin

Abstract:

This philosophical inquiry, influenced by the seminal works of Annemarie Mol and Jeannette Pols, critically examines the transformative impact of artificial intelligence (AI) on emotional caregiving practices within virtual healthcare. Rooted in the traditions of philosophy of care, philosophy of emotions, and applied philosophy, this study seeks to unravel nuanced shifts in the moral and emotional fabric of healthcare mediated by AI-powered technologies. Departing from traditional empirical studies, the approach embraces the foundational principles of care ethics and phenomenology, offering a focused exploration of the ethical and existential dimensions of AI-mediated emotional caregiving. At its core, this research addresses the introduction of AI-powered technologies mediating emotional and care practices in the healthcare sector. By drawing on Mol and Pols' insights, the study offers a focused exploration of the ethical and existential dimensions of AI-mediated emotional caregiving. Anchored in ethnographic research within a pioneering private healthcare company in the Netherlands, this critical philosophical inquiry provides a unique lens into the dynamics of AI-mediated emotional practices. The study employs in-depth, semi-structured interviews with virtual caregivers and care receivers alongside ongoing ethnographic observations spanning approximately two and a half months. Delving into the lived experiences of those at the forefront of this technological evolution, the research aims to unravel subtle shifts in the emotional and moral landscape of healthcare, critically examining the implications of AI in reshaping the philosophy of care and human connection in virtual healthcare. Inspired by Mol and Pols' relational approach, the study prioritizes the lived experiences of individuals within the virtual healthcare landscape, offering a deeper understanding of the intertwining of technology, emotions, and the philosophy of care. 
In the realm of philosophy of care, the research elucidates how virtual tools, particularly those driven by AI, mediate emotions such as empathy, sympathy, and compassion—the bedrock of caregiving. Focusing on emotional nuances, the study contributes to the broader discourse on the ethics of care in the context of technological mediation. In the philosophy of emotions, the investigation examines how the introduction of AI alters the phenomenology of emotional experiences in caregiving. Exploring the interplay between human emotions and machine-mediated interactions, the nuanced analysis discerns implications for both caregivers and caretakers, contributing to the evolving understanding of emotional practices in a technologically mediated healthcare environment. Within applied philosophy, the study transcends empirical observations, positioning itself as a reflective exploration of the moral implications of AI in healthcare. The findings are intended to inform ethical considerations and policy formulations, bridging the gap between technological advancements and the enduring values of caregiving. In conclusion, this focused philosophical inquiry aims to provide a foundational understanding of the evolving landscape of virtual healthcare, drawing on the works of Mol and Pols to illuminate the essence of human connection, care, and empathy amid technological advancements.

Keywords: applied philosophy, artificial intelligence, healthcare, philosophy of care, philosophy of emotions

Procedia PDF Downloads 53
194 Digital Transformation in Fashion System Design: Tools and Opportunities

Authors: Margherita Tufarelli, Leonardo Giliberti, Elena Pucci

Abstract:

The fashion industry's interest in virtuality is linked, on the one hand, to the emotional and immersive possibilities of digital resources and the languages that result from them and, on the other, to the greater efficiency that can be achieved throughout the value chain. The interaction between digital innovation and deep-rooted manufacturing traditions today translates into a paradigm shift for the entire fashion industry in which, for example, the traditional values of industrial secrecy and know-how give way to open and participatory experimentation, and to the complete emancipation of virtual reality from actual 'reality'. This contribution investigates the theme of digitisation in the Italian fashion industry, analysing its opportunities and the criticalities that have hindered its diffusion. There are two reasons why the most common approach in the fashion sector is still analogue: (i) the fashion product lives in close contact with the human body, so the sensory perception of materials plays a central role in both the use and the design of the product, but current technology cannot reproduce the sense of touch; (ii) volumes are obtained by stitching flat surfaces that, once assembled, can assume almost infinite configurations given the flexibility of the material. Managing the fit and styling of virtual garments involves a wide range of factors, including mechanical simulation, collision detection, and user interface techniques for garment creation. After briefly reviewing some of the salient historical milestones in the digital simulation of deformable materials and in user interfaces for constructing the clothing system, the paper describes the operation and possibilities offered today by the latest generation of specialised software.
These include parametric avatars and a digital sartorial approach; drawing tools optimised for pattern making; materials addressed both in terms of simulated physical behaviour and of aesthetic performance; tools for checking wearability; renderings; and tools and procedures useful to companies both for dialogue with prototyping software and machinery and for managing the archive and the variants to be produced. The article demonstrates how developments in technology and digital procedures now make it possible to intervene at different stages of design in the fashion industry. The result is an integrated and additive process in which the constructed 3D models are usable both in the prototyping and communication of physical products and in exclusively digital uses within the new generation of virtual spaces. Mastering such tools requires the acquisition of specific digital skills and, at the same time, traditional skills for the design of the clothing system, but the benefits are manifold and applicable to different business dimensions. We are only at the beginning of the global digital transformation: the emergence of new professional figures and design dynamics leaves room for imagination, but in addition to applying digital tools to traditional procedures, traditional fashion know-how needs to be transferred into emerging digital practices to ensure the continuity of the technical-cultural heritage beyond the transformation.
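The mass-spring modelling that underlies garment physics engines of this kind can be illustrated with a toy example. The sketch below is our assumption, not the software discussed in the paper: real garment simulators use 2D meshes with stretch, shear, and bending springs plus collision handling, whereas this minimal version integrates a pinned 1D chain of point masses under gravity with semi-implicit Euler steps (all names and constants are invented).

```python
def simulate_chain(n=5, rest=0.1, k=500.0, mass=0.01, g=9.81, dt=0.001, steps=5000):
    """Pinned chain of n point masses joined by springs; returns final positions."""
    pos = [(i * rest, 0.0) for i in range(n)]  # start horizontal; node 0 pinned
    vel = [(0.0, 0.0)] * n
    damping = 0.99
    for _ in range(steps):
        forces = [[0.0, -mass * g] for _ in range(n)]  # gravity on every node
        for i in range(n - 1):  # spring between node i and i+1 (Hooke's law)
            dx = pos[i + 1][0] - pos[i][0]
            dy = pos[i + 1][1] - pos[i][1]
            length = (dx * dx + dy * dy) ** 0.5 or 1e-12
            f = k * (length - rest)
            fx, fy = f * dx / length, f * dy / length
            forces[i][0] += fx; forces[i][1] += fy
            forces[i + 1][0] -= fx; forces[i + 1][1] -= fy
        new_pos, new_vel = [pos[0]], [(0.0, 0.0)]  # node 0 stays pinned
        for i in range(1, n):  # semi-implicit Euler: update velocity, then position
            vx = (vel[i][0] + dt * forces[i][0] / mass) * damping
            vy = (vel[i][1] + dt * forces[i][1] / mass) * damping
            new_vel.append((vx, vy))
            new_pos.append((pos[i][0] + dt * vx, pos[i][1] + dt * vy))
        pos, vel = new_pos, new_vel
    return pos
```

With damping, the chain settles from horizontal into a vertical hanging configuration below the pinned node, which is the qualitative behaviour a drape simulation must reproduce.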

Keywords: digital fashion, digital technology and couture, digital fashion communication, 3D garment simulation

Procedia PDF Downloads 58
193 Application of the Material Point Method as a New Fast Simulation Technique for Textile Composites Forming and Material Handling

Authors: Amir Nazemi, Milad Ramezankhani, Marian Kӧrber, Abbas S. Milani

Abstract:

The excellent strength-to-weight ratio of woven fabric composites, along with their high formability, is one of the primary design parameters driving their increased use in modern manufacturing processes, including those in aerospace and automotive. However, for emerging automated preform processes under the smart manufacturing paradigm, the complex geometries of finished components continue to pose several challenges to designers coping with manufacturing defects on site. Wrinkling, for example, is a common defect occurring during the forming process and handling of semi-finished textile composites. One of the main reasons for this defect is the weak bending stiffness of fibers in the unconsolidated state, causing excessive relative motion between them. Further challenges are represented by the automated handling of large-area fiber blanks with specialized gripper systems. For fabric composites forming simulations, the finite element (FE) method is a longstanding tool used for the prediction and mitigation of manufacturing defects. Such simulations are predominantly meant not only to predict the onset, growth, and shape of wrinkles but also to determine the best processing condition that can yield optimized positioning of the fibers upon forming (or robot handling, in the case of automated processes). However, the need for small time steps in explicit FE codes, numerical instabilities, and large computational times are among the notable drawbacks of current FE tools, hindering their extensive use as fast yet efficient digital twins in industry. This paper presents a novel woven fabric simulation technique based on the material point method (MPM), which enables the use of much larger time steps with fewer numerical instabilities, and hence the ability to run significantly faster and more efficient simulations of fabric material handling and forming processes.
Therefore, this method has the ability to enhance the development of automated fiber handling and preform processes by calculating the physical interactions between the MPM fiber models and rigid tool components. This enables designers to virtually develop, test, and optimize their processes based on either algorithmic or machine learning applications. As a preliminary case study, the forming of a hemispherical plain weave is shown, and the results are compared to the FE simulations, as well as experiments.
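The particle-to-grid / grid-to-particle cycle that distinguishes MPM from FE can be sketched in a few lines. The 1D setup below is an illustrative assumption, not the fabric solver of the paper: linear shape functions, a single body force, no stress update, and invented function names and parameters.

```python
def mpm_step(particles, n_nodes=8, dx=1.0, dt=0.01, gravity=-9.81):
    """One MPM cycle in 1D. particles: list of (position, mass, velocity)."""
    grid_m = [0.0] * n_nodes  # nodal mass
    grid_p = [0.0] * n_nodes  # nodal momentum
    # 1. particle-to-grid: scatter mass and momentum with linear weights
    for x, m, v in particles:
        i = int(x / dx)
        w = x / dx - i  # fraction of the weight going to node i+1
        for node, weight in ((i, 1 - w), (i + 1, w)):
            grid_m[node] += weight * m
            grid_p[node] += weight * m * v
    # 2. grid update: recover nodal velocity, apply the body force
    grid_v = [0.0] * n_nodes
    for n in range(n_nodes):
        if grid_m[n] > 0:
            grid_v[n] = grid_p[n] / grid_m[n] + dt * gravity
    # 3. grid-to-particle: gather velocity back and advect the particles
    out = []
    for x, m, v in particles:
        i = int(x / dx)
        w = x / dx - i
        v_new = (1 - w) * grid_v[i] + w * grid_v[i + 1]
        out.append((x + dt * v_new, m, v_new))
    return out
```

Because the background grid carries the momentum update while particles carry the state, the time-step restriction is tied to the grid rather than to element distortion, which is the source of the larger stable time steps claimed above.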

Keywords: material point method, woven fabric composites, forming, material handling

Procedia PDF Downloads 173
192 Association between Polygenic Risk of Alzheimer's Dementia, Brain MRI and Cognition in UK Biobank

Authors: Rachana Tank, Donald M. Lyall, Kristin Flegal, Joey Ward, Jonathan Cavanagh

Abstract:

Alzheimer’s Research UK estimates that by 2050, 2 million individuals will be living with late-onset Alzheimer’s disease (LOAD). However, individuals experience considerable cognitive deficits and brain pathology over decades before reaching clinically diagnosable LOAD, and studies have utilised candidate gene studies, genome-wide association studies (GWAS) and polygenic risk (PGR) scores to identify high-risk individuals and potential pathways. This investigation aims to determine whether high genetic risk of LOAD is associated with worse brain MRI and cognitive performance in healthy older adults within the UK Biobank cohort. Previous studies investigating associations of PGR for LOAD with measures of MRI or cognitive functioning have focused on specific aspects of hippocampal structure, in relatively small sample sizes and with poor ‘controlling’ for confounders such as smoking. Both the sample size of this study and the discovery GWAS sample are larger than in previous studies, to our knowledge. Genetic interactions between the loci showing the largest effects in GWAS have not been extensively studied, and it is known that APOE e4 poses the largest genetic risk of LOAD, with potential gene-gene and gene-environment interactions of e4; for this reason, we also analyse genetic interactions of PGR with the APOE e4 genotype. We hypothesise that high genetic loading, based on a 21-SNP polygenic risk score for LOAD, is associated with worse brain MRI and cognitive outcomes in healthy individuals within the UK Biobank cohort. Summary statistics from the Kunkle et al. GWAS meta-analyses (case: n=30,344, control: n=52,427) will be used to create polygenic risk scores based on 21 SNPs, and analyses will be carried out in N=37,000 participants in the UK Biobank. This will be the largest study to date investigating PGR of LOAD in relation to MRI. MRI outcome measures include white matter (WM) tracts and structural volumes.
Cognitive function measures include reaction time, pairs matching, trail making, digit symbol substitution and prospective memory. Interaction of the APOE e4 alleles and PGR will be analysed by including APOE status as an interaction term, coded as 0, 1 or 2 e4 alleles. Models will be partially adjusted for age, BMI, sex, genotyping chip, smoking, depression and social deprivation. Preliminary results suggest the PGR score for LOAD is associated with decreased hippocampal volumes, including hippocampal body (standardised beta = -0.04, P = 0.022) and tail (standardised beta = -0.037, P = 0.030), but not with hippocampal head. There were also associations of genetic risk with decreased cognitive performance, including fluid intelligence (standardised beta = -0.08, P<0.01) and reaction time (standardised beta = 2.04, P<0.01). No genetic interactions were found between APOE e4 dose and PGR score for MRI or cognitive measures. The generalisability of these results is limited by selection bias within the UK Biobank, as participants are less likely to be obese, to smoke or to be socioeconomically deprived, and have fewer self-reported health conditions than the general population. The lack of a unified approach or standardised method for calculating genetic risk scores may also be a limitation of these analyses. Further discussion and results are pending.
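The polygenic risk score itself is conceptually simple: a weighted sum, over the selected SNPs, of each participant's risk-allele dosage, with weights taken from the discovery GWAS effect sizes. A minimal sketch follows; the SNP identifiers and betas are invented for illustration, not the 21 Kunkle et al. SNPs.

```python
def polygenic_score(dosages, weights):
    """Score = sum over SNPs of beta_i * risk-allele dosage_i (0, 1 or 2).
    Missing genotypes are treated as dosage 0 in this toy version."""
    return sum(weights[snp] * dosages.get(snp, 0) for snp in weights)

weights = {"rs_a": 0.12, "rs_b": -0.05, "rs_c": 0.30}  # hypothetical GWAS betas
person = {"rs_a": 2, "rs_b": 1, "rs_c": 0}             # one participant's dosages
score = polygenic_score(person, weights)               # 0.12*2 - 0.05*1 + 0 = 0.19
```

In practice scores are usually standardised across the cohort before being entered into regression models, which is why the abstract reports standardised betas.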

Keywords: Alzheimer's dementia, cognition, polygenic risk, MRI

Procedia PDF Downloads 106
191 Taking the Good with the Bad: Psychological Well-Being and Social Integration in Russian-Speaking Immigrants in Montreal

Authors: Momoka Sunohara, Ashley J. Lemieux, Esther Yakobov, Andrew G. Ryder, Tomas Jurcik

Abstract:

Immigration brings changes in many aspects of an individual's life, from social support dynamics to housing and language, as well as difficulties with regard to discrimination, trauma, and loss. Past research has mostly emphasized individual differences in mental health and has neglected the impact of social-ecological context, such as acculturation and ethnic density. Purpose: The present study aimed to assess the relationship between variables associated with social integration, such as perceived ethnic density and ways of coping, and psychological adjustment in a rapidly growing non-visible minority group of immigrants in Canada. Data: A small subset of archival data from our previously published study was reanalyzed with additional variables. Data included information from 269 Russian-speaking immigrants in Montreal, Canada. Method: Canonical correlation analysis (CCA) was used to investigate the relationship between two sets of variables. SAS PROC CANCORR was used to conduct CCA on a set of social integration variables, including ethnic density, discrimination, social support, family functioning, and acculturation, and a set of psychological well-being variables, including distress, depression, self-esteem, and life satisfaction. In addition, canonical redundancy analysis was performed to calculate the proportion of variance in the original variables explained by their own canonical variates. Results: Significance tests using Rao’s F statistics indicated that the first two canonical correlations (i.e., r1 = 0.64, r2 = 0.40) were statistically significant (p-value < 0.0001). Additionally, canonical redundancy analysis showed that the first two well-being canonical variates explained 62.9% and 12.8%, respectively, of the variance of the standardized well-being variables, whereas the first two social integration canonical variates explained 14.7% and 16.7%, respectively, of the variance of the standardized social integration variables.
These results support the retention of the first two canonical correlations. We then interpreted the derived canonical variates based on their canonical structure (i.e., correlations with the original variables). Two observations can be drawn. First, individuals who have adequate social support, and who, as a family, cope by acquiring social support, mobilizing others, and reframing, are more likely to have better self-esteem, greater life satisfaction, and fewer feelings of depression or distress. Second, individuals who feel discriminated against yet score higher on a mainstream acculturation scale, and who, as a family, cope by acquiring social support, mobilizing others, and using spirituality, while using fewer passive strategies, are more likely to have better life satisfaction but also a higher degree of depression. Implications: This model may serve to explain the complex interactions that exist between social and emotional adjustment and aid in facilitating the integration of individuals immigrating into new communities. The same group may experience greater depression but, paradoxically, improved life satisfaction associated with their coping process. Such findings need to be placed in the context of Russian cultural values. For instance, some Russian speakers may value the expression of negative emotions with significant others during the integration process; this in turn may make negative emotions more salient, but also facilitate a greater sense of family and community connection, as well as life satisfaction.
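What CCA computes can be illustrated in miniature: it seeks one weight vector per variable set such that the two composite scores correlate maximally. For two-variable sets each weight vector reduces to a single angle, so a brute-force grid search recovers the first canonical correlation. This is a toy sketch with made-up data, not the SAS PROC CANCORR procedure used in the study.

```python
import math

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def first_canonical_corr(X, Y, steps=90):
    """X, Y: lists of (v1, v2) rows for the two variable sets. Brute-force
    search over weight angles; abs() makes the sign of the weights irrelevant."""
    best = 0.0
    for i in range(steps):
        for j in range(steps):
            a, b = math.pi * i / steps, math.pi * j / steps
            u = [math.cos(a) * r[0] + math.sin(a) * r[1] for r in X]  # variate 1
            v = [math.cos(b) * r[0] + math.sin(b) * r[1] for r in Y]  # variate 2
            best = max(best, abs(pearson(u, v)))
    return best
```

Real CCA solves this maximisation in closed form via an eigendecomposition and also extracts the subsequent, mutually orthogonal canonical pairs (here r1 and r2).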

Keywords: acculturation, ethnic density, mental health, Russian-speaking

Procedia PDF Downloads 474
190 3D Seismic Acquisition Challenges in the NW Ghadames Basin Libya, an Integrated Geophysical Sedimentological and Subsurface Studies Approach as a Solution

Authors: S. Sharma, Gaballa Aqeelah, Tawfig Alghbaili, Ali Elmessmari

Abstract:

Abrupt discontinuities appeared in the brute stack at the northernmost locations during the acquisition of 2D (2007) and 3D (2021) seismic data in the northwest region of the Ghadames Basin, Libya. In both campaigns, complete loss of fluid circulation was seen in these regions during up-hole drilling. Geophysics, sedimentology, and shallow subsurface geology were integrated to investigate what was causing the seismic signal to disappear at shallow depths. The Upper Cretaceous Nalut Formation is the near-surface or surface formation in the studied area. It is distinguished by abnormally high resistivity in all the neighboring wells. In all the nearby wells from the present study, and in a previous outcrop study, the Nalut Formation suggests a lithology of dolomite and chert/flint in nodular or layered forms. There are also reports of karstic caverns, vugs, and thick cracks, which all work together to produce the high resistivity. Four up-hole samples analyzed for microfacies revealed a near-coastal to tidal environment. Monotonous, highly porous algal (Chara)-infested deposits up to 30 feet thick are seen in the sediments of two up-holes; these deposits are interpreted as scattered, continental algal travertine mounds. Chert/flint, dolomite, and calcite in varying amounts are confirmed by XRD analysis. The high resistivity of the Nalut Formation, which is thought to be connected to the sea-level drop that created the paleokarst layer, can be tracked regionally. It is abruptly overlain by a blanket marine transgressive deposit caused by rapid sea-level rise, a regional, relatively highly radioactive layer of argillaceous limestone. The studied area's close proximity to the mountainous, E-W trending ridges of northern Libya facilitated recent freshwater circulation, which later enhanced cavern development and mineralization in the paleokarst layer.
Seismic signal loss at shallow depth is caused by the extremely heterogeneous mineralogy of the pore-filling material, or its absence. The scattering effect of shallow karstic layers on seismic signals has been well documented. Higher velocity inflection points at shallower depths in the northern part and at deeper intervals in the southern part, in both cases at the Nalut level, demonstrate the layer's influence on the seismic signal. During the Permian-Carboniferous, the Ghadames Basin underwent uplift and extensive erosion, which resulted in this karstic layer of the Nalut Formation being brought to a shallow depth in the northern part of the studied area, weakening the acoustic signal, whereas in the southern part of the 3D acquisition area the Nalut Formation remained at a deeper interval without affecting the seismic signal. Measures taken during seismic processing to deal with this signal loss have yielded visible improvements. This study recommends using denser spacing or dynamite to circumvent the karst layer in a comparable geographic area in order to prevent signal loss at shallow depths.

Keywords: well logging, seismic data acquisition, seismic data processing, up-holes

Procedia PDF Downloads 79
189 Novel Aspects of Merger Control Pertaining to Nascent Acquisition: An Analytical Legal Research

Authors: Bhargavi G. Iyer, Ojaswi Bhagat

Abstract:

It is often noted that the value of a novel idea lies in its successful implementation. However, successful implementation requires the nurturing and encouragement of innovation. Nascent competitors are a true representation of innovation in any given industry. A nascent competitor is an entity whose prospective innovation poses a future threat to an incumbent dominant competitor. While a nascent competitor benefits in several ways, it is also significantly exposed and is at greater risk of facing the brunt of exclusionary practices and abusive conduct by dominant incumbent competitors in the industry. This research paper aims to explore the risks and threats faced by nascent competitors and analyse the benefits they accrue, as well as the advantages they proffer to the economy, through an analytical, critical study. In such competitive market environments, a rise in acquisitions of nascent competitors by incumbent dominants is observed. Therefore, this paper will examine the dynamics of nascent acquisition. Further, this paper hopes to specifically delve into the role of antitrust bodies in regulating nascent acquisition. This paper also aspires to address the question of how to distinguish harmful from harmless acquisitions in order to facilitate ideal enforcement practice. This paper proposes mechanisms of scrutiny in order to ensure healthy market practices and efficient merger control in the context of nascent acquisitions. Taking into account the scope and nature of the topic, as well as the resources available and accessible, a combination of the methods of doctrinal research and analytical research was employed, utilising secondary sources in order to assess and analyse the subject of research. While legally evaluating the Killer Acquisition theory and the Nascent Potential Acquisition theory, this paper seeks to critically survey the precedents and instances of nascent acquisitions.
In addition to affording a compendious account of the legislative framework and regulatory mechanisms in the United States, the United Kingdom, and the European Union, it hopes to suggest an internationally practicable legal foundation for domestic legislation and enforcement to adopt. This paper hopes to appreciate the complexities and uncertainties with respect to nascent acquisitions and attempts to suggest viable and plausible policy measures in antitrust law. It additionally attempts to examine the effects of such nascent acquisitions upon the consumer and the market economy. This paper weighs the argument for shifting the evidentiary burden onto the merging parties in order to improve merger control and regulation, and expounds the strengths and weaknesses of this approach. It is posited that an effective combination of factual, legal, and economic analysis of both the acquired and acquiring companies has the potential to improve ex post and ex ante merger review outcomes involving nascent companies, thus preventing anti-competitive practices. This paper concludes with an analysis of the possibility and feasibility of industry-specific identification of anti-competitive nascent acquisitions and the implementation of measures accordingly.

Keywords: acquisition, antitrust law, exclusionary practices, merger control, nascent competitor

Procedia PDF Downloads 152
188 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of the implementation of high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to study the economy as a dynamic system of interacting heterogeneous agents, and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, exogenous shocks, etc., on the economy of the country or the region, it is pertinent to study how the disruptions cascade through every single economic entity affecting its decisions and interactions, and eventually affect the economic macro parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using message passing interface (MPI). A balanced distribution of computational load among MPI-processes (i.e. CPU cores) of computer clusters while taking all the interactions among agents into account is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g. credit networks, etc.) whereas others are dense with random links (e.g. consumption markets, etc.). The agents are partitioned into mutually-exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI-process, are adopted. 
Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI standard. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e. about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro zone (i.e. 322 million agents).
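The load-balancing step, partitioning agents into mutually exclusive subsets while keeping per-process work roughly equal, can be sketched with a greedy longest-processing-time heuristic: assign the largest firms (with their employees) first, always to the currently least-loaded process. This is our illustrative assumption of one workable scheme, not the authors' actual partitioner.

```python
import heapq

def partition_firms(firm_sizes, n_procs):
    """Assign each firm (and hence its employees) to one MPI process.
    Returns a list: assignment[f] = process owning firm f."""
    heap = [(0, p) for p in range(n_procs)]  # (current load, process id)
    heapq.heapify(heap)
    assignment = [None] * len(firm_sizes)
    # biggest firms first: classic LPT bound on load imbalance
    for firm in sorted(range(len(firm_sizes)), key=lambda f: -firm_sizes[f]):
        load, proc = heapq.heappop(heap)     # least-loaded process
        assignment[firm] = proc
        heapq.heappush(heap, (load + firm_sizes[firm], proc))
    return assignment
```

Partitioning along the employer-employee graph keeps the most frequent interactions process-local; the remaining, denser interaction graphs (e.g. consumption markets) then only need the cross-process links communicated.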

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 119
187 Analysis of Thermal Comfort in Educational Buildings Using Computer Simulation: A Case Study in Federal University of Parana, Brazil

Authors: Ana Julia C. Kfouri

Abstract:

A prerequisite of any building design is to provide safety to the users, taking the climate and its physical and physical-geometrical variables into account. It is also important to highlight the relevance of the right material elements, which mediate between the person and the environmental agents and must provide improved thermal comfort conditions with low environmental impact. Furthermore, technology is constantly advancing, as are computational simulations for projects, and they should be used to develop sustainable buildings and to provide a higher quality of life for their users. In relation to comfort, the more satisfied the building users are, the better their intellectual performance will be. Based on that, the study of thermal comfort in educational buildings is of particular relevance, since the thermal characteristics in these environments are of vital importance to all users. Moreover, educational buildings are large constructions, and when they are poorly planned and executed they have negative impacts on the surrounding environment, as well as on user satisfaction, throughout their whole life cycle. In this line of thought, to evaluate university classroom conditions, a detailed case study of the thermal comfort situation at the Federal University of Parana (UFPR) was carried out. The main goal of the study is to perform a thermal analysis in three classrooms at UFPR, in order to address the subjective and physical variables that influence thermal comfort inside the classroom. For the assessment of the subjective components, a questionnaire was applied in order to evaluate the users' perception of the local thermal conditions. Regarding the physical variables, on-site measurements were carried out, consisting of measurements of air temperature and air humidity, both inside and outside the building, as well as meteorological variables, such as wind speed and direction, solar radiation and rainfall, collected from a weather station.
Then, a computer simulation was conducted using the EnergyPlus software to reproduce the air temperature and air humidity values of the three classrooms studied. The EnergyPlus outputs were analyzed and compared with the on-site measurement results, making it possible to draw conclusions about the local thermal conditions. The methodological approach of the study allowed a distinct perspective on an educational building, leading to a better understanding of the classroom thermal performance, as well as the reasons for such behavior. Finally, the study prompts a reflection on the importance of thermal comfort for educational buildings and proposes thermal alternatives for future projects, as well as a discussion of the significant impact of using computer simulation in engineering solutions, in order to improve the thermal performance of UFPR’s buildings.
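Comparisons of simulated and measured series of this kind are typically summarised with calibration statistics such as the root-mean-square error (RMSE) and the mean bias error (MBE). A minimal sketch with invented hourly temperatures follows (the paper does not state which comparison metrics it used, so this is an assumption about common practice):

```python
import math

def rmse(sim, obs):
    """Root-mean-square error between simulated and observed series."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def mbe(sim, obs):
    """Mean bias error: positive means the simulation runs warm on average."""
    return sum(s - o for s, o in zip(sim, obs)) / len(obs)

measured  = [22.1, 23.4, 24.0, 25.2, 24.8]  # hypothetical hourly air temps, degC
simulated = [21.8, 23.9, 24.5, 25.0, 25.1]  # hypothetical EnergyPlus output, degC
```

RMSE captures the overall magnitude of the disagreement, while MBE reveals a systematic offset; both are routinely checked against tolerance bands when deciding whether a building model is calibrated.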

Keywords: computer simulation, educational buildings, EnergyPlus, humidity, temperature, thermal comfort

Procedia PDF Downloads 377
186 Fully Autonomous Vertical Farm to Increase Crop Production

Authors: Simone Cinquemani, Lorenzo Mantovani, Aleksander Dabek

Abstract:

New technologies in agriculture are opening new challenges and new opportunities. Among these, robotics, vision, and artificial intelligence are certainly the ones that will make possible a significant leap compared to traditional agricultural techniques. In particular, the indoor farming sector will be the one that benefits the most from these solutions. Vertical farming is a new field of research where mechanical engineering can bring knowledge and know-how to transform a highly labor-based business into a fully autonomous system. The aim of the research is to develop a multi-purpose, modular, and perfectly integrated platform for crop production in indoor vertical farming. Activities will be based both on hardware development, such as automatic tools to perform different activities on soil and plants, and on research to introduce an extensive use of monitoring techniques based on machine learning algorithms. This paper presents the preliminary results of a research project on a vertical farm living lab designed to (i) develop and test vertical farming cultivation practices, (ii) introduce a very high degree of mechanization and automation that makes all processes replicable, fully measurable, standardized and automated, (iii) develop a coordinated control and management environment for autonomous multiplatform or tele-operated robots, with the aim of carrying out complex tasks in the presence of environmental and cultivation constraints, and (iv) integrate AI-based algorithms as a decision support system to improve production quality. The coordinated management of multiplatform systems still presents innumerable challenges that require a strongly multidisciplinary approach right from the design, development, and implementation phases.
The methodology is based on (i) the development of models capable of describing the dynamics of the various platforms and their interactions, (ii) the integrated design of mechatronic systems able to respond to the needs of the context and to exploit the strengths highlighted by the models, and (iii) implementation and experimental tests performed to verify the real effectiveness of the systems created and to evaluate any weaknesses, so as to proceed with targeted development. To these ends, a fully automated laboratory for growing plants in vertical farming has been developed and tested. The living lab makes extensive use of sensors to determine the overall state of the structure, crops, and systems used. The possibility of having specific measurements for each element involved in the cultivation process makes it possible to evaluate the effects of each variable of interest and allows for the creation of a robust model of the system as a whole. The automation of the laboratory is completed with the use of robots to carry out all the necessary operations, from sowing to handling to harvesting. These systems work synergistically thanks to detailed models developed from the information collected, which deepen the knowledge of these types of crops and guarantee the possibility of tracing every action performed on each single plant. To this end, artificial intelligence algorithms have been developed to allow the synergistic operation of all systems.

Keywords: automation, vertical farming, robot, artificial intelligence, vision, control

Procedia PDF Downloads 24
185 Application and Aspects of Biometeorology in Inland Open Water Fisheries Management in the Context of Changing Climate: Status and Research Needs

Authors: U.K. Sarkar, G. Karnatak, P. Mishal, Lianthuamluaia, S. Kumari, S.K. Das, B.K. Das

Abstract:

Inland open water fisheries provide food, income, livelihood, and nutritional security to millions of fishers across the globe. However, open water ecosystems and fisheries are threatened by climate change and anthropogenic pressures, which have become more visible over the last six decades, making the resources vulnerable. Understanding the interaction between meteorological parameters and inland fisheries is imperative to developing mitigation and adaptation strategies. As per the IPCC 5th assessment report, the earth is warming at a faster rate in recent decades. Global mean surface temperature (GMST) for the decade 2006–2015 was 0.87°C higher than the average over the 1850–1900 period. The direct and indirect impacts of climatic parameters on the ecology of fisheries ecosystems have a great bearing on fisheries through alterations in fish physiology. The impact of meteorological factors on ecosystem health and fish food organisms brings about changes in fish diversity, assemblage, reproduction, and natural recruitment. India’s average temperature has risen by around 0.7°C during 1901–2018. Studies show that the mean air temperature in the Ganga basin has increased in the range of 0.20–0.47°C and annual rainfall has decreased in the range of 257–580 mm during the last three decades. These studies clearly indicate visible impacts of climatic and environmental factors on inland open water fisheries. Besides, a significant reduction in depth and area (37.20–57.68%), a reduction in the diversity of natural indigenous fish fauna (ranging from 22.85 to 54%) in wetlands, and a progression of trophic state from mesotrophic to eutrophic were recorded. In this communication, different applications of biometeorology in inland fisheries management, with special reference to the assessment of ecosystem and species vulnerability to climatic variability and change, are discussed.
Further, the paper discusses the impact of climate anomalies and extreme climatic events on inland fisheries and emphasizes novel modeling approaches for understanding the impact of climatic and environmental factors on reproductive phenology, for the identification of climate-sensitive/resilient fish species and the adoption of climate-smart fisheries in the future. Adaptation and mitigation strategies to enhance fish production, and the role of culture-based fisheries and enclosure culture in converting sequestered carbon into blue carbon, are also discussed. In general, the type and direction of influence of meteorological parameters on fish biology in open water fisheries ecosystems are not adequately understood. The optimum range of meteorological parameters for sustaining inland open water fisheries is yet to be established. Therefore, the application of biometeorology in inland fisheries offers ample scope for understanding these dynamics under changing climate, which would help to develop a database on this little-addressed frontier research area. This would further help to project fisheries scenarios under changing climate regimes and to develop adaptation and mitigation strategies to cope with adverse meteorological factors, to sustain fisheries, and to conserve aquatic ecosystems and biodiversity.
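Warming-rate figures of the kind quoted above are usually obtained as the least-squares slope of an annual mean-temperature series, scaled by the period length to give the total change. A minimal sketch with a synthetic series (not the Ganga basin data):

```python
def linear_trend(years, temps):
    """Ordinary least-squares slope of temps against years, in degrees per year."""
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = list(range(1990, 2020))
temps = [25.0 + 0.01 * (y - 1990) for y in years]  # synthetic +0.01 degC per year
slope = linear_trend(years, temps)
total_change = slope * (years[-1] - years[0])      # warming over the whole period
```

Real series carry interannual noise, so confidence intervals on the slope (or nonparametric trend tests) are normally reported alongside the point estimate.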

Keywords: biometeorology, inland fisheries, aquatic ecosystem, modeling, India

Procedia PDF Downloads 184
184 Hydrodynamics in Wetlands of Brazilian Savanna: Electrical Tomography and Geoprocessing

Authors: Lucas M. Furlan, Cesar A. Moreira, Jepherson F. Sales, Guilherme T. Bueno, Manuel E. Ferreira, Carla V. S. Coelho, Vania Rosolen

Abstract:

Located in the western part of the State of Minas Gerais, Brazil, the study area consists of a savanna environment, represented by a sedimentary plateau and a soil cover composed of lateritic and hydromorphic soils; in the latter, deferruginization and the concentration of high-alumina clays, exploited as refractory material, occur. In the hydromorphic topographic depressions (wetlands), the hydropedological relationships are little known, but it is observed that in times of rainfall the depressed region behaves like a natural seasonal reservoir, which suggests that the wetlands on the surface of the plateau are places of aquifer recharge. Aquifer recharge areas are extremely important for the sustainable social, economic, and environmental development of societies. The hydrodynamics of the system of ferruginous and hydromorphic lateritic soils in the savanna environment is a subject rarely explored in the literature, especially through the joint application of geoprocessing by UAV (unmanned aerial vehicle) and electrical tomography. The objective of this work is to understand the hydrogeological dynamics of a wetland (with an area of 426,064 m²) in the Brazilian savanna, as well as the subsurface architecture of hydromorphic depressions in relation to aquifer recharge. The wetland was compartmentalized into three different regions according to the geoprocessing. Hydraulic conductivity studies were performed in each of these three portions. Electrical tomography was performed along 9 lines, each 80 meters long and spaced 10 meters apart (direction N45), plus one 80-meter line perpendicular to all the others. With these data, it was possible to generate a 3D cube.
The integrated analysis showed that the area behaves like a natural seasonal reservoir in the months of greatest precipitation (December, 289 mm; January, 277.9 mm; February, 213.2 mm), because the hydraulic conductivity is very low in all areas. In the aerial images, geotag correction was performed, i.e., the image coordinates were corrected using the corrected coordinates from the Precise Point Positioning service of the Brazilian Institute of Geography and Statistics (IBGE-PPP). Later, the orthomosaic and the digital surface model (DSM) were generated, which, with specific geoprocessing, yielded the volume of water that the wetland can contain: 780,922 m³ in total, 265,205 m³ in the region with intermediate flooding, and 49,140 m³ in the central region, where the greatest accumulation of water was observed. Through the electrical tomography it was possible to identify that, down to a depth of 6 meters, the water infiltrates vertically in the central region. From a depth of 8 meters, the water encounters a more resistive layer and the infiltration begins to occur horizontally, tending to concentrate the recharge of the aquifer to the northeast and southwest of the wetland. The hydrodynamics of the area is complex, and understanding it poses many challenges. The next step is to relate the hydrodynamics to the evolution of the landscape, with the enrichment of high-alumina clays, and to propose a management model for the seasonal reservoir.
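Storage volumes derived from a DSM can in principle be reproduced by summing, over every flooded grid cell, the water depth below a chosen level times the cell area. A toy sketch (grid, level, and cell size invented, not the surveyed DSM):

```python
def reservoir_volume(dsm, water_level, cell_area=1.0):
    """dsm: 2D list of ground elevations (m); water_level: flood level (m).
    Returns stored volume in m³: sum of (level - elevation) * area over wet cells."""
    return sum((water_level - z) * cell_area
               for row in dsm for z in row if z < water_level)

dsm = [[10.0, 9.5, 10.0],
       [ 9.5, 9.0,  9.5],
       [10.0, 9.5, 10.0]]          # small depression centred on the 9.0 m cell
vol = reservoir_volume(dsm, 10.0)  # 4 cells * 0.5 m + 1 cell * 1.0 m = 3.0 m³
```

With a real DSM, the cell area comes from the orthomosaic ground sampling distance, and running the same computation at several water levels yields the stage-volume curve of the seasonal reservoir.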

Keywords: electrical tomography, hydropedology, unmanned aerial vehicle, water resources management

Procedia PDF Downloads 135