184 Transition Metal Bis(Dicarbollide) Complexes in Design of Molecular Switches
Authors: Igor B. Sivaev
Abstract:
The design of molecular machines is an extraordinarily fast-growing and very important area of research, as recognized by the award of the 2016 Nobel Prize in Chemistry to Sauvage, Stoddart and Feringa 'for the design and synthesis of molecular machines'. Based on the type of motion being performed, molecular machines can be divided into two main types: molecular motors and molecular switches. Molecular switches are molecules or supramolecular complexes possessing bistability, i.e., the ability to exist in two or more stable forms between which reversible transitions can occur under external influence (heating, light, changes in medium acidity, the action of chemicals, exposure to magnetic or electric fields). Molecular switches are the main structural element of any molecular electronics device. Therefore, the design and study of molecules and supramolecular systems capable of performing mechanical movement is an important and urgent problem of modern chemistry. There is growing interest in molecular switches and other molecular electronics devices based on transition metal complexes; therefore, the choice of a suitable stable organometallic unit is of great importance. An example of such a unit is the bis(dicarbollide) complexes of transition metals [3,3’-M(1,2-C₂B₉H₁₁)₂]ⁿ⁻. Control of ligand rotation in such complexes can be achieved by introducing substituents which, on the one hand, stabilize certain rotamers through specific interactions between the ligands and, on the other hand, can participate as Lewis bases in complex formation with external metals, resulting in a change in the rotation angle of the ligands. A series of isomeric methyl sulfide derivatives of cobalt bis(dicarbollide) complexes containing methyl sulfide substituents at boron atoms in different positions of the pentagonal face of the dicarbollide ligands, [8,8’-(MeS)₂-3,3’-Co(1,2-C₂B₉H₁₀)₂]⁻, rac-[4,4’-(MeS)₂-3,3’-Co(1,2-C₂B₉H₁₀)₂]⁻ and meso-[4,7’-(MeS)₂-3,3’-Co(1,2-C₂B₉H₁₀)₂]⁻, were synthesized by the reaction of CoCl₂ with the corresponding methyl sulfide carborane derivatives [10-MeS-7,8-C₂B₉H₁₁]⁻ and [10-MeS-7,8-C₂B₉H₁₁]⁻. In the case of the asymmetrically substituted cobalt bis(dicarbollide) complexes, the corresponding rac- and meso-isomers were successfully separated by column chromatography as the tetrabutylammonium salts. The compounds obtained were studied by ¹H, ¹³C, and ¹¹B NMR spectroscopy, single crystal X-ray diffraction, cyclic voltammetry, controlled potential coulometry and quantum chemical calculations. It was found that in the solid state the transoid- and gauche-conformations of the 8,8’- and 4,4’-isomers are stabilized by four intramolecular CH···S(Me)B hydrogen bonds each (2.683-2.712 Å and 2.709-2.752 Å, respectively), whereas the gauche-conformation of the 4,7’-isomer is stabilized by two intramolecular CH···S hydrogen bonds (2.699-2.711 Å). The existence of the intramolecular CH···S(Me)B hydrogen bonding in solution was supported by ¹H NMR spectroscopy. These data are in good agreement with the results of the quantum chemical calculations. The corresponding iron and nickel complexes were synthesized as well. The reaction of the methyl sulfide derivatives of cobalt bis(dicarbollide) with various labile transition metal complexes results in rupture of the intramolecular hydrogen bonds and complexation of the methyl sulfide groups with the external metal.
This results in the stabilization of a different rotational conformation of the cobalt bis(dicarbollide) and can be used in the design of molecular switches. This work was supported by the Russian Science Foundation (16-13-10331). Keywords: molecular switches, NMR spectroscopy, single crystal X-ray diffraction, transition metal bis(dicarbollide) complexes, quantum chemical calculations
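As an illustration of the geometric screening that lies behind contact distances of this kind, a minimal sketch is given below; the atom labels, Cartesian coordinates and the 3.0 Å cutoff are invented purely for demonstration, and a real analysis would read the coordinates from the refined crystal structure.

```python
import numpy as np

# Hypothetical Cartesian coordinates (angstroms) for a few atoms.
# These values are illustrative only; real coordinates come from the CIF file.
atoms = {
    "H1": np.array([1.20, 0.85, 2.10]),
    "H2": np.array([3.45, 1.10, 0.95]),
    "S1": np.array([2.05, 2.90, 1.30]),
    "S2": np.array([5.80, 0.40, 2.75]),
}

CUTOFF = 3.0  # angstrom; an assumed upper bound for CH...S short contacts


def short_contacts(atoms, cutoff=CUTOFF):
    """Return (H, S, distance) tuples for H...S separations below the cutoff."""
    hydrogens = {k: v for k, v in atoms.items() if k.startswith("H")}
    sulfurs = {k: v for k, v in atoms.items() if k.startswith("S")}
    contacts = []
    for h_name, h_xyz in hydrogens.items():
        for s_name, s_xyz in sulfurs.items():
            d = float(np.linalg.norm(h_xyz - s_xyz))
            if d < cutoff:
                contacts.append((h_name, s_name, round(d, 3)))
    return contacts


print(short_contacts(atoms))
```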
Procedia PDF Downloads 169
183 „Real and Symbolic in Poetics of Multiplied Screens and Images“
Authors: Kristina Horvat Blazinovic
Abstract:
In the context of a work of art, one can talk about the idea-concept-term-intention expressed by the artist by using various forms of repetition (external, material, visible repetition). Such repetitions of elements (images in space or moving visual and sound images in time) suggest a "covert", "latent" ("dressed") repetition – i.e., a "hidden", "latent" term-intention-idea. Repeating in this way reveals a "deeper truth" that the viewer needs to decode and which is hidden "under" the technical manifestation of the multiplied images. It is not only images, sounds, and screens that are repeated - something else is repeated through them as well, even if, in some cases, the very idea of repetition is repeated. This paper examines serial images and single-channel or multi-channel artwork in the field of video/film art and video installations, which in a way implies the concept of repetition and multiplication. Moving or static images and screens (as multi-screens) are repeated in time and space. The categories of the real and the symbolic partly refer to the Lacanian registers of reality, i.e., the Imaginary – Symbolic – Real trinity that represents the orders within which human subjectivity is established. Authors such as Bruce Nauman, VALIE EXPORT, Ragnar Kjartansson, Wolf Vostell, Shirin Neshat, Paul Sharits, Harun Farocki, Dalibor Martinis, Andy Warhol, Douglas Gordon, Bill Viola, Frank Gillette and Ira Schneider, and Marina Abramovic problematize, in different ways, the concept and procedures of multiplication - repetition, but not in the sense of "copying" and "repetition" of reality or the original, but of repeated repetitions of the simulacrum. Referential works of art are often connected by the theme of the traumatic. Repetitions of images and situations are a response to the traumatic (experience) - repetition itself is a symptom of trauma. On the other hand, repeating and multiplying traumatic images results in a new traumatic effect or cancels it. Reflections on repetition as a temporal and spatial phenomenon are in line with the chapters that link philosophical considerations of space and time and the experience of temporality with their manifestation in works of art. The observations about time and the relation of perception and memory follow Henri Bergson and his conception of duration (durée) as "quality of quantity." The video works intended to be displayed as a video loop express the idea of infinite duration ("pure time," according to Bergson). The loop wants to be always present - to fixate itself in time. Wholeness is unrecognizable because the intention is to make the effect infinitely cyclic. Reflections on time and space end with considerations about the occurrence and effects of time and space intervals as places and moments "between" – the points of connection and separation, of continuity and stopping - by reference to the "interval theory" of Soviet filmmaker Dziga Vertov. The scale of opportunities that can be explored in interval mode is wide. Intervals represent the perception of time and space in the form of pauses, interruptions, and breaks (e.g., emotional, dramatic, or rhythmic); they denote emptiness or silence, distance, proximity, interstitial space, or a gap between various states. Keywords: video installation, performance, repetition, multi-screen, real and symbolic, loop, video art, interval, video time
Procedia PDF Downloads 173
182 Experimental Study of the Behavior of Elongated Non-spherical Particles in Wall-Bounded Turbulent Flows
Authors: Manuel Alejandro Taborda Ceballos, Martin Sommerfeld
Abstract:
Transport phenomena and the dispersion of non-spherical particles in turbulent flows are found everywhere in industrial applications and processes. Powder handling, pollution control, pneumatic transport, and particle separation are just some examples where the particles encountered are not only spherical. These types of multiphase flows are wall-bounded and mostly highly turbulent. The particles found in these processes are rarely spherical but may have various shapes (e.g., fibers and rods). Although research related to the behavior of regular non-spherical particles in turbulent flows has been carried out for many years, it is still necessary to refine models, especially near walls, where the fiber-wall interaction completely changes their behavior. Imaging-based experimental studies on dispersed particle-laden flows have been applied for many decades for detailed experimental analysis. These techniques have the advantage that they provide field information in two or three dimensions, but they have a lower temporal resolution compared to point-wise techniques such as PDA (phase-Doppler anemometry) and derivations therefrom. The imaging techniques applied in dispersed two-phase flows are extensions of classical PIV (particle image velocimetry) and PTV (particle tracking velocimetry), and the main emphasis was the simultaneous measurement of the velocity fields of both phases. In a similar way, such data should also provide adequate information for validating the proposed models. Available experimental studies on the behavior of non-spherical particles are uncommon and mostly based on planar light-sheet measurements. Especially for elongated non-spherical particles, however, three-dimensional measurements are needed to fully describe their motion and to provide sufficient information for the validation of numerical computations. To provide further detailed experimental results allowing a validation of numerical calculations of non-spherical particle dispersion in turbulent flows, a water channel test facility was built around a horizontal closed water channel. Into this horizontal main flow, a small cross-jet laden with fiber-like particles was injected, which was also solely driven by gravity. The dispersion of the fibers was measured by applying imaging techniques based on an LED array for backlighting and high-speed cameras. For obtaining the fluid velocity fields, almost neutrally buoyant tracer particles were used. The discrimination between tracers and fibers was done based on image size, which was also the basis for determining fiber orientation with respect to the inertial coordinate system. The synchronous measurement of fluid velocity and fiber properties also allows the collection of statistics of fiber orientation, the velocity fields of tracers and fibers, the angular velocity of the fibers, and the orientation between the fiber and the instantaneous relative velocity. Consequently, an experimental study of the behavior of elongated non-spherical particles in wall-bounded turbulent flows was achieved. A comprehensive analysis was developed, especially for the near-wall region, where hydrodynamic wall interaction effects (e.g., collision or lubrication) and abrupt changes of particle rotational velocity exist. This allowed the behavior of non-spherical particles to be predicted numerically afterwards within the frame of the Euler/Lagrange approach, where the particles are treated as "point particles". Keywords: crossflow, non-spherical particles, particle tracking velocimetry, PIV
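As an illustration of the image-size discrimination and ellipse-based orientation estimate described above, a minimal sketch using scikit-image is given below; the area threshold and the synthetic frame are assumptions for demonstration, not the calibrated values of the experiment.

```python
import numpy as np
from skimage.measure import label, regionprops


def classify_and_orient(binary_image, area_threshold=50):
    """Separate small tracer blobs from elongated fiber blobs by pixel area,
    and report each fiber's in-plane orientation from an ellipse fit.

    area_threshold (pixels) is an assumed value; in practice it is calibrated
    from the known tracer and fiber image sizes.
    """
    tracers, fibers = [], []
    for region in regionprops(label(binary_image)):
        if region.area < area_threshold:
            tracers.append(region.centroid)
        else:
            # Orientation of the fitted ellipse's major axis, in degrees,
            # measured relative to the image row axis (scikit-image convention).
            angle_deg = np.degrees(region.orientation)
            fibers.append((region.centroid, angle_deg))
    return tracers, fibers


# Minimal usage example on a synthetic frame: one small dot and one elongated streak.
frame = np.zeros((64, 64), dtype=bool)
frame[10:13, 10:13] = True   # tracer-like blob (9 px)
frame[30:33, 20:50] = True   # fiber-like streak (90 px)
tracers, fibers = classify_and_orient(frame)
print(len(tracers), "tracers;", len(fibers), "fibers:", fibers)
```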
Procedia PDF Downloads 85
181 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection
Authors: S. Delgado, C. Cerrada, R. S. Gómez
Abstract:
This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges of voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times. These repeated voxels incur costly memory operations while contributing no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing the triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization. Keywords: voxelization, GPU acceleration, computer graphics, compute shaders
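The general idea of equidistant scan-line traversal, and why gaps can appear between samples, can be illustrated with a simplified CPU sketch; this is not the paper's GLSL compute shader, and the crude midpoint fill shown here only stands in for the actual Gap Detection technique. The voxel size, step length and test triangle are assumptions.

```python
import numpy as np


def voxelize_triangle(v0, v1, v2, voxel_size=1.0, step=0.5):
    """Simplified CPU sketch of scan-line surface voxelization of one triangle.

    Parallel, equidistant "scan-lines" are drawn from edge (v0->v1) to edge
    (v0->v2); each line is sampled at sub-voxel steps and the touched voxels
    are collected in a set, so each voxel is stored only once.
    """
    v0, v1, v2 = map(np.asarray, ((v0, v1, v2)))
    voxels = set()
    longest = max(np.linalg.norm(v1 - v0), np.linalg.norm(v2 - v0))
    n_lines = max(2, int(np.ceil(longest / (step * voxel_size))) + 1)
    for i in range(n_lines):                      # equidistant scan-lines
        t = i / (n_lines - 1)
        a = v0 + t * (v1 - v0)                    # point on first edge
        b = v0 + t * (v2 - v0)                    # point on second edge
        n_samples = max(2, int(np.ceil(np.linalg.norm(b - a) / (step * voxel_size))) + 1)
        prev = last_p = None
        for j in range(n_samples):                # samples along the scan-line
            s = j / (n_samples - 1)
            p = a + s * (b - a)
            cell = tuple(np.floor(p / voxel_size).astype(int))
            # Crude stand-in for gap handling: if the new cell is not 26-adjacent
            # to the previous one, also record the midpoint cell between them.
            if prev is not None and max(abs(c - q) for c, q in zip(cell, prev)) > 1:
                voxels.add(tuple(np.floor(((p + last_p) / 2) / voxel_size).astype(int)))
            voxels.add(cell)
            prev, last_p = cell, p
    return voxels


print(len(voxelize_triangle((0, 0, 0), (8, 1, 0), (2, 7, 3))), "voxels marked")
```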
Procedia PDF Downloads 70
180 Bridging Educational Research and Policymaking: The Development of Educational Think Tank in China
Authors: Yumei Han, Ling Li, Naiqing Song, Xiaoping Yang, Yuping Han
Abstract:
Educational think tanks are generally regarded as a significant part of a nation's soft power to promote the scientific and democratic level of educational policy making, and they play a critical role in bridging educational research in higher institutions and educational policy making. This study explores the concept, functions and significance of educational think tanks in China, and conceptualizes a three-dimensional framework to analyze the approaches for transforming research-based higher institutions into effective educational think tanks that serve educational policy making nationwide. Since 2014, the Ministry of Education P.R. China has been promoting the strategy of developing a new type of educational think tank in higher institutions, and such a strategy has been put into the agenda for the 13th Five Year Plan for National Education Development released in 2017. In this context, a growing number of scholars have conducted studies to put forth strategies for promoting the development and transformation of new educational think tanks to serve the educational policy making process. Based on literature synthesis, policy text analysis, and analysis of theories about the policy making process and the relationship between educational research and policy making, this study constructed a three-dimensional conceptual framework to address the following questions: (a) what are the new features of educational think tanks in the new era compared with traditional think tanks, (b) what are the functional objectives of the new educational think tanks, (c) what are the organizational patterns and mechanisms of the new educational think tanks, and (d) through what approaches can traditional research-based higher institutions be developed or transformed into think tanks that effectively serve the educational policy making process. The authors adopted a case study approach on five influential education policy study centers affiliated with top higher institutions in China and applied the three-dimensional conceptual framework to analyze their functional objectives and organizational patterns, as well as the academic pathways that researchers use to contribute to the development of think tanks serving the education policy making process. Data were mainly collected through interviews with center administrators, leading researchers and academic leaders in the institutions. Findings show that: (a) higher-institution-based think tanks mainly function for multi-level objectives, providing evidence, theoretical foundations, strategies, or evaluation feedback for critical problem solving or policy making at the national, provincial, and city/county levels; (b) higher-institution-based think tanks organize various types of research programs over different time spans to serve different phases of policy planning, decision making, and policy implementation; (c) in order to transform research-based higher institutions into educational think tanks, the institutions must promote a paradigm shift toward issue-oriented field studies, large-scale data mining and analysis, empirical studies, and trans-disciplinary research collaborations; and (d) the five cases showed distinguished features in their ways of constructing think tanks, and yet they also exposed obstacles and challenges such as the independence of the think tanks, the discourse shift from academic papers to consultancy reports for policy makers, weakness in empirical research methods, and lack of experience in trans-disciplinary collaboration.
The authors finally put forth implications for think tank construction in China and abroad. Keywords: education policy-making, educational research, educational think tank, higher institution
Procedia PDF Downloads 157
179 Post-bladder Catheter Infection
Authors: Mahla Azimi
Abstract:
Introduction: Post-bladder catheter infection is a common and significant healthcare-associated infection that affects individuals with indwelling urinary catheters. These infections can lead to various complications, including urinary tract infections (UTIs), bacteremia, sepsis, and increased morbidity and mortality rates. This article aims to provide a comprehensive review of post-bladder catheter infections, including their causes, risk factors, clinical presentation, diagnosis, treatment options, and preventive measures. Causes and Risk Factors: Post-bladder catheter infections primarily occur due to the colonization of microorganisms on the surface of the urinary catheter. The most common pathogens involved are Escherichia coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, and Enterococcus species. Several risk factors contribute to the development of these infections, such as prolonged catheterization duration, improper insertion technique, poor hygiene practices during catheter care, and compromised immune system function in patients with underlying conditions or on immunosuppressive therapy. Clinical Presentation: Patients with post-bladder catheter infections may present with symptoms such as fever, chills, malaise, suprapubic pain or tenderness, and cloudy or foul-smelling urine. In severe cases, or when left untreated for an extended period of time, patients may develop more severe symptoms like hematuria or signs of systemic infection. Diagnosis: The diagnosis of post-bladder catheter infection involves a combination of clinical evaluation and laboratory investigations. Urinalysis is crucial in identifying pyuria (presence of white blood cells) and bacteriuria (presence of bacteria). A urine culture is performed to identify the causative organism(s) and determine its antibiotic susceptibility profile. Treatment Options: Prompt initiation of appropriate antibiotic therapy is essential in managing post-bladder catheter infections. Empirical treatment should cover common pathogens until culture results are available. The choice of antibiotics should be guided by local antibiogram data to ensure optimal therapy. In some cases, catheter removal may be necessary, especially if the infection is recurrent or associated with severe complications. Preventive Measures: Prevention plays a vital role in reducing the incidence of post-bladder catheter infections. Strategies include proper hand hygiene, aseptic technique during catheter insertion and care, regular catheter maintenance, and timely removal of unnecessary catheters. Healthcare professionals should also promote patient education regarding self-care practices and signs of infection. Conclusion: Post-bladder catheter infections are a significant healthcare concern that can lead to severe complications and increased healthcare costs. Early recognition, appropriate diagnosis, and prompt treatment are crucial in managing these infections effectively. Implementing preventive measures can significantly reduce the incidence of post-bladder catheter infections and improve patient outcomes. Further research is needed to explore novel strategies for prevention and management in this field. Keywords: post-bladder catheter infection, urinary tract infection, bacteriuria, indwelling urinary catheters, prevention
Procedia PDF Downloads 80
178 Challenges and Lessons of Mentoring Processes for Novice Principals: An Exploratory Case Study of Induction Programs in Chile
Authors: Carolina Cuéllar, Paz González
Abstract:
Research has shown that school leadership has a significant indirect effect on students' achievements. In Chile, evidence has also revealed that this impact is stronger in vulnerable schools. With the aim of strengthening school leadership, public policy has taken up the challenge of enhancing the capabilities of novice principals through the implementation of induction programs, which include a mentoring component, entrusting the task of delivering these programs to universities. The importance of using mentoring or coaching models in the preparation of novice school leaders has been emphasized in the international literature. Thus, it can be affirmed that building leadership capacity through partnership is crucial to facilitating the cognitive and affective support required in the initial phase of the principal career, gaining role clarification and socialization in context, and stimulating reflective leadership practice, among others. In Chile, mentoring is a recent phenomenon in the field of school leadership, and it is even newer in the preparation of new principals who work in public schools. This study, funded by the Chilean Ministry of Education, sought to explore the challenges and lessons arising from the design and implementation of the mentoring processes which are part of the induction programs, according to the perception of the different actors involved: ministerial agents, university coordinators, mentors and novice principals. The investigation used a qualitative design, based on a study of three cases (three induction programs). The sources of information were 46 semi-structured interviews, applied at two moments (at the beginning and end of mentoring). The content analysis technique was employed. Data focused on the uniqueness of each case and the commonalities within the cases. Five main challenges and lessons emerged in the design and implementation of mentoring within the induction programs for new principals from Chilean public schools. They comprised the need for (i) developing a shared conceptual framework on mentoring among the institutions and actors involved, which helps align the expectations for the mentoring component within the induction programs, along with assisting in establishing a theory of action of mentoring that is relevant to the public school context; (ii) recognizing, through actions and decisions at different levels, that the role of a mentor differs from the role of a principal, which challenges the idea that an effective principal will always be an effective mentor; (iii) improving mentors' selection and preparation processes through the definition of common guiding criteria to ensure that a mentor takes responsibility for developing the critical judgment of novice principals, which implies not limiting the mentor's actions to assisting in compliance with prescriptive practices and standards; (iv) generating common evaluative models with goals, instruments and indicators consistent with the characteristics of mentoring processes, which helps to assess expected results and impact; and (v) including the design of a mentoring structure as an outcome of the induction programs, which helps sustain mentoring within schools as a collective professional development practice. Results showcased interwoven elements that entail continuous negotiations at different levels. Taking action will contribute to policy efforts aimed at professionalizing the leadership role in public schools. Keywords: induction programs, mentoring, novice principals, school leadership preparation
Procedia PDF Downloads 125
177 Implementation of Smart Card Automatic Fare Collection Technology in Small Transit Agencies for Standards Development
Authors: Walter E. Allen, Robert D. Murray
Abstract:
Many large transit agencies have adopted RFID technology and electronic automatic fare collection (AFC) or smart card systems, but small and rural agencies remain tied to obsolete manual, cash-based fare collection. Small countries or transit agencies can benefit from the implementation of smart card AFC technology, with the promise of increased passenger convenience, added passenger satisfaction and improved agency efficiency. For transit agencies, it reduces revenue loss and improves passenger flow and bus stop data. For countries, further implementation into security, distribution of social services or currency transactions can provide greater benefits. However, small countries or transit agencies cannot afford the expensive proprietary smart card solutions typically offered by the major system suppliers. Deployment of the Contactless Fare Media System (CFMS) Standard eliminates the proprietary solution, ultimately lowering the cost of implementation. Acumen Building Enterprise, Inc. chose the Yuma County Intergovernmental Public Transportation Authority's (YCIPTA) existing proprietary YCAT smart card system to implement CFMS. The revised system enables the purchase of fare product online with prepaid debit or credit cards using the Payment Gateway Processor. Open and interoperable smart card standards for transit have been developed. During the 90-day Pilot Operation conducted, the transit agency gathered the data from the bus AcuFare 200 Card Reader, loaded (copied) the data to a USB thumb drive and uploaded the data to the Acumen Host Processing Center for consolidation into the transit agency master data file. The transition from the existing proprietary smart card data format to the new CFMS smart card data format was transparent to the transit agency cardholders. It was proven that open standards and interoperable design can work and reduce both implementation and operational costs for small transit agencies or countries looking to expand smart card technology. Acumen was able to avoid the implementation of the Payment Card Industry (PCI) Data Security Standard (DSS), which is expensive to develop and costly to operate on a continuing basis. Due to the substantial additional complexities of implementation and the variety of options presented to the transit agency cardholder, Acumen chose to implement only the Directed Autoload. To improve the implementation efficiency and the results of a similar undertaking, it should be considered that some passengers lack credit cards and are averse to technology. There are more than 1,300 small and rural agencies in the United States. This grows tenfold when considering small countries or rural locations throughout Latin America and the world. Acumen is evaluating additional countries, sites or transit agencies that can benefit from smart card systems. Frequently, payment card systems require extensive security procedures for implementation. The Project demonstrated the ability to purchase fare value, rides and passes with credit cards on the internet at a reasonable cost without highly complex security requirements. Keywords: automatic fare collection, near field communication, small transit agencies, smart cards
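A toy sketch of the consolidation step described in the pilot (per-bus card-reader dumps merged into a master data file) is given below; the file names, record layout and CSV format are assumptions for illustration and do not reflect the actual CFMS or YCAT data formats.

```python
import csv
from pathlib import Path


def consolidate_ride_data(reader_dumps, master_file="master_fare_data.csv"):
    """Merge per-bus card-reader dump files into a single master file.

    The record layout (card_id, route, timestamp, fare) is a hypothetical
    stand-in for the real smart card data format.
    """
    rows = []
    for dump in reader_dumps:
        with open(dump, newline="") as f:
            rows.extend(csv.DictReader(f))
    rows.sort(key=lambda r: r["timestamp"])
    with open(master_file, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["card_id", "route", "timestamp", "fare"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)


# Usage (assuming the per-bus dump files exist on the copied USB drive):
# n = consolidate_ride_data(sorted(Path("usb_drive").glob("acufare_*.csv")))
# print(n, "transactions consolidated")
```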
Procedia PDF Downloads 282
176 The Role of the New Silk Road (One Belt, One Road Initiative) in Connecting the Free Zones of Iran and Turkey: A Case Study of the Free Zones of Sarakhs and Maku to Anatolia and Europe
Authors: Morteza Ghourchi, Meraj Jafari, Atena Soheilazizi
Abstract:
Today, with the globalization of communications and the connection of countries within the framework of the global economy, free zones play the most important role as the engine of global economic development and the globalization of countries. In this regard, corridors have a fundamental role in physically linking countries and free zones with each other. One of these corridors is the New Silk Road corridor (One Belt, One Road initiative), which is being built by China to connect with European countries. In connecting this corridor to European countries, Iran and Turkey are among the countries that play an important role in linking China to Europe through this corridor. The New Silk Road corridor, by connecting Iran's free zones (Sarakhs and Maku) and Turkey's free zones (Anatolia and Europe), can provide the best opportunity for expanding economic cooperation and regional development between Iran and Turkey. It can also provide economic links between Iran and Turkey and the Central Asian countries, especially the port of Khorgos. On the other hand, it can expand Iran-Turkey economic relations with Europe more than ever before within a vast economic network. The research method was descriptive-analytical, using library resources, documents of Iranian free zones, and the Internet. In an interview with Fars News Agency, Mohammad Reza Kalaei, CEO of Sarakhs Free Zone, said that the main goal of Sarakhs Special Economic Zone is to connect Iran with the Middle East and create a transit corridor towards East Asian countries, including Turkey. Also, according to an interview with Hussein Gharousi, CEO of Maku Free Zone, the importance of this region lies in the fact that Maku Free Zone, owing to its geographical location and position on the China-Europe trade route (the East-West corridor), its status as the closest point to the European Union via railway and transit routes, and its proximity to the Eurasian countries, offers an ideal opportunity for industrial and technological companies. Creating a transit corridor towards East Asian countries, including Turkey, is one of the goals of this project. The free zones of Iran and Turkey can sign an agreement within the framework of the New Silk Road to expand joint investments and economic cooperation towards regional convergence. The purpose of this research is to develop economic links between Iranian and Turkish free zones along the New Silk Road, which will lead to the expansion and development of regional cooperation between the two countries within the framework of neighborhood policies.
The findings of this research include the development of economic diplomacy between the Secretariat of the Supreme Council of Free Zones of Iran and the General Directorate of Free Zones of Turkey; the agreement to expand cooperation between the free zones of Sarakhs, Maku, Anatolia, and Europe; holding biennial conferences between Iranian free zones along the New Silk Road and Turkish free zones; creating a joint investment fund between Iran and Turkey in the field of developing free zones along the Silk Road; helping to attract tourism between Iranian and Turkish free zones located along the New Silk Road; improving transit infrastructure and transportation to better connect Iranian free zones to Turkish free zones; communicating with China; and creating joint collaborations between China's dry ports and free zones and the Iranian and Turkish free zones. Keywords: network economy, new silk road (one belt, one road initiative), free zones (Sarakhs, Maku, Anatolia, Europe), regional development, neighborhood policies
Procedia PDF Downloads 64
175 Defining a Framework for Holistic Life Cycle Assessment of Building Components by Considering Parameters Such as Circularity, Material Health, Biodiversity, Pollution Control, Cost, Social Impacts, and Uncertainty
Authors: Naomi Grigoryan, Alexandros Loutsioli Daskalakis, Anna Elisse Uy, Yihe Huang, Aude Laurent (Webanck)
Abstract:
In response to the building and construction sectors accounting for a third of all energy demand and emissions, the European Union has placed new laws and regulations in the construction sector that emphasize material circularity, energy efficiency, biodiversity, and social impact. Existing design tools assess sustainability in early-stage design for products or buildings; however, there is no standardized methodology for measuring the circularity performance of building components. Existing assessment methods for building components focus primarily on carbon footprint but lack the comprehensive analysis required to design for circularity. The research conducted in this paper covers the parameters needed to assess sustainability in the design process of architectural products such as doors, windows, and facades. It maps a framework for a tool that assists designers with real-time sustainability metrics. Considering the life cycle of building components such as facades, windows, and doors involves the life cycle stages applied to product design and many of the methods used in the life cycle analysis of buildings. The current industry standards of sustainability assessment for metal building components follow cradle-to-grave life cycle assessment (LCA), track Global Warming Potential (GWP), and document the parameters used for an Environmental Product Declaration (EPD). Developed by the Ellen MacArthur Foundation, the Material Circularity Indicator (MCI) is a methodology utilizing the data from LCA and EPDs to rate circularity, with a "value between 0 and 1 where higher values indicate a higher circularity". Expanding on the MCI with additional indicators such as the Water Circularity Index (WCI), the Energy Circularity Index (ECI), the Social Circularity Index (SCI), and Life Cycle Economic Value (EV), and calculating biodiversity risk and uncertainty, the assessment methodology of an architectural product's impact can be targeted more specifically based on product requirements, performance, and lifespan. Broadening the scope of LCA calculation for products to incorporate aspects of building design allows product designers to account for the disassembly of architectural components. For example, the Material Circularity Indicator for architectural products such as windows and facades is typically low due to the impact of glass, as 70% of glass ends up in landfills due to damage in the disassembly process. The low MCI can be combatted by expanding beyond cradle-to-grave assessment and focusing the design process on disassembly, recycling, and repurposing with the help of real-time assessment tools. Design for Disassembly and Urban Mining have been integrated within the construction field on small scales as project-based exercises, not addressing the entire supply chain of architectural products. By adopting more comprehensive sustainability metrics and incorporating uncertainty calculations, the sustainability of building components can be more accurately assessed with decarbonization and disassembly in mind, addressing the large-scale commercial markets within construction, some of the most significant contributors to climate change. Keywords: architectural products, early-stage design, life cycle assessment, material circularity indicator
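A simplified sketch of the Material Circularity Indicator calculation referenced above is given below; it follows the Ellen MacArthur Foundation formulation in reduced form (omitting the recycling-process waste terms of the full methodology), and the facade-module figures in the usage example are hypothetical.

```python
def material_circularity_indicator(mass, virgin_mass, waste_mass,
                                   lifetime_ratio=1.0, intensity_ratio=1.0):
    """Simplified sketch of the Ellen MacArthur Foundation MCI.

    mass            : total product mass M
    virgin_mass     : mass of virgin feedstock V
    waste_mass      : mass going to landfill or energy recovery W
    lifetime_ratio  : L / L_avg of the utility factor X
    intensity_ratio : U / U_avg of the utility factor X

    This omits the recycling-process waste terms (WF, WC) of the published
    methodology, so it is illustrative only.
    """
    lfi = (virgin_mass + waste_mass) / (2.0 * mass)   # Linear Flow Index
    x = lifetime_ratio * intensity_ratio              # utility X
    f_x = 0.9 / x                                     # utility factor F(X)
    return max(0.0, 1.0 - lfi * f_x)


# Hypothetical facade module: 60% recycled content, 70% of end-of-life mass
# recovered, and a service life 1.5x the industry average.
mci = material_circularity_indicator(mass=100.0, virgin_mass=40.0,
                                     waste_mass=30.0, lifetime_ratio=1.5)
print(f"MCI = {mci:.2f}")   # higher values indicate higher circularity
```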
Procedia PDF Downloads 87
174 Temporal and Spacial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods
Authors: Dario Milani, Guido Morgenthal
Abstract:
The fluid dynamic computation of wind-caused forces on bluff bodies, e.g., light flexible civil structures or airplane wings approaching the ground at high incidence, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the usage of small-scale devices such as guide vanes in bridge design to control these effects. The focus of this paper is on the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One of the solution methods for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected due to the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion, compact discretization as the vorticity is strongly localized, implicit accounting for the free-space boundary conditions typical for this class of FSI problems, and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing the computational cost against the achievable accuracy. In the classical VPM method, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails or fairings. For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization might become prohibitively expensive to compute even for moderate numbers of particles. It is possible to reduce this cost either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution without substantially increasing the global computational cost by computing a correction of the particle-particle interaction in some regions of interest. In this paper, different strategies are presented in order to extend the conventional VPM method to reduce the computational cost whilst resolving the required details of the flow. The methods include temporal sub-stepping to increase the accuracy of the particle convection in certain regions, as well as dynamically re-discretizing the particle map to control the global and local number of particles. Finally, these methods are applied to a test case, and the improvements in the efficiency as well as the accuracy of the proposed extension to the method are presented. The important benefits of the combination of these methods in terms of accuracy and computational cost are thus presented, along with their relevant applications. Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method
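The conventional particle-particle interaction whose O(Np²) cost motivates these adaptations can be sketched as a direct 2D Biot-Savart summation; the core radius and the test values below are illustrative and not taken from the paper.

```python
import numpy as np


def induced_velocity(positions, circulations, core_radius=0.05):
    """Direct O(N^2) Biot-Savart summation for 2D vortex particles.

    Each particle carries a circulation Gamma; a small core radius
    regularizes the 1/r singularity (a simple algebraic smoothing).
    """
    n = len(circulations)
    velocities = np.zeros_like(positions)
    for i in range(n):
        r = positions[i] - positions           # vectors from all particles to i
        r2 = np.sum(r * r, axis=1) + core_radius ** 2
        # 2D Biot-Savart kernel: u = Gamma / (2*pi) * (-ry, rx) / |r|^2
        kernel = np.column_stack((-r[:, 1], r[:, 0])) / r2[:, None]
        kernel[i] = 0.0                        # exclude self-induction
        velocities[i] = np.sum(circulations[:, None] * kernel, axis=0) / (2 * np.pi)
    return velocities


# Usage: a counter-rotating vortex pair translates as a unit.
pos = np.array([[0.0, 0.0], [0.0, 1.0]])
gam = np.array([1.0, -1.0])
print(induced_velocity(pos, gam))
```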
Procedia PDF Downloads 261
173 International Broadcasting of Public Diplomacy in the Era of Social Media in Nigeria
Authors: Henry Okechukwu Onyeiwu
Abstract:
In today's digital age in Nigeria, the landscape of public diplomacy has been significantly altered by the rise of social media platforms like YouTube, Facebook, Twitter, and Instagram. In recent years, social media platforms have emerged as powerful tools for public diplomacy, transforming how countries communicate with both domestic and global audiences. International broadcasting as a tool of public diplomacy has undergone a significant transformation. Traditional methods of state-run media and controlled broadcasting have evolved to incorporate the dynamic, interactive, and decentralized nature of digital platforms. This study considers how the Nigerian government engages in international broadcasting of public diplomacy and how social media influences the broadcasting of public diplomacy, focusing on the advantages and disadvantages of controlling media outlets for diplomatic purposes; it also covers the changing nature of global communication in this digital era. As countries navigate the complexities of international relations, the effectiveness of controlled media in shaping public perception and engagement raises significant questions worth exploring. The vast amount of content available can make it challenging to capture and retain audience attention. The ease of spreading false information on social media requires international broadcasters to maintain credibility and counteract misleading narratives. Addressing these challenges requires comprehensive research that integrates digital communication tools, cultural sensitivity, cybersecurity measures and ongoing evaluation to enhance Nigeria's international broadcasting of public diplomacy. This study employed a mixed-methods approach, combining qualitative and quantitative research methods. A content analysis of Nigeria's international broadcasting content was conducted to assess its themes, narratives, and engagement strategies. Additionally, surveys and interviews with communications professionals, diplomats, and social media users were carried out to gather insights on the perceptions and effectiveness of public diplomacy initiatives. The study highlights some of the present trends in technology and the international environment in which public diplomacy must work, and shows how the past can illuminate the road for those navigating this new world. The rise of the social network creates more opportunities than it closes for public diplomacy. This evolution highlights the increasing importance of engagement, mutual understanding, and cooperation in international relations. By adopting a more inclusive and participatory approach, public diplomacy can more effectively address global challenges and build stronger, more resilient relationships between nations. As Nigeria navigates the complexities of its international relations, this study provides a vital examination of how it can better utilize the dual platforms of international broadcasting and social media in its public diplomacy efforts. The outcome will bear significance not only for Nigeria but also for other nations grappling with similar challenges in the digital age. As social media continues to play a crucial role in public diplomacy, understanding the dynamics of controlled media outlets becomes ever more critical.
This study sheds light on the advantages and disadvantages of such control, ultimately contributing valuable insights for practitioners in the field of diplomacy as they adapt to the rapidly changing communication landscape. Keywords: international broadcasting, public diplomacy, social media, international relations, polities
Procedia PDF Downloads 28
172 Species Profiling of White Grub Beetles and Evaluation of Pre and Post Sown Application of Insecticides against White Grub Infesting Soybean
Authors: Ajay Kumar Pandey, Mayank Kumar
Abstract:
White grub (Coleoptera: Scarabaeidae) is a major destructive pest in the western Himalayan region of Uttarakhand. Beetles feed on apple, apricot, plum, walnut, etc. during the night, while second and third instar grubs feed on the live roots of cultivated as well as non-cultivated crops. Collection and identification of scarab beetles through a light trap was carried out at the Crop Research Centre, Govind Ballab Pant University, Pantnagar, Udham Singh Nagar (Uttarakhand) during 2018. Field trials were also conducted in 2018 to evaluate pre- and post-sown application of different insecticides against the white grub infesting soybean. The insecticides Carbofuran 3 Granule (G) (750 g a.i./ha), Clothianidin 50 Water Dispersal Granule (WG) (120 g a.i./ha), Fipronil 0.3 G (50 g a.i./ha), Thiamethoxam 25 WG (80 g a.i./ha), Imidacloprid 70 WG (300 g a.i./ha), Chlorantraniliprole 0.4% G (100 g a.i./ha) and a mixture of Fipronil 40% and Imidacloprid 40% WG (300 g a.i./ha) were applied at the time of sowing in the pre-sown experiment, while the same dosages of insecticides were applied to the standing soybean crop (first fortnight of July). Cumulative plant mortality data were recorded at 20-, 40- and 60-day intervals and compared with the untreated control. A total of 23 species of white grub beetles were recorded on the light trap, and Holotrichia serrata Fabricious (Coleoptera: Melolonthinae) was found to be the predominant species, with a 20.6% relative abundance of the total light trap catch (i.e., 1316 beetles), followed by Phyllognathus sp. (14.6% relative abundance). H. rosettae and Heteronychus lioderus occupied the third and fourth ranks with 11.85% and 9.65% relative abundance, respectively. The emergence of beetles of the predominant species started from 15 March 2018. In April, the average light trap catch was 382 white grub beetles; however, the peak emergence of most of the white grub species was observed from June to July 2018, i.e., 336 beetles in June followed by 303 beetles in July. On the basis of the emergence pattern of white grub beetles, it may be concluded that the Peak Emergence Period (PEP) for the beetles of H. serrata was the second fortnight of April, for a total period of 15 days. In May, June and July, a relatively low population of H. serrata was observed. A decreasing trend in the light trap catch was observed and continued until September during the study. Not a single beetle of H. serrata was observed on the light trap from September onwards. The cumulative plant mortality data in both experiments revealed that all the insecticidal treatments were significantly superior, protection-wise (6.49-16.82% cumulative plant mortality), over the untreated control, where the highest plant mortality was 17.28 to 39.65% during the study. The mixture of Fipronil 40% and Imidacloprid 40% WG applied at the rate of 300 g a.i. per ha proved to be the most effective, having the lowest plant mortality, i.e., 9.29 and 10.94% in the pre- and post-sown crop, followed by Clothianidin 50 WG (120 g a.i. per ha), where the plant mortality was 10.57 and 11.93% in the pre- and post-sown treatments, respectively. Both treatments were found to be significantly at par with each other. Production-wise, all the insecticidal treatments were found statistically superior (15.00-24.66 q per ha grain yields) over the untreated control, where the grain yield was 8.25 and 9.13 q per ha. The treatment of Fipronil 40% + Imidacloprid 40% WG applied at the rate of 300 g a.i. per ha proved to be the most effective and significantly superior to Imidacloprid 70 WG applied at the rate of 300 g a.i. per ha. Keywords: bio efficacy, insecticide, soybean, white grub
Procedia PDF Downloads 127
171 Baseline Data for Insecticide Resistance Monitoring in Tobacco Caterpillar, Spodoptera litura (Fabricius) (Lepidoptera: Noctuidae) on Cole Crops
Authors: Prabhjot Kaur, B.K. Kang, Balwinder Singh
Abstract:
The tobacco caterpillar, Spodoptera litura (Fabricius) (Lepidoptera: Noctuidae), is an agriculturally important pest species. S. litura has a wide host range of approximately 150 recorded plant species worldwide. In Punjab, this pest attains sporadic status primarily on cauliflower, Brassica oleracea (L.). This pest destroys vegetable crops and particularly prefers the family Cruciferae. However, it is also observed feeding on other crops such as arbi, Colocasia esculenta (L.), mung bean, Vigna radiata (L.), sunflower, Helianthus annuus (L.), cotton, Gossypium hirsutum (L.), castor, Ricinus communis (L.), etc. Larvae of this pest completely devour the leaves of infested plants, resulting in huge crop losses ranging from 50 to 70 per cent. Indiscriminate and continuous use of insecticides has contributed to the development of insecticide resistance in insects and has caused environmental degradation as well. Moreover, baseline data regarding the toxicity of the newer insecticides would help in understanding the level of resistance developed in this pest, and any possible cross-resistance therein could be assessed in advance. Therefore, the present studies on the development of resistance in S. litura against four new chemistry insecticides (emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad) were carried out in the Toxicology Laboratory, Department of Entomology, Punjab Agricultural University, Ludhiana, Punjab, India, during the year 2011-12. Various stages of S. litura (eggs, larvae) were collected from four different locations (Malerkotla, Hoshiarpur, Amritsar and Samrala) of Punjab. Resistance develops in the third instars of lepidopterous pests; therefore, larval bioassays were conducted on thirty third-instar larvae to estimate the response of field populations of S. litura under laboratory conditions at 25±2°C and 65±5 per cent relative humidity. The leaf dip bioassay technique with diluted insecticide formulations, as recommended by the Insecticide Resistance Action Committee (IRAC), was performed in the laboratory with seven to ten treatments, depending on the insecticide class. LC50 values were estimated by probit analysis after correction for control mortality and were used to calculate the resistance ratios (RR). The LC50 values worked out for emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad are 0.081, 0.088, 0.380 and 4.00 parts per million (ppm) against the pest population collected from Malerkotla; 0.051, 0.060, 0.250 and 3.00 ppm for Amritsar; 0.002, 0.001, 0.0076 and 0.10 ppm for Samrala; and 0.000014, 0.00001, 0.00056 and 0.003 ppm against the pest population of Hoshiarpur, respectively. The LC50 values for populations collected from these four locations were in the order Malerkotla > Amritsar > Samrala > Hoshiarpur for the insecticides (emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad) tested. Based on the LC50 values obtained, emamectin benzoate (0.000014 ppm) was found to be the most toxic among all the tested populations, followed by chlorantraniliprole (0.00001 ppm), indoxacarb (0.00056 ppm) and spinosad (0.003 ppm), respectively. The pairwise correlation coefficients of LC50 values indicated that there was a lack of cross-resistance for emamectin benzoate, chlorantraniliprole, spinosad and indoxacarb in populations of S. litura from Punjab. These insecticides may prove to be promising substitutes for the effective control of insecticide-resistant populations of S. litura in Punjab state, India. Keywords: Spodoptera litura, insecticides, toxicity, resistance
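A minimal sketch of the probit-analysis workflow used to obtain LC50 values and resistance ratios is given below; the dose-mortality counts and the susceptible-reference LC50 are invented for illustration (control-mortality correction is assumed to have been applied already), and statsmodels is used as one possible implementation.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical dose-mortality data for one insecticide (ppm); not from the study.
dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])
n_treated = np.full(dose.shape, 30)           # 30 third-instar larvae per dose
n_dead = np.array([2, 6, 13, 21, 27, 30])     # assumed counts, illustration only

# Probit regression of mortality on log10(dose), as in classical probit analysis.
X = sm.add_constant(np.log10(dose))
model = sm.GLM(
    np.column_stack([n_dead, n_treated - n_dead]),
    X,
    family=sm.families.Binomial(link=sm.families.links.Probit()),
)
fit = model.fit()
intercept, slope = fit.params

# At the LC50, the linear predictor is zero (probit(0.5) = 0).
lc50 = 10 ** (-intercept / slope)
print(f"LC50 = {lc50:.3f} ppm")

# A resistance ratio (RR) compares a field population's LC50 with that of a
# susceptible reference population (the reference value below is assumed).
lc50_susceptible = 0.05
print(f"RR = {lc50 / lc50_susceptible:.1f}")
```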
Procedia PDF Downloads 342
170 The Study of Fine and Nanoscale Gold in the Ores of Primary Deposits and Gold-Bearing Placers of Kazakhstan
Authors: Omarova Gulnara, Assubayeva Saltanat, Tugambay Symbat, Bulegenov Kanat
Abstract:
The article discusses the problem of developing a methodology for studying thin and nanoscale gold in the ores of primary deposits and in placers, which will allow us to develop schemes for revealing dispersed gold inclusions and thus improve its recovery rate to increase the gold reserves of the Republic of Kazakhstan. The type of gold studied is characterized by a number of features. In connection with this, the conditions of its concentration and distribution in ore bodies and formations, as well as the possibility of reliably determining it by "traditional" methods, differ significantly from those of fine gold (less than 0.25 microns) and even more so from those of larger grains. The mineral composition of rocks (metasomatites) and gold ores, and the mineralization associated with them, were studied in detail in the Kalba ore field in Kazakhstan. Mineralized zones were identified, and samples were taken from them for analytical studies. The research revealed paragenetic relationships of newly formed mineral formations at the nanoscale, which makes it possible to clarify the conditions for the formation of deposits with a particular type of mineralization. This will provide significant assistance in developing a scheme for study. Typomorphic features of gold were revealed, and mechanisms of formation and aggregation of gold nanoparticles were proposed. The presence of a large number of particles isolated at the laboratory stage from concentrates of gravitational enrichment can serve as an indicator of the presence of even smaller particles in the object. Even the most advanced devices based on gravitational methods of gold concentration provide extraction of metal at a level of around 50%, while pulverized metal is extracted much worse, and gold of less than 1 micron in size is extracted at only a few percent. Therefore, when particles of gold smaller than 10 microns are detected, their actual numbers may be significantly higher than expected. In particular, at the studied sites, enrichment of slurry and samples with volumes up to 1 m³ was carried out using a screw lock or separator to produce a final concentrate weighing up to several kilograms. Free gold particles were extracted from the concentrates in the laboratory using a number of processes (magnetic and electromagnetic separation, washing with bromoform in a cup to obtain an ultraconcentrate, etc.) and examined under electron microscopes to investigate the nature of their surface and chemical composition. The main result of the study was the detection of gold nanoparticles located on the surface of loose metal grains. The most characteristic forms of gold secretions are individual nanoparticles and aggregates of different configurations. Sometimes, aggregates form solid dense films, deposits, and crusts, all of which are confined to the negative forms of the nano- and microrelief on the surfaces of the gold grains. The results will provide significant knowledge about the prevalence and conditions for the distribution of fine and nanoscale gold in Kazakhstan deposits, as well as the development of methods for studying it, which will minimize losses of this type of gold during extraction. Acknowledgments: This publication has been produced within the framework of the Grant "Development of methodology for studying fine and nanoscale gold in ores of primary deposits, placers and products of their processing" (АР23485052, №235/GF24-26). Keywords: electron microscopy, micromineralogy, placers, thin and nanoscale gold
Procedia PDF Downloads 18
169 Deep Learning for SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both the transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and tested reconstruction techniques are undeniable, the existing methods are, however, mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods.
The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method. Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network
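The abstract above does not specify the network architecture or the exact terms of the cost function. As a rough, hypothetical illustration of how a reconstruction CNN can be trained with a loss that combines several polarimetric terms, a minimal PyTorch-style sketch follows; the channel layout, layer sizes and loss weights are assumptions for illustration only, not the authors' configuration.

```python
# Illustrative sketch only: a small CNN maps dual/hybrid-pol input channels to
# pseudo fully polarimetric channels; the loss combines a per-channel term with
# a term on the total power (span), standing in for the "combination of
# different terms" mentioned in the abstract. All sizes and weights are assumed.
import torch
import torch.nn as nn

class PolReconstructionCNN(nn.Module):
    def __init__(self, in_channels=2, out_channels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def composite_loss(pred, target, w_channel=1.0, w_span=0.1):
    # Per-channel reconstruction error plus an error on the summed intensity.
    channel_term = torch.mean(torch.abs(pred - target))
    span_term = torch.mean(torch.abs(pred.sum(dim=1) - target.sum(dim=1)))
    return w_channel * channel_term + w_span * span_term

# Usage sketch: one optimization step on a random batch of image patches.
model = PolReconstructionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.rand(8, 2, 64, 64)   # hybrid-pol intensity channels (assumed layout)
y = torch.rand(8, 4, 64, 64)   # full-pol intensity channels (assumed layout)
loss = composite_loss(model(x), y)
loss.backward()
optimizer.step()
```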
Procedia PDF Downloads 66
168 Kinematic Gait Analysis Is a Non-Invasive, More Objective and Earlier Measurement of Impairment in the Mdx Mouse Model of Duchenne Muscular Dystrophy
Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K. Lehtimäki, A. Nurmi, D. Wells
Abstract:
Duchenne muscular dystrophy (DMD) is caused by an X-linked mutation in the dystrophin gene; lack of dystrophin causes a progressive muscle necrosis which leads to a progressive decrease in mobility in those suffering from the disease. The MDX mouse, a mutant mouse model which displays a frank dystrophinopathy, is currently widely employed in preclinical efficacy models for treatments and therapies aimed at DMD. In general, the end-points examined within this model have been based on invasive histopathology of muscles and serum biochemical measures like measurement of serum creatine kinase (sCK). It is established that a “critical period” between 4 and 6 weeks exists in the MDX mouse when there is extensive muscle damage that is largely subclinical but evident with sCK measurements and histopathological staining. However, a full characterization of the MDX model remains largely incomplete, especially with respect to the ability to aggravate the muscle damage beyond the critical period. The purpose of this study was to attempt to aggravate the muscle damage in the MDX mouse and to create a wider, more readily translatable and discernible therapeutic window for the testing of potential therapies for DMD. The study consisted of subjecting 15 male mutant MDX mice and 15 male wild-type mice to an intense chronic exercise regime that consisted of bi-weekly (two times per week) treadmill sessions over a 12-month period. Each session was 30 minutes in duration and the treadmill speed was gradually built up to 14 m/min for the entire session. Baseline plasma creatine kinase (pCK), treadmill training performance and locomotor activity were measured after the “critical period” at around 10 weeks of age and again at 14 weeks of age, 6 months, 9 months and 12 months of age. In addition, kinematic gait analysis was employed using a novel analysis algorithm in order to compare changes in gait and fine motor skills in diseased exercised MDX mice compared to exercised wild-type mice and non-exercised MDX mice. In addition, a morphological and metabolic profile (including lipid profile) of the muscles most severely affected, the gastrocnemius and the tibialis anterior, was also measured at the same time intervals. Results indicate that aggravating or exacerbating the underlying muscle damage in the MDX mouse by exercise brings a more pronounced and severe phenotype to light, and this can be picked up earlier by kinematic gait analysis. A reduction in mobility as measured by open field is not apparent at younger ages or during the critical period, but changes in gait are apparent in the mutant MDX mice. These gait changes coincide with pronounced morphological and metabolic changes, detected by non-invasive anatomical MRI and proton spectroscopy (1H-MRS), that we have reported elsewhere. Evidence of a progressive asymmetric pathology in imaging parameters as well as in the kinematic gait analysis was found. Taken together, the data show that a chronic exercise regime exacerbates the muscle damage beyond the critical period, and that the ability to measure this damage through non-invasive means is an important factor to consider when performing preclinical efficacy studies in the MDX mouse. Keywords: gait, muscular dystrophy, kinematic analysis, neuromuscular disease
Procedia PDF Downloads 275
167 Absenteeism in Polytechnical University Studies: Quantification and Identification of the Causes at Universitat Politècnica de Catalunya
Authors: E. Mas de les Valls, M. Castells-Sanabra, R. Capdevila, N. Pla, Rosa M. Fernandez-Canti, V. de Medina, A. Mujal, C. Barahona, E. Velo, M. Vigo, M. A. Santos, T. Soto
Abstract:
Absenteeism in universities, including polytechnical universities, is influenced by a variety of factors. Some factors overlap with those causing absenteeism in schools, while others are specific to the university and work-related environments. Indeed, these factors may stem from various sources, including students, educators, the institution itself, or even the alignment of degree curricula with professional requirements. In Spain, there has been an increase in absenteeism in polytechnical university studies, especially after the Covid crisis, posing a significant challenge for institutions to address. This study focuses on Universitat Politècnica de Catalunya• BarcelonaTech (UPC) and aims to quantify the current level of absenteeism and identify its main causes. The study is part of the teaching innovation project ASAP-UPC, which aims to minimize absenteeism through the redesign of teaching methodologies. By understanding the factors contributing to absenteeism, the study seeks to inform the subsequent phases of the ASAP-UPC project, which involve implementing methodologies to minimize absenteeism and evaluating their effectiveness. The study utilizes surveys conducted among students and polytechnical companies. Students' perspectives are gathered through both online surveys and in-person interviews. The surveys inquire about students' interest in attending classes, skill development throughout their UPC experience, and their perception of the skills required for a career in a polytechnical field. Additionally, polytechnical companies are surveyed regarding the skills they seek in prospective employees. The collected data is then analyzed to identify patterns and trends. This analysis involves organizing and categorizing the data, identifying common themes, and drawing conclusions based on the findings. This mixed-method approach has revealed that higher levels of absenteeism are observed in large student groups at both the Bachelor's and Master's degree levels. However, the main causes of absenteeism differ between these two levels. At the Bachelor's level, many students express dissatisfaction with in-person classes, perceiving them as overly theoretical and lacking a balance between theory, experimental practice, and problem-solving components. They also find a lack of relevance to professional needs. Consequently, they resort to using online available materials developed during the Covid crisis and attending private academies for exam preparation instead. On the other hand, at the Master's level, absenteeism primarily arises from schedule incompatibility between university and professional work. There is a discrepancy between the skills highly valued by companies and the skills emphasized during the studies, aligning partially with students' perceptions. These findings are of theoretical importance as they shed light on areas that can be improved to offer a more beneficial educational experience to students at UPC. The study also has potential applicability to other polytechnic universities, allowing them to adapt the surveys and apply the findings to their specific contexts. By addressing the identified causes of absenteeism, universities can enhance the educational experience and better prepare students for successful careers in polytechnical fields.Keywords: absenteeism, polytechnical studies, professional skills, university challenges
Procedia PDF Downloads 67
166 Mineralized Nanoparticles as a Contrast Agent for Ultrasound and Magnetic Resonance Imaging
Authors: Jae Won Lee, Kyung Hyun Min, Hong Jae Lee, Sang Cheon Lee
Abstract:
To date, imaging techniques have attracted much attention in medicine because the detection of diseases at an early stage provides greater opportunities for successful treatment. Consequently, over the past few decades, diverse imaging modalities including magnetic resonance (MR), positron emission tomography, computed tomography, and ultrasound (US) have been developed and applied widely in the field of clinical diagnosis. However, each of the above-mentioned imaging modalities possesses unique strengths and intrinsic weaknesses, which limit their abilities to provide accurate information. Therefore, multimodal imaging systems may be a solution that can provide improved diagnostic performance. Among the current medical imaging modalities, US is a widely available real-time imaging modality. It has many advantages including safety, low cost and easy access for patients. However, its low spatial resolution precludes accurate discrimination of diseased regions such as cancer sites. In contrast, MR has no tissue-penetrating limit and can provide images possessing exquisite soft tissue contrast and high spatial resolution. However, it cannot offer real-time images and needs a comparatively long imaging time. The characteristics of these imaging modalities may be considered complementary, and the modalities have been frequently combined for the clinical diagnostic process. Biominerals such as calcium carbonate (CaCO3) and calcium phosphate (CaP) exhibit pH-dependent dissolution behavior. They demonstrate pH-controlled drug release due to the dissolution of minerals in acidic pH conditions. In particular, the application of this mineralization technique to a US contrast agent has been reported recently. The CaCO3 mineral reacts with acids and decomposes to generate carbon dioxide (CO2) gas in an acidic environment. These gas-generating mineralized nanoparticles generated CO2 bubbles in the acidic environment of the tumor, thereby allowing for strong echogenic US imaging of tumor tissues. On the basis of this previous work, it was hypothesized that the loading of MR contrast agents into the CaCO3 mineralized nanoparticles may be a novel strategy in designing a contrast agent for dual imaging. Herein, CaCO3 mineralized nanoparticles that were capable of generating CO2 bubbles to trigger the release of entrapped MR contrast agents in response to tumoral acidic pH were developed for the purposes of US and MR dual-modality imaging of tumors. Gd2O3 nanoparticles were selected as an MR contrast agent. A key strategy employed in this study was to prepare Gd2O3 nanoparticle-loaded mineralized nanoparticles (Gd2O3-MNPs) using block copolymer-templated CaCO3 mineralization in the presence of calcium cations (Ca2+), carbonate anions (CO32-) and positively charged Gd2O3 nanoparticles. The CaCO3 core was considered suitable because it may effectively shield Gd2O3 nanoparticles from water molecules in the blood (pH 7.4) before decomposing to generate CO2 gas, triggering the release of Gd2O3 nanoparticles in tumor tissues (pH 6.4~7.4). The kinetics of CaCO3 dissolution and CO2 generation from the Gd2O3-MNPs were examined as a function of pH and pH-dependent in vitro magnetic relaxation; additionally, the echogenic properties were estimated to demonstrate the potential of the particles for tumor-specific US and MR imaging. Keywords: calcium carbonate, mineralization, ultrasound imaging, magnetic resonance imaging
Procedia PDF Downloads 235
165 The Impact of the Media in the Implementation of Qatar’s Foreign Policy on the Public Opinion of the People of the Middle East (2011-2023)
Authors: Negar Vkilbashi, Hassan Kabiri
Abstract:
Modern diplomacy, in its general form, refers to the people and not the governments, and diplomacy tactics are more addressed to the people than to the governments. Media diplomacy and cyber diplomacy are sub-branches of public diplomacy and, in fact, reflect the role of the media in influencing public opinion and directing foreign policy. Mass media, including written, radio and television, theater, satellite, internet, and news agencies, transmit information and demands. What the Qatari government tried to implement in the countries of the region during the Arab Spring and after was carried out through its most important media outlet, Al Jazeera. The embargo on Qatar began in 2017, when Saudi Arabia, the United Arab Emirates, Bahrain, and Egypt imposed a land, sea, and air blockade against the country. The media tool constitutes the cornerstone of soft power in the field of foreign policy, which Qatari leaders have consistently resorted to over the past two decades. Undoubtedly, the role it played in covering the events of the Arab Spring has created geopolitical tensions. The United Arab Emirates and other neighboring countries sometimes criticize Al Jazeera for providing a platform for the Muslim Brotherhood, Hamas, and other Islamists to promote their ideology. In 2011, at the same time as the Arab Spring, Al Jazeera reached the peak of its popularity. Al Jazeera's live coverage of protests in Tunisia, Egypt, Yemen, Libya, and Syria helped create a unified narrative of the Arab Spring, with audiences tuning in every Friday to watch simultaneous protests across the Middle East. Al Jazeera operates on three fronts: First, it is a powerful base in the hands of the government, allowing it to direct and influence Arab public opinion. Therefore, this network has been able to benefit from the unlimited financial support of the Qatari government to promote its desired policies and culture. Second, it has provided an attractive platform for politicians and scientific and intellectual elites, thus attracting their support for and defense of the government and its rulers. Third, during the last years of Prince Hamad's reign, the Al Jazeera network served as a deterrent weapon to counter hostile media and political campaigns. The importance of the research is that this network covers a wide range of people in the Middle East and, therefore, has a high influence on the decision-making of countries. On the other hand, Al Jazeera is influential as a tool of public diplomacy and soft power in Qatar's foreign policy, and by studying it, the results of its effectiveness in the past years can be examined. Using a qualitative method, this research analyzes the impact of the media on the implementation of Qatar's foreign policy on the public opinion of the people of the Middle East. Data collection has been done by the secondary method, that is, reading related books, magazine articles, newspaper reports and articles, and analytical reports of think tanks. The most important finding of the research is that Al Jazeera plays an important role in Qatar's foreign policy and public diplomacy. In 2011, 2017 and 2023, it played an important role in Qatar's foreign policy during various crises. Also, the people of Arab countries use Al Jazeera as their first reference. Keywords: Al Jazeera, Qatar, media, diplomacy
Procedia PDF Downloads 78
164 Livelihood Security and Mitigating Climate Changes in the Barind Tract of Bangladesh through Agroforestry Systems
Authors: Md Shafiqul Bari, Md Shafiqul Islam Sikdar
Abstract:
This paper summarizes the current knowledge on agroforestry practices in the Barind tract of Bangladesh. The parts of the greater Rajshahi, Dinajpur, Rangpur and Bogra districts of Bangladesh are geographically identified as the Barind tract. The hard red soil of these areas is very significant in comparison to that of the other parts of the country. A typical dry climate with comparatively high temperature prevails in the Barind area. Scanty rainfall and excessive extraction of groundwater have created an alarming situation among the Barind people and others regarding irrigation of the rice fields. In addition, the situation may cause an adverse impact on the people whose livelihood largely depends on agriculture. The groundwater table has declined by at least 10 to 15 meters in some areas of the Barind tract during the last 20 years. Due to the absence of forestland in the Barind tract, the soil organic carbon content can decrease more rapidly because of the higher rate of decomposition. The Barind soils are largely carbon depleted but can be brought back to carbon-carrying capacity by bringing them under suitable agroforestry systems. Agroforestry has tremendous potential for carbon sequestration not only in above-ground carbon biomass but also in root carbon biomass at deeper soil depths. Agroforestry systems typically conserve soil organic carbon and maintain a great natural nutrient pool. Cultivation of trees with arable crops under agroforestry systems helps in improving soil organic carbon content and sequestering carbon, particularly in the highly degraded Barind lands. Agroforestry systems are a way of securing the growth of cash crops that may constitute an alternative source of income in moments of crisis. Besides being a source of fuel wood, a greater presence of trees in the cropping system contributes to decreasing temperatures and to increasing rainfall, thus counteracting the negative environmental impact of climate change. In order to fulfill the objectives of this study, two experiments were conducted. The first experiment was a survey on the impact of the existing agroforestry systems on livelihood security in the Barind tract of Bangladesh, and the second examined the role of an agroforestry system in the improvement of soil properties in a multilayered coconut orchard. Agroforestry systems have generated a lot of employment opportunities in the Barind area. More crops mean the involvement of more people in various activities like dairying, sericulture, apiculture and other associated agro-based interventions. Successful adoption of agroforestry practices in the Barind area has shown that the agroforestry practitioners of this area were economically well positioned and had added social status too. However, from the findings of the present study, it may be concluded that the majority of rural farmers of the Barind tract of Bangladesh had very good knowledge of and medium extension contact related to the agroforestry production system. It was also observed that 85 per cent of farmers followed the agroforestry production system and received benefits to a higher extent. Again, from the research study on the orchard-based multistoried agroforestry cropping system, it was evident that agroforestry cropping systems had an important effect on the improvement of soil chemical properties. As a result, agroforestry systems may be helpful in attaining development objectives and preserving the biosphere. Keywords: agroforestry systems, Barind tract, carbon sequestration, climate changes
Procedia PDF Downloads 199
163 Water Monitoring Sentinel Cloud Platform: Water Monitoring Platform Based on Satellite Imagery and Modeling Data
Authors: Alberto Azevedo, Ricardo Martins, André B. Fortunato, Anabela Oliveira
Abstract:
Water is under severe threat today because of the rising population, increased agricultural and industrial needs, and the intensifying effects of climate change. Due to sea-level rise, erosion, and demographic pressure, the coastal regions are of significant concern to the scientific community. The Water Monitoring Sentinel Cloud platform (WORSICA) service is focused on providing new tools for monitoring water in coastal and inland areas, taking advantage of remote sensing, in situ and tidal modeling data. WORSICA is a service that can be used to determine the coastline, coastal inundation areas, and the limits of inland water bodies using remote sensing (satellite and Unmanned Aerial Vehicles - UAVs) and in situ data (from field surveys). It applies to various purposes, from determining flooded areas (from rainfall, storms, hurricanes, or tsunamis) to detecting large water leaks in major water distribution networks. This service was built on components developed in national and European projects, integrated to provide a one-stop-shop service for remote sensing information, integrating data from the Copernicus satellite and drone/unmanned aerial vehicles, validated by existing online in-situ data. Since WORSICA is operational using the European Open Science Cloud (EOSC) computational infrastructures, the service can be accessed via a web browser and is freely available to all European public research groups without additional costs. In addition, the private sector will be able to use the service, but some usage costs may be applied, depending on the type of computational resources needed by each application/user. Although the service has three main sub-services i) coastline detection; ii) inland water detection; iii) water leak detection in irrigation networks, in the present study, an application of the service to Óbidos lagoon in Portugal is shown, where the user can monitor the evolution of the lagoon inlet and estimate the topography of the intertidal areas without any additional costs. The service has several distinct methodologies implemented based on the computations of the water indexes (e.g., NDWI, MNDWI, AWEI, and AWEIsh) retrieved from the satellite image processing. In conjunction with the tidal data obtained from the FES model, the system can estimate a coastline with the corresponding level or even topography of the inter-tidal areas based on the Flood2Topo methodology. The outcomes of the WORSICA service can be helpful for several intervention areas such as i) emergency by providing fast access to inundated areas to support emergency rescue operations; ii) support of management decisions on hydraulic infrastructures operation to minimize damage downstream; iii) climate change mitigation by minimizing water losses and reduce water mains operation costs; iv) early detection of water leakages in difficult-to-access water irrigation networks, promoting their fast repair.Keywords: remote sensing, coastline detection, water detection, satellite data, sentinel, Copernicus, EOSC
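As a minimal sketch of the water-index step mentioned above (NDWI and MNDWI computation followed by thresholding), assuming Sentinel-2-like green, NIR and SWIR reflectance bands; the band handling, the zero threshold and the consensus rule are illustrative assumptions, not WORSICA's actual processing chain.

```python
# Illustrative sketch: compute NDWI and MNDWI from reflectance bands and
# threshold them to obtain a simple water mask.
import numpy as np

def ndwi(green, nir, eps=1e-9):
    # McFeeters NDWI = (Green - NIR) / (Green + NIR); water pixels tend to be > 0.
    return (green - nir) / (green + nir + eps)

def mndwi(green, swir, eps=1e-9):
    # Modified NDWI = (Green - SWIR) / (Green + SWIR), less sensitive to built-up areas.
    return (green - swir) / (green + swir + eps)

def water_mask(green, nir, swir, threshold=0.0):
    # Simple consensus mask: flag pixels where both indexes indicate water.
    return (ndwi(green, nir) > threshold) & (mndwi(green, swir) > threshold)

# Usage sketch with random reflectance arrays standing in for satellite bands.
green = np.random.rand(256, 256)
nir = np.random.rand(256, 256)
swir = np.random.rand(256, 256)
print("water fraction:", water_mask(green, nir, swir).mean())
```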
Procedia PDF Downloads 125
162 Deep Learning Based Polarimetric SAR Images Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring . SAR Systems are often considered to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are, however, mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods. 
The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry
Procedia PDF Downloads 89
161 A Bioinspired Anti-Fouling Coating for Implantable Medical Devices
Authors: Natalie Riley, Anita Quigley, Robert M. I. Kapsa, George W. Greene
Abstract:
As the fields of medicine and bionics advance rapidly, their future success depends on the ability to effectively interface between the artificial and the biological worlds. The biggest obstacle when it comes to implantable electronic medical devices is maintaining a ‘clean’, low-noise electrical connection that allows for efficient sharing of electrical information between the artificial and biological systems. Implant fouling occurs with the adhesion and accumulation of proteins and various cell types as a result of the immune response against the foreign object, essentially forming an electrical insulation barrier that often leads to implant failure over time. Lubricin (LUB), a unique glycoprotein with impressive anti-adhesive properties, functions as a major boundary lubricant in articular joints and self-assembles onto virtually any substrate to form a highly ordered, ‘telechelic’ polymer brush. LUB does not passivate electroactive surfaces, which, along with its innate biocompatibility, makes it ideal as a coating for implantable bionic electrodes. It is the aim of the study to investigate LUB’s anti-fouling properties and its potential as a safe, bioinspired material for coating applications to enhance the performance and longevity of implantable medical devices as well as to reduce the frequency of implant replacement surgeries. Native, bovine-derived LUB (N-LUB) and recombinant LUB (R-LUB) were applied to gold-coated Mylar surfaces. Fibroblast, chondrocyte and neural cell types were cultured and grown on the coatings under both passive and electrically stimulated conditions to test the stability and anti-adhesive property of the LUB coating in the presence of an electric field. Lactate dehydrogenase (LDH) assays were conducted as a directly proportional cell population count on each surface along with immunofluorescent microscopy to visualize cells. One-way analysis of variance (ANOVA) with post-hoc Tukey’s test was used to test for statistical significance. Under both passive and electrically stimulated conditions, LUB significantly reduced cell attachment compared to bare gold. Comparing the two coating types, R-LUB reduced cell attachment significantly compared to its native counterpart. Immunofluorescent micrographs visually confirmed LUB’s anti-adhesive property, with R-LUB consistently demonstrating significantly fewer attached cells for both fibroblasts and chondrocytes. Preliminary results investigating neural cells have so far demonstrated that R-LUB has little effect on reducing neural cell attachment; the study is ongoing. Recombinant LUB coatings demonstrated impressive anti-adhesive properties, reducing cell attachment in fibroblasts and chondrocytes. These findings and the availability of recombinant LUB bring into question the results of previous experiments conducted using native-derived LUB, whose potential may not have been adequately represented or realized due to unknown factors and impurities that warrant further study. R-LUB is stable and maintains its anti-fouling property under electrical stimulation, making it suitable for electroactive surfaces. Keywords: anti-fouling, bioinspired, cell attachment, lubricin
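As a minimal sketch of the statistical comparison described above (one-way ANOVA across coating types followed by Tukey's post-hoc test on LDH-derived cell counts); the group sizes and values below are made-up placeholders, not study data.

```python
# Illustrative sketch: compare cell attachment across three surfaces with
# one-way ANOVA, then run Tukey's HSD for pairwise comparisons.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
bare_gold = rng.normal(100, 10, 6)   # hypothetical attached-cell counts per replicate
n_lub = rng.normal(70, 10, 6)
r_lub = rng.normal(40, 10, 6)

f_stat, p_value = f_oneway(bare_gold, n_lub, r_lub)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([bare_gold, n_lub, r_lub])
groups = ["bare_gold"] * 6 + ["N-LUB"] * 6 + ["R-LUB"] * 6
print(pairwise_tukeyhsd(values, groups))
```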
Procedia PDF Downloads 123
160 A Textile-Based Scaffold for Skin Replacements
Authors: Tim Bolle, Franziska Kreimendahl, Thomas Gries, Stefan Jockenhoevel
Abstract:
The therapeutic treatment of extensive, deep wounds is limited. Autologous split-skin grafts are used as the so-called ‘gold standard’. The most common deficits are the defects at the donor site, the risk of scarring, as well as the limited availability and quality of the autologous grafts. The aim of this project is a tissue-engineered dermal-epidermal skin replacement to overcome the limitations of the gold standard. A key requirement for the development of such a three-dimensional implant is the formation of a functional capillary-like network inside the implant to ensure a sufficient nutrient and gas supply. Tailored three-dimensional warp knitted spacer fabrics are used to reinforce the mechanically weak fibrin gel-based scaffold and further to create a directed in vitro pre-vascularization along the parallel-oriented pile yarns within a co-culture. In this study, various three-dimensional warp knitted spacer fabrics were developed in a factorial design to analyze the influence of the machine parameters such as the stitch density and the pattern of the fabric on the scaffold performance and further to determine suitable parameters for successful fibrin gel incorporation and physiological performance of the scaffold. The fabrics were manufactured on a Karl Mayer double-bar raschel machine DR 16 EEC/EAC. A fine machine gauge of E30 was used to ensure a high pile yarn density for sufficient nutrient, gas and waste exchange. In order to ensure a high mechanical stability of the graft, the fabrics were made of biocompatible PVDF yarns. Key parameters such as the pore size, porosity and stress/strain behavior were investigated under standardized, controlled climate conditions. The influence of the input parameters on the mechanical and morphological properties as well as the ability of fibrin gel incorporation into the spacer fabric was analyzed. Subsequently, the pile yarns of the spacer fabrics were colonized with human umbilical vein endothelial cells (HUVECs) to analyze the ability of the fabric to further function as a guiding structure for a directed vascularization. The cells were stained with DAPI and investigated using fluorescence microscopy. The analysis revealed that the stitch density and the binding pattern have a strong influence on both the mechanical and morphological properties of the fabric. As expected, the incorporation of the fibrin gel was significantly improved with higher pore sizes and porosities, whereas the mechanical strength decreased. Furthermore, the colonization trials revealed a high cell distribution and density on the pile yarns of the spacer fabrics. For a tailored reinforcing structure, the minimum porosity and pore size that still ensure complete incorporation of the reinforcing structure into the fibrin gel matrix need to be evaluated. That will enable a mechanically stable dermal graft with a dense vascular network for a sufficient nutrient and oxygen supply of the cells. The results are promising for subsequent research in the field of reinforcing mechanically weak biological scaffolds and developing functional three-dimensional scaffolds with an oriented pre-vascularization. Keywords: fibrin-gel, skin replacement, spacer fabric, pre-vascularization
Procedia PDF Downloads 256
159 Pre-conditioning and Hot Water Sanitization of Reverse Osmosis Membrane for Medical Water Production
Authors: Supriyo Das, Elbir Jove, Ajay Singh, Sophie Corbet, Noel Carr, Martin Deetz
Abstract:
Water is a critical commodity in the healthcare and medical field. The utility of medical-grade water spans from washing surgical equipment and drug preparation to being the key element of life-saving therapies such as hydrotherapy and hemodialysis for patients. Properly treated medical water reduces the bioburden load and mitigates the risk of infection, ensuring patient safety. However, any compromised condition during the production of medical-grade water can create a favorable environment for microbial growth, putting patient safety at high risk. Therefore, proper upstream treatment of the medical water is essential before its application in the healthcare, pharma and medical space. Reverse Osmosis (RO) is one of the most preferred treatments within healthcare industries and is recommended by all International Pharmacopeias to achieve the quality level demanded by global regulatory bodies. The RO process can remove up to 99.5% of constituents from feed water sources, eliminating bacteria, proteins and particles of 100 Daltons and above. The combination of RO with other downstream water treatment technologies such as Electrodeionization and Ultrafiltration meets the quality requirements of various pharmacopeia monographs to produce highly purified water or water for injection for medical use. In the reverse osmosis process, the water from a liquid with a high concentration of dissolved solids is forced to flow through a specially engineered semi-permeable membrane to the low concentration side, resulting in high-quality grade water. However, these specially engineered RO membranes need to be sanitized either chemically or at high temperatures at regular intervals to keep the bioburden at the minimum required level. In this paper, we talk about DuPont's FilmTec Heat Sanitizable Reverse Osmosis (HSRO) membrane for the production of medical-grade water. An HSRO element must be pre-conditioned prior to initial use by exposure to hot water (80°C-85°C) for its stable performance and to meet the manufacturer’s specifications. Without pre-conditioning, the membrane will show variations in feed pressure operations and salt rejection. The paper will discuss the critical variables of the pre-conditioning step that can affect the overall performance of the HSRO membrane and present data supporting the need for pre-conditioning of HSRO elements. Our preliminary data suggest that there can be up to a 35% reduction in flow due to the initial heat treatment, which is also accompanied by an increase in salt rejection. The paper will go into detail about the fundamental understanding of the performance change of HSRO after the pre-conditioning step and its effect on the quality of medical water produced. The paper will also discuss another critical point, “regular hot water sanitization” of these HSRO membranes. Regular hot water sanitization (at 80°C-85°C) is necessary to keep the membrane bioburden free; however, it can negatively impact the performance of the membrane over time. We will demonstrate several data points on hot water sanitization using FilmTec HSRO elements and challenge their robustness for producing quality medical water. The last part of this paper will discuss the construction details of the FilmTec HSRO membrane and the features that make it suitable for pre-conditioning and sanitization at high temperatures. Keywords: heat sanitizable reverse osmosis, HSRO, medical water, hemodialysis water, water for injection, pre-conditioning, heat sanitization
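As a back-of-the-envelope sketch of two quantities discussed above, observed salt rejection and the relative change in permeate flow after pre-conditioning; the formulas are the standard textbook definitions and the input values are placeholders, not FilmTec specifications or measured data.

```python
# Illustrative sketch: observed salt rejection and percent flow change.
def salt_rejection(feed_conductivity, permeate_conductivity):
    # Observed rejection R = (1 - Cp/Cf) * 100 %.
    return (1.0 - permeate_conductivity / feed_conductivity) * 100.0

def flow_change(flow_before, flow_after):
    # Relative permeate-flow change after heat treatment, in percent.
    return (flow_after - flow_before) / flow_before * 100.0

# Usage sketch: a hypothetical element before and after pre-conditioning.
print(f"rejection before: {salt_rejection(500.0, 10.0):.1f} %")   # 98.0 %
print(f"rejection after:  {salt_rejection(500.0, 5.0):.1f} %")    # 99.0 %
print(f"flow change:      {flow_change(1.0, 0.65):.0f} %")        # about -35 %
```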
Procedia PDF Downloads 209
158 The Istrian Istrovenetian-Croatian Bilingual Corpus
Authors: Nada Poropat Jeletic, Gordana Hrzica
Abstract:
Bilingual conversational corpora represent a meaningful and highly comprehensive data source for investigating genuine contact phenomena in non-monitored bilingual speech production. They can be particularly useful for bilingual research since some features of bilingual interaction can hardly be accessed with more traditional methodologies (e.g., elicitation tasks). The method of language sampling provides the resources for describing language interaction in a bilingual community and/or in bilingual situations (e.g., code-switching, the amount of each language used, the number of languages used, etc.). To capture these phenomena in genuine communication situations, such sampling should be as close as possible to spontaneous communication. Bilingual spoken corpus design is methodologically demanding. Therefore, this paper aims at describing the methodological challenges that apply to the design of the conversational Istrian Istrovenetian-Croatian Bilingual Corpus. Croatian is the first official language of the Croatian-Italian officially bilingual Istria County, while Istrovenetian is a diatopic subvariety of Venetian, a long-lasting lingua franca in the Istrian peninsula, the mother tongue of the members of the Italian National Community in Istria and the primary code of informal everyday communication among the Istrian Italophone population. Within the CLARIN infrastructure, TalkBank is being used, as it provides relevant procedures for designing and analyzing bilingual corpora. Furthermore, its public availability allows for easy replication of studies and cumulative progress as a research community builds up around the corpus, while the tools developed within the field of corpus linguistics enable easy retrieval and analysis of information. The method of language sampling employed is kept at the level of spontaneous communication, in order to maximise the naturalness of the collected conversational data. All speakers have provided written informed consent in which they agree to be recorded at a random point within the period of one month after signing the consent. Participants are administered a background questionnaire providing information about their socioeconomic status and the language exposure and usage in the participants' social networks. Recording data are being transcribed, phonologically adapted within a standard-sized orthographic form, coded and segmented (speech streams are being segmented into communication units based on syntactic criteria) and are being marked following the CHAT transcription system and its associated CLAN suite of programmes within the TalkBank toolkit. The corpus consists of transcribed sound recordings of 36 bilingual speakers, while the target is to publish the whole corpus by the end of 2020, by sampling spontaneous conversations among approximately 100 speakers from all the bilingual areas of Istria for ensuring representativeness (the participants are being recruited across three generations of native bilingual speakers in all the bilingual areas of the peninsula). Conversational corpora are still rare in TalkBank, so the Corpus will contribute to BilingBank as a highly relevant and scientifically reliable resource for an internationally established and active research community. 
Research on communities with societal bilingualism will contribute to the growing body of research on bilingualism and multilingualism, especially regarding topics such as language dominance, language attrition and loss, interference, and code-switching. Keywords: conversational corpora, bilingual corpora, code-switching, language sampling, corpus design methodology
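As a simplified, illustrative sketch of the CHAT file structure mentioned above (header tiers followed by speaker tiers); the participant codes, corpus name and utterances are invented placeholders, and the exact tier syntax and code-switching markers should be checked against the CHAT manual rather than taken from this sketch.

```
@Begin
@Languages:	hrv, ita
@Participants:	SPA Speaker_A Adult, SPB Speaker_B Adult
*SPA:	example utterance of speaker A .
*SPB:	example reply of speaker B .
@End
```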
Procedia PDF Downloads 145
157 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks
Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi
Abstract:
Brain-computer interfaces are a growing research field producing many implementations that find use in different fields and are used for research and practical purposes. Despite the popularity of the implementations using non-invasive neuroimaging methods, radical improvement of the state channel bandwidth and, thus, decoding accuracy is only possible by using invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, effective analysis of which requires the use of machine learning methods that are able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that allow learning representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recording of kinematic movement characteristics and corresponding electrical brain activity, a series of experiments were carried out, during which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from the electrode strips that were implanted over the contralateral sensorimotor cortex. Then, multichannel ECoG signals were used to track finger movement trajectory characterized by accelerometer signal. This process was carried out both causally and non-causally, using different position of the ECoG data segment with respect to the accelerometer data stream. The recorded data was split into training and testing sets, containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained, using 1-second segments of ECoG data from the training dataset as input. To assess the decoding accuracy, correlation coefficient r between the output of the model and the accelerometer readings was computed. After optimization of hyperparameters and training, the deep learning model allowed reasonably accurate causal decoding of finger movement with correlation coefficient r = 0.8. In contrast, the classical Wiener-filter like approach was able to achieve only 0.56 in the causal decoding mode. In the noncausal case, the traditional approach reached the accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study have demonstrated that a combination of a minimally invasive neuroimaging technique such as ECoG and advanced machine learning approaches allows decoding motion with high accuracy. Such setup provides means for control of devices with a large number of degrees of freedom as well as exploratory studies of the complex neural processes underlying movement execution.Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex
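As a minimal sketch of the evaluation metric described above, the Pearson correlation coefficient r between the decoded trajectory and the accelerometer reading; the two signals below are synthetic placeholders, not experimental data.

```python
# Illustrative sketch: summarize decoding accuracy by the correlation r between
# the model output and the measured accelerometer trace.
import numpy as np

def decoding_accuracy(decoded, accelerometer):
    # Pearson correlation between decoded and measured trajectories.
    return np.corrcoef(decoded, accelerometer)[0, 1]

rng = np.random.default_rng(1)
true_trace = np.sin(np.linspace(0, 20 * np.pi, 5000))          # stand-in accelerometer axis
decoded_trace = true_trace + 0.5 * rng.standard_normal(5000)   # stand-in model output
print(f"r = {decoding_accuracy(decoded_trace, true_trace):.2f}")
```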
Procedia PDF Downloads 176
156 Absorptive Capabilities in the Development of Biopharmaceutical Industry: The Case of Bioprocess Development and Research Unit, National Polytechnic Institute
Authors: Ana L. Sánchez Regla, Igor A. Rivera González, María del Pilar Monserrat Pérez Hernández
Abstract:
The ability of an organization to identify and obtain useful information from external sources, assimilate it, and transform and apply it to generate products or services with added value is called absorptive capacity. Absorptive capabilities help firms seize market opportunities and attain a leading position with respect to their competitors. The Bioprocess Development and Research Unit (UDIBI) is a Research and Development (R&D) laboratory that belongs to the National Polytechnic Institute (IPN), which is a higher education institute in Mexico. The UDIBI was created with the purpose of carrying out R&D activities for the Transferon®, a biopharmaceutical product developed and patented by IPN. The evolution of its competences and its scientific and technological platform led UDIBI to expand its scope by providing technological services (preclinical studies and biocompatibility evaluation) to the national pharmaceutical and biopharmaceutical industries. The relevance of this study is that those industries are classified as being of high scientific and technological intensity, and yet, after a review of the state of the art, there is only one study of absorptive capabilities in the biopharmaceutical industry with a scope similar to this research; in the case of Mexico, there is none. In addition to this, UDIBI belongs to a public university and its operation does not depend on the federal budget, but on the income generated by its external technological services. This fact represents a highly remarkable case in Mexico's public higher education context. This doctoral research (2015-2019) is contextualized within a case study whose main objective is to identify and analyze the absorptive capabilities that characterize the UDIBI and that have allowed it to become one of only two third-party laboratories authorized by the sanitary authority in Mexico to carry out biocomparability studies of biopharmaceutical products. The development of this work in the field is divided into two phases. In the first phase, 15 interviews were conducted with the UDIBI personnel, covering management levels, heads of services, project leaders and laboratory personnel. These interviews were structured around a questionnaire, which was designed to integrate open questions and, to a lesser extent, questions answered on a Likert-type rating scale. From the information obtained in this phase, a scientific article was prepared (currently under review), and presentation proposals were submitted to different academic forums. The second phase will consist of an ethnographic study within the organization, lasting about three months. In addition, interviews are planned with external actors around the UDIBI (suppliers, advisors, IPN officials), including contact with an academic specialized in absorptive capacities who will comment on this thesis. The initial findings show two lines: i) there are institutional, technological and organizational management elements that encourage and/or limit the creation of absorptive capacities in this scientific and technological laboratory, and ii) UDIBI has created a set of knowledge and technology transfer mechanisms that have allowed it to build a large base of prior knowledge. Keywords: absorptive capabilities, biopharmaceutical industry, high research and development intensity industries, knowledge management, transfer of knowledge
Procedia PDF Downloads 225
155 Environmentally Sustainable Transparent Wood: A Fully Green Approach from Bleaching to Impregnation for Energy-Efficient Engineered Wood Components
Authors: Francesca Gullo, Paola Palmero, Massimo Messori
Abstract:
Transparent wood is considered a promising structural material for the development of environmentally friendly, energy-efficient engineered components. To obtain transparent wood from natural wood materials two approaches can be used: i) bottom-up and ii) top-down. Through the second method, the color of natural wood samples is lightened through a chemical bleaching process that acts on chromophore groups of lignin, such as the benzene ring, quinonoid, vinyl, phenolics, and carbonyl groups. These chromophoric units form complex conjugate systems responsible for the brown color of wood. There are two strategies to remove color and increase the whiteness of wood: i) lignin removal and ii) lignin bleaching. In the lignin removal strategy, strong chemicals containing chlorine (chlorine, hypochlorite, and chlorine dioxide) and oxidizers (oxygen, ozone, and peroxide) are used to completely destroy and dissolve the lignin. In lignin bleaching methods, a moderate reductive (hydrosulfite) or oxidative (hydrogen peroxide) is commonly used to alter or remove the groups and chromophore systems of lignin, selectively discoloring the lignin while keeping the macrostructure intact. It is, therefore, essential to manipulate nanostructured wood by precisely controlling the nanopores in the cell walls by monitoring both chemical treatments and process conditions, for instance, the treatment time, the concentration of chemical solutions, the pH value, and the temperature. The elimination of wood light scattering is the second step in the fabrication of transparent wood materials, which can be achieved through two-step approaches: i) the polymer impregnation method and ii) the densification method. For the polymer impregnation method, the wood scaffold is treated with polymers having a corresponding refractive index (e.g., PMMA and epoxy resins) under vacuum to obtain the transparent composite material, which can finally be pressed to align the cellulose fibers and reduce interfacial defects in order to have a finished product with high transmittance (>90%) and excellent light-guiding. However, both the solution-based bleaching and the impregnation processes used to produce transparent wood generally consume large amounts of energy and chemicals, including some toxic or pollutant agents, and are difficult to scale up industrially. Here, we report a method to produce optically transparent wood by modifying the lignin structure with a chemical reaction at room temperature using small amounts of hydrogen peroxide in an alkaline environment. This method preserves the lignin, which results only deconjugated and acts as a binder, providing both a strong wood scaffold and suitable porosity for infiltration of biobased polymers while reducing chemical consumption, the toxicity of the reagents used, polluting waste, petroleum by-products, energy and processing time. The resulting transparent wood demonstrates high transmittance and low thermal conductivity. Through the combination of process efficiency and scalability, the obtained materials are promising candidates for application in the field of construction for modern energy-efficient buildings.Keywords: bleached wood, energy-efficient components, hydrogen peroxide, transparent wood, wood composites
Procedia PDF Downloads 52