Search results for: dynamic algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7034


944 Integration of Agile Philosophy and Scrum Framework to Missile System Design Processes

Authors: Misra Ayse Adsiz, Selim Selvi

Abstract:

In today's world, technology is competing with time. To keep up with the world's companies and adapt quickly to change, processes must be accelerated to match the pace of technological change. Missile system design processes handled with classical methods fall behind in this race, because customer requirements are not clear and demands change repeatedly during the design process. Therefore, a methodology suited to the dynamics of missile system design was investigated, and the processes used to keep pace with the era were examined. Analysis of commonly used design processes shows that none of them is dynamic enough for today's conditions, so a hybrid design process was established. After a detailed review of the existing processes, it was decided to focus on the Scrum framework and agile philosophy. Scrum is a process framework focused on developing software and handling change management with rapid methods, while agile philosophy aims to respond quickly to change. This study aims to integrate the Scrum framework and agile philosophy, the most appropriate approaches for rapid production and adaptation to change, into the missile system design process. With this approach, the design team involved in the system design process stays in communication with the customer and follows an iterative approach to change management. These methods, currently used in the software industry, were integrated with the product design process. A team was created for the system design process, and the Scrum team roles were filled with the customer included: a Scrum team consists of the product owner, the development team, and the Scrum master. Scrum events, which are short, purposeful, and time-limited, were organized to serve coordination rather than long meetings.
Instead of the classical system design methods used in product development studies, a missile design was carried out with this blended method. This design approach makes it easier to anticipate changing customer demands, produce quick solutions to those demands, and combat uncertainties in the product development process. With the feedback of the customer included in the process, the work moves toward marketing, design, and financial optimization.

Keywords: agile, design, missile, scrum

Procedia PDF Downloads 147
943 VeriFy: A Solution to Implement Autonomy Safely and According to the Rules

Authors: Michael Naderhirn, Marco Pavone

Abstract:

Problem statement, motivation, and aim of work: So far, control algorithms have been developed by control engineers so that the controller fits a specification, verified by testing. When it comes to certifying an autonomous car in highly complex scenarios, the challenge is much greater, since such a controller must mathematically guarantee that it implements the rules of the road while also guaranteeing properties such as safety and real-time executability. What if this demanding problem could be solved by combining formal verification and system theory? The aim of this work is to present a workflow that solves the above problem. Summary of the presented results / main outcomes: We show the use of an English-like language to transform the rules of the road into a system specification for an autonomous car. The language-based specifications are used to define system functions and interfaces. Based on these, a formal model is developed that correctly captures the specifications. In parallel, a mathematical model describing the system dynamics is used to calculate the system's reachability set, which in turn determines the system input boundaries. A motion planning algorithm is then applied inside these boundaries to find an optimized trajectory in combination with the formal specification model while satisfying the specifications. The result is a control strategy that can be applied in real time, independent of the scenario, with a mathematical guarantee of satisfying a predefined specification. We demonstrate the applicability of the method in simulated driving scenarios and discuss a potential certification.
Originality, significance, and benefit: To the authors' best knowledge, this is the first automated workflow that combines a specification in an English-like language with a mathematical model, in a formally verified way, to synthesize a controller for potential real-time applications such as autonomous driving.

Keywords: formal system verification, reachability, real time controller, hybrid system

Procedia PDF Downloads 219
942 Visualization of PM₂.₅ Time Series and Correlation Analysis of Cities in Bangladesh

Authors: Asif Zaman, Moinul Islam Zaber, Amin Ahsan Ali

Abstract:

In recent years of industrialization, South Asian countries have been affected by air pollution due to a severe increase in fine particulate matter (PM₂.₅), and Bangladesh is among the most polluted of them. In this paper, statistical analyses were conducted on PM₂.₅ time series from various districts in Bangladesh, mostly around Dhaka city. The research examines the dynamic interactions and relationships between PM₂.₅ concentrations in different zones, toward understanding characteristics of PM₂.₅ such as spatial-temporal patterns and the correlation of contributors behind air pollution, such as human activities, driving factors, and environmental consequences. Clustering the data grouped districts by their AQI frequency and identified representative districts. Seasonality analysis at hourly and monthly frequency found higher concentrations of fine particles at nighttime and in the winter season, respectively. Cross-correlation analysis discovered correlations among cities based on time-lagged series of air particle readings, and a visualization framework was developed for observing interactions in PM₂.₅ concentrations between cities. Significant time-lagged correlations were discovered between the PM₂.₅ time series of different city groups throughout the country. Additionally, seasonal heatmaps show that the pooled series correlations are less significant in warmer months and among cities separated by greater geographic distance, and that the lag magnitude and direction of the best-shifted correlated particulate matter time series among districts change seasonally. The geographic map visualization demonstrates the spatial behaviour of air pollution among districts around Dhaka city and the significant effect of wind direction as a vital driver of the correlated shifted time series.
The visualization framework has multiple uses, from gathering insight into the general and seasonal air quality of Bangladesh to determining the pathways of regional transport of air pollution.
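The time-lagged cross-correlation step described above can be sketched in a few lines. This is an illustrative toy, not the study's code: the city names, hourly readings, and lag window are invented, and the actual analysis spans many districts and seasons.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_lag(series_a, series_b, max_lag):
    """Shift series_b by -max_lag..max_lag steps against series_a and
    return (lag, correlation) with the strongest absolute correlation."""
    best = (0, 0.0)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = series_a[lag:], series_b[:len(series_b) - lag]
        else:
            a, b = series_a[:lag], series_b[-lag:]
        r = pearson(a, b)
        if abs(r) > abs(best[1]):
            best = (lag, r)
    return best

# Synthetic hourly PM2.5 readings: "gazipur" trails "dhaka" by two hours.
dhaka =   [60, 80, 120, 150, 130, 90, 70, 65, 75, 110]
gazipur = [50, 55,  60,  80, 120, 150, 130, 90, 70, 65]
lag, r = best_lag(dhaka, gazipur, 3)
```

A negative lag here means the second city's readings follow the first city's, which is the kind of directional relationship the paper links to wind direction.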

Keywords: air quality, particles, cross correlation, seasonality

Procedia PDF Downloads 89
941 Study on Safety Management of Deep Foundation Pit Construction Site Based on Building Information Modeling

Authors: Xuewei Li, Jingfeng Yuan, Jianliang Zhou

Abstract:

The 21st century has been called the century of human exploitation of underground space. Due to the large scale, tight schedules, low safety reserves, and high uncertainty of deep foundation pit engineering, accidents occur frequently, causing huge economic losses and casualties. With the successful application of information technology in the construction industry, building information modeling (BIM) has become a research hotspot in architectural engineering, and the application of BIM and other information and communication technologies (ICTs) to construction safety management is of great significance for improving the level of safety management. This research summarized the accident mechanisms of deep foundation pit engineering through fault tree analysis, to identify the control factors of safety management and the deficiencies in traditional construction site safety management. Based on the accident causation mechanism and the specific process of deep foundation pit construction, the hazards of the construction site were identified and a hazard list, including early warning information, was obtained. The system framework was then constructed by analyzing the early warning information and early warning function demands of a deep foundation pit safety management system. Finally, a BIM-based safety management system for deep foundation pit construction sites was developed by combining a database with Web-BIM technology, realizing three functions: real-time positioning of construction site personnel, automatic warning on entering a dangerous area, and real-time monitoring of deep foundation pit structural deformation with automatic warning.
This study can improve the current state of safety management on deep foundation pit construction sites. Additionally, active control before accidents occur and dynamic control throughout the construction process can be realized, so as to prevent safety accidents in deep foundation pit engineering construction.
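The "automatic warning on entering a dangerous area" function reduces, at its core, to checking tracked positions against hazard zones. The sketch below illustrates that idea only; the zone shapes, coordinates, and worker identifiers are hypothetical, and a real Web-BIM system would draw zones from the BIM model and positions from site sensors.

```python
def in_zone(pos, zone):
    """True if a 2D position (x, y) lies inside an axis-aligned zone
    given as (xmin, ymin, xmax, ymax)."""
    x, y = pos
    xmin, ymin, xmax, ymax = zone
    return xmin <= x <= xmax and ymin <= y <= ymax

def check_workers(positions, zones):
    """Return (worker_id, zone_name) warnings for every tracked worker
    currently inside a hazard zone."""
    warnings = []
    for worker_id, pos in positions.items():
        for name, zone in zones.items():
            if in_zone(pos, zone):
                warnings.append((worker_id, name))
    return warnings

# Invented site layout: two hazard zones, three tracked workers.
zones = {"pit_edge": (0, 0, 10, 2), "crane_radius": (20, 20, 30, 30)}
positions = {"W01": (5, 1), "W02": (15, 15), "W03": (25, 25)}
alerts = check_workers(positions, zones)
```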

Keywords: Web-BIM, safety management, deep foundation pit, construction

Procedia PDF Downloads 129
940 Optimization of Manufacturing Process Parameters: An Empirical Study from Taiwan's Tech Companies

Authors: Chao-Ton Su, Li-Fei Chen

Abstract:

Parameter design is crucial to improving the uniformity of a product or process. In the product design stage, parameter design determines the optimal settings for the parameters of each element in the system, thereby minimizing the functional deviations of the product. In the process design stage, it determines the operating settings of the manufacturing processes so that non-uniformity in manufacturing can be minimized. Parameter design, which seeks to minimize the influence of noise on the manufacturing system, plays an important role in high-tech companies. Taiwan has many well-known high-tech companies that play key roles in the global economy, and quality remains the most important factor enabling these companies to sustain their competitive advantage. In Taiwan, however, many high-tech companies face various quality problems. A common challenge relates to root causes and defect patterns: in the R&D stage, root causes are often unknown, and defect patterns are difficult to classify. Additionally, data collection is not easy, and even when high-volume data can be collected, interpretation is difficult. To overcome these challenges, high-tech companies in Taiwan use more advanced quality improvement tools. In addition to traditional statistical methods and quality tools, the new trend is the application of powerful tools such as neural networks, fuzzy theory, data mining, industrial engineering, operations research, and innovation skills. In this study, several examples of optimizing parameter settings for manufacturing processes in Taiwan's tech companies are presented to illustrate the proposed approach's effectiveness. Finally, traditional experimental design is compared with the proposed approach for process optimization.
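One standard metric in parameter design of this kind is Taguchi's signal-to-noise ratio, which ranks candidate settings by their robustness to noise. The sketch below uses the smaller-the-better form on invented data; the abstract does not state which S/N form or data the study used, so this is only a generic illustration of the technique.

```python
import math

def sn_smaller_the_better(values):
    """Taguchi S/N ratio (dB) for a smaller-the-better response:
    -10 * log10(mean of squared values). Higher is better."""
    n = len(values)
    return -10 * math.log10(sum(v * v for v in values) / n)

# Invented defect measurements (noise replicates) under three settings.
trials = {
    "A": [3.0, 4.0, 3.5],
    "B": [1.0, 1.2, 0.9],
    "C": [2.0, 2.5, 2.2],
}
best = max(trials, key=lambda k: sn_smaller_the_better(trials[k]))
```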

Keywords: quality engineering, parameter design, neural network, genetic algorithm, experimental design

Procedia PDF Downloads 123
939 Narrative Identity Predicts Borderline Personality Disorder Features in Inpatient Adolescents up to Six Months after Admission

Authors: Majse Lind, Carla Sharp, Salome Vanwoerden

Abstract:

Narrative identity is the dynamic and evolving story individuals create about their personal pasts, presents, and presumed futures. This storied sense of self develops in adolescence and is crucial for fostering a sense of self-unity and purpose in life. A growing body of work has shown that several characteristics of narrative identity are disturbed in adults suffering from borderline personality disorder (BPD). Very little research, however, has explored the stories told by adolescents with BPD features. Investigating narrative identity early in the lifespan and in relation to personality pathology is crucial, because BPD is a developmental disorder with early signs appearing already in adolescence. In the current study, we examine narrative identity (focusing on themes of agency and communion) coded from self-defining memories derived from the Child Attachment Interview in 174 inpatient adolescents (M = 15.12, SD = 1.52) at the time of admission. The adolescents' social cognition was further assessed on the basis of their reactions to movie scenes (the MASC movie task). They also completed a trauma checklist and self-reported BPD features at three time points: at admission, at discharge, and six months after admission. Preliminary results show that adolescents who told stories containing themes of agency and communion evinced better social cognition and reported lower emotional abuse on the trauma checklist. In addition, adolescents whose stories contained lower levels of agency and communion demonstrated more BPD symptoms at all three time points, even when controlling for the occurrence of traumatic life events. Surprisingly, social cognitive abilities were not significantly associated with BPD features. These preliminary results underscore the importance of narrative identity as an indicator, and potential cause, of incipient personality pathology.
Thus, focusing on diminished themes of narrative-based agency and communion in early adolescence could be crucial in preventing the development of personality pathology over time.

Keywords: borderline personality disorder, inpatient adolescents, narrative identity, follow-ups

Procedia PDF Downloads 130
938 Alternative Ways of Knowing and the Construction of a Department Around a Common Critical Lens

Authors: Natalie Delia

Abstract:

This academic paper investigates the transformative potential of incorporating alternative ways of knowing within the framework of Critical Studies departments. Traditional academic paradigms often prioritize empirical evidence and established methodologies, potentially limiting the scope of critical inquiry. In response, our research seeks to illuminate the benefits and challenges associated with integrating alternative epistemologies, such as indigenous knowledge systems, artistic expressions, and experiential narratives. Drawing upon a comprehensive review of literature and case studies, we examine how alternative ways of knowing can enrich and diversify the intellectual landscape of Critical Studies departments. By embracing perspectives that extend beyond conventional boundaries, departments may foster a more inclusive and holistic understanding of critical issues. Additionally, we explore the potential impact on pedagogical approaches, suggesting that alternative ways of knowing can stimulate alternative teaching methods and enhance student engagement. Our investigation also delves into the institutional and cultural shifts necessary to support the integration of alternative epistemologies within academic settings. We address concerns related to validation, legitimacy, and the potential clash with established norms, offering insights into fostering an environment that encourages intellectual pluralism. Furthermore, the paper considers the implications for interdisciplinary collaboration and the potential for cultivating a more responsive and socially engaged scholarship. By encouraging a synthesis of diverse perspectives, Critical Studies departments may be better equipped to address the complexities of contemporary issues, encouraging a dynamic and evolving field of study. In conclusion, this paper advocates for a paradigm shift within Critical Studies departments towards a more inclusive and expansive approach to knowledge production.
By embracing alternative ways of knowing, departments have the opportunity to not only diversify their intellectual landscape but also to contribute meaningfully to broader societal dialogues, addressing pressing issues with renewed depth and insight.

Keywords: critical studies, alternative ways of knowing, academic department, Wallerstein

Procedia PDF Downloads 37
937 Size Optimization of Microfluidic Polymerase Chain Reaction Devices Using COMSOL

Authors: Foteini Zagklavara, Peter Jimack, Nikil Kapur, Ozz Querin, Harvey Thompson

Abstract:

The invention and development of Polymerase Chain Reaction (PCR) technology have revolutionised molecular biology and molecular diagnostics, and there is an urgent need to optimise the performance of these devices while reducing total construction and operation costs. The present study proposes a CFD-enabled optimisation methodology for continuous flow (CF) PCR devices with a serpentine-channel structure, which enables the trade-offs between the competing objectives of DNA amplification efficiency and pressure drop to be explored. This is achieved with a surrogate-enabled optimisation approach that accounts for the geometrical features of a CF μPCR device by performing a series of simulations at a relatively small number of Design of Experiments (DoE) points, using COMSOL Multiphysics 5.4. The values of the objectives are extracted from the CFD solutions, and response surfaces are created using polyharmonic splines and neural networks. After creating the response surfaces, a genetic algorithm and a multi-level coordinate search optimisation function are used to locate the optimum design parameters. Both optimisation methods produced similar results for both the neural network and the polyharmonic spline response surfaces. The results indicate that the DNA amplification efficiency in one PCR cycle can be improved by ∼2% by doubling the width of the microchannel to 400 μm while maintaining the height of the original design (50 μm). Moreover, the increase in the width of the serpentine microchannel is combined with a decrease in its total length in order to obtain the same residence times in all the simulations, resulting in a smaller total substrate volume (a 32.94% decrease). A multi-objective optimisation is also performed using a Pareto front plot. Such knowledge will enable designers to maximise the amount of DNA amplified or to minimise the time taken for thermal cycling in such devices.
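The surrogate-enabled loop can be illustrated in its simplest one-dimensional form: evaluate an "expensive" simulation at a few DoE points, fit a response surface, and optimise the surface instead of the simulation. Everything below is a toy stand-in: the efficiency model is invented, the surrogate is an exact quadratic rather than the paper's polyharmonic splines or neural networks, and the real problem has more design variables.

```python
def expensive_simulation(width_um):
    """Stand-in for a CFD run: amplification efficiency vs channel width."""
    return -0.0001 * (width_um - 400) ** 2 + 0.98

def fit_quadratic(points):
    """Exact quadratic a*x^2 + b*x + c through three (x, y) points."""
    (x1, y1), (x2, y2), (x3, y3) = points
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    c = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2
         + x1 * x2 * (x1 - x2) * y3) / denom
    return a, b, c

doe_widths = [200.0, 300.0, 500.0]                 # DoE sample points
points = [(w, expensive_simulation(w)) for w in doe_widths]
a, b, c = fit_quadratic(points)
best_width = -b / (2 * a)                          # vertex of the surrogate
```

The surrogate's analytic optimum replaces thousands of direct simulation calls, which is the economy the DoE-plus-response-surface approach buys.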

Keywords: PCR, optimisation, microfluidics, COMSOL

Procedia PDF Downloads 131
936 Semantic Search Engine Based on Query Expansion with Google Ranking and Similarity Measures

Authors: Ahmad Shahin, Fadi Chakik, Walid Moudani

Abstract:

Our study elaborates a potential solution for a search engine that uses semantic technology to retrieve information and display it meaningfully. Semantic search engines are not widely used on the web, as the majority are still in beta or under construction. Current semantic search applications face several problems: the major one is analyzing and calculating the meaning of a query in order to retrieve relevant information; others are the ontology-based index and its updates, and ranking results according to concept meaning and its relation to the query. In this paper, we offer a light meta-engine (QESM) that uses Google search, and therefore Google's index, adapting the returned results through multi-query expansion. The goal was a reliable ranking algorithm that involves semantics and uses concepts and meanings to rank results. First, the engine finds synonyms of each query term entered by the user, based on a lexical database. Then, query expansion is applied to generate different semantically analogous sentences, produced by randomly combining the found synonyms with the original query terms. Our model uses semantic similarity measures between two sentences: in practice, we calculate the semantic similarity between each expanded query and the description of each page's content returned by Google. The generated sentences are sent to the Google engine one by one, and the combined results are re-ranked with the adapted ranking method (QESM). Finally, our system places the Google pages with higher similarity at the top of the results. We conducted experiments with six different queries and observed that the QESM ranking frequently differed from Google's original ordering. On our experimental queries, QESM frequently achieves better accuracy than Google; in the worst cases, it behaves like Google.
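The expand-then-rerank pipeline can be sketched as follows. The synonym table, example pages, and the token-overlap (Jaccard) measure are illustrative stand-ins: the paper uses a lexical database for synonyms and sentence-level semantic similarity measures, not this simple set overlap.

```python
from itertools import product

# Hypothetical synonym table standing in for the lexical database.
SYNONYMS = {"car": ["automobile", "vehicle"], "cheap": ["affordable", "inexpensive"]}

def expand(query):
    """Generate semantically analogous queries by swapping in synonyms."""
    options = [[t] + SYNONYMS.get(t, []) for t in query.split()]
    return [" ".join(c) for c in product(*options)]

def jaccard(a, b):
    """Token-overlap similarity between two sentences (toy measure)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def rerank(query, pages):
    """Score each page description by its best similarity to any expanded
    query, and return pages sorted from most to least similar."""
    queries = expand(query)
    scored = [(max(jaccard(q, page) for q in queries), page) for page in pages]
    return [page for score, page in sorted(scored, reverse=True)]

pages = [
    "affordable vehicle deals this week",
    "history of the automobile industry",
    "cheap flights and hotels",
]
results = rerank("cheap car", pages)
```

Note how the first page, which shares no literal term with the query, still ranks first because an expanded query matches it, which is the effect the meta-engine exploits.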

Keywords: semantic search engine, Google indexing, query expansion, similarity measures

Procedia PDF Downloads 403
935 Powering Profits: A Dynamic Approach to Sales Marketing and Electronics

Authors: Muhammad Awais Kiani, Maryam Kiani

Abstract:

This abstract explores the confluence of sales marketing and consumer electronics and highlights the key factors driving success in sales marketing for electronics. The abstract begins by delving into the ever-evolving landscape of consumer electronics, emphasizing how technological advancements and the growth of smart devices have revolutionized the way people interact with electronics. This paradigm shift has created tremendous opportunities for sales and marketing professionals to engage with consumers on various platforms and channels. Next, the abstract discusses the pivotal role of effective sales marketing strategies in the electronics industry. It highlights the importance of understanding consumer behavior, market trends, and competitive landscapes, and how this knowledge enables businesses to tailor their marketing efforts to specific target audiences. Furthermore, the abstract explores the significance of leveraging digital marketing techniques, such as social media advertising, search engine optimization, and influencer partnerships, to establish brand identity and drive sales in the electronics market. It emphasizes the power of storytelling and creating captivating content to engage with tech-savvy consumers. Additionally, the abstract emphasizes the role of customer relationship management (CRM) systems and data analytics in optimizing sales marketing efforts. It highlights the importance of leveraging customer insights and analyzing data to personalize marketing campaigns, enhance customer experience, and ultimately drive sales growth. Lastly, the abstract concludes by underlining the importance of adapting to the ever-changing landscape of the electronics industry. It encourages businesses to embrace innovation, stay informed about emerging technologies, and continuously evolve their sales marketing strategies to meet the evolving needs and expectations of consumers.
Overall, this abstract sheds light on the captivating realm of sales marketing in the electronics industry, emphasizing the need for creativity, adaptability, and a deep understanding of consumers to succeed in this rapidly evolving market.

Keywords: marketing industry, electronics, sales impact, e-commerce

Procedia PDF Downloads 49
934 Evolving Urban Landscapes: Smart Cities and Sustainable Futures

Authors: Mehrzad Soltani, Pegah Rezaei

Abstract:

In response to the escalating challenges posed by resource scarcity, urban congestion, and the dearth of green spaces, contemporary urban areas have undergone a remarkable transformation into smart cities. This evolution necessitates a strategic and forward-thinking approach to urban development, with the primary objective of diminishing and eventually eradicating dependence on non-renewable energy sources. This steadfast commitment to sustainable development is geared toward the continual enhancement of our global urban milieu, ensuring a healthier and more prosperous environment for forthcoming generations. This transformative vision has been meticulously shaped by an extensive research framework, incorporating in-depth field studies and investigations conducted at both neighborhood and city levels. Our holistic strategy extends its purview to encompass major cities and states, advocating for the realization of exceptional development firmly rooted in the principles of sustainable intelligence. At its core, this approach places a paramount emphasis on stringent pollution control measures, concurrently safeguarding ecological equilibrium and regional cohesion. Central to the realization of this vision is the widespread adoption of environmentally friendly materials and components, championing the cultivation of plant life and harmonious green spaces, and the seamless integration of intelligent lighting and irrigation systems. These systems, including solar panels and solar energy utilization, are deployed wherever feasible, effectively meeting the essential lighting and irrigation needs of these dynamic urban ecosystems. Overall, the transformation of urban areas into smart cities necessitates a holistic and innovative approach to urban development. 
By actively embracing sustainable intelligence and adhering to strict environmental standards, these cities pave the way for a brighter and more sustainable future, one that is marked by resilient, thriving, and eco-conscious urban communities.

Keywords: smart city, green urban, sustainability, urban management

Procedia PDF Downloads 45
933 Road Accident Blackspot Analysis: Development of Decision Criteria for Accident Blackspot Safety Strategies

Authors: Tania Viju, Bimal P., Naseer M. A.

Abstract:

This study aims to develop a conceptual framework for a decision support system (DSS) that helps decision-makers dynamically choose appropriate safety measures for each identified accident blackspot. An accident blackspot is a segment of road where the frequency of accidents is disproportionately greater than on other sections of the roadway. According to a report by the World Bank, India accounts for the highest share, eleven percent, of global road accident deaths, with just one percent of the world's vehicles. Hence, in 2015, the Ministry of Road Transport and Highways of India gave prime importance to the rectification of accident blackspots. To enhance road traffic safety and reduce the traffic accident rate, effectively identifying and rectifying accident blackspots is of great importance. This study evaluates the existing methods of accident blackspot identification and prediction used around the world and their application to Indian roadways. The decision support system, with the help of IoT, ICT, and smart systems, acts as a management and planning tool that helps the government employ efficient and cost-effective rectification strategies. To develop the decision criteria, several quantitative and qualitative factors that influence the safety conditions of the road are analyzed, including past accident severity data, occurrence time, light, weather and road conditions, visibility, driver conditions, junction type, land use, road markings and signs, and road geometry. The framework conceptualizes decision-making by classifying blackspot stretches based on factors like accident occurrence time and different climatic and road conditions, and by suggesting mitigation measures based on these identified factors.
The decision support system will help the public administration dynamically manage and plan the necessary safety interventions required to enhance the safety of the road network.
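A minimal fragment of such decision criteria might look like the following: classify each blackspot by its dominant accident conditions and map the class to a candidate intervention. The thresholds, class names, and interventions here are invented for illustration; the actual DSS would derive them from the severity, climatic, and road-condition factors listed above.

```python
def classify_blackspot(spot):
    """Coarse class for a blackspot record based on when and under what
    conditions most of its accidents occur (hypothetical thresholds)."""
    if spot["night_share"] > 0.6:
        return "night-dominant"
    if spot["wet_share"] > 0.5:
        return "weather-dominant"
    return "general"

# Hypothetical mapping from blackspot class to a candidate safety measure.
INTERVENTIONS = {
    "night-dominant": "improve street lighting and reflective markings",
    "weather-dominant": "anti-skid surfacing and drainage improvement",
    "general": "junction redesign study",
}

spot = {"id": "NH66-km412", "night_share": 0.7, "wet_share": 0.3}
measure = INTERVENTIONS[classify_blackspot(spot)]
```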

Keywords: decision support system, dynamic management, road accident blackspots, road safety

Procedia PDF Downloads 112
932 Wetting Features of Butterflies Morpho Peleides and Anti-icing Behavior

Authors: Burdin Louise, Brulez Anne-Catherine, Mazurcyk Radoslaw, Leclercq Jean-louis, Benayoun Stéphane

Abstract:

Using a biomimetic approach, an investigation was conducted to determine the connections between morphology and wetting. The interest is focused on the Morpho peleides butterfly, already well-known among researchers for its brilliant iridescent color, which has inspired numerous innovations. The intricate structure of its wings is responsible for that color, but this multiscale structure exhibits a multitude of other features, such as hydrophobicity. Given the limited research on the wetting properties of the Morpho butterfly, a detailed analysis of its wetting behavior is proposed. Multiscale surface topographies of the Morpho peleides butterfly were analyzed using scanning electron microscopy and atomic force microscopy. To understand the relationship between morphology and wettability, a goniometer was employed to measure static and dynamic contact angles. Since several studies have consistently demonstrated that superhydrophobic surfaces can effectively delay freezing, the icing delay time of the Morpho's wings was also measured. The results revealed contact angles close to 136°, indicating a high degree of hydrophobicity. Moreover, sliding angles (SA) were measured in different directions, both along and against the rolling-outward direction, and the findings suggest anisotropic wetting. Specifically, when the wing was tilted along the rolling-outward direction (i.e., away from the insect's body), the SA was about 7°, whereas when the wing was tilted against the rolling-outward direction, the SA was about 29°. This phenomenon is directly linked to the butterfly's survival strategy. To investigate the exclusively morphological impact on anti-icing properties, PDMS replicas of the Morpho butterfly were obtained. Compared to flat PDMS and microscale textured PDMS, the Morpho replicas exhibited a longer freezing time.
Therefore, this could be a source of inspiration for designing superhydrophobic surfaces with anti-icing applications or functional surfaces with controlled wettability.

Keywords: biomimetic, anisotropic wetting, anti-icing, multiscale roughness

Procedia PDF Downloads 36
931 A Numerical Model for Simulation of Blood Flow in Vascular Networks

Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia

Abstract:

An accurate study of blood flow depends on an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data on those parts of the vascular tree that are smaller than the resolution of medical imaging methods; this technique reconstructs the entire arterial tree down to the capillaries. Vessels greater than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray's law. Implementing morphometric data to reconstruct the branching pattern while applying Murray's law at every vessel bifurcation leads to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize the construction and simulation time. The node-to-node connectivity, along with the diameter and length of every vessel segment, is established, and order numbers are assigned according to the diameter-defined Strahler system. During the simulation, we used the averaged flow rate for each order to predict the pressure drop, and once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for three cardiac cycles are presented and compared to clinical data.
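The bifurcation rule the reconstruction relies on is Murray's law: at a branch point the cube of the parent radius equals the sum of the cubes of the daughter radii, r₀³ = r₁³ + r₂³. The sketch below checks a bifurcation against the law and derives the second daughter radius; the radii are illustrative values, not data from the study.

```python
def murray_residual(r_parent, r_daughters):
    """How far a bifurcation deviates from Murray's law (0 = exact fit)."""
    return r_parent ** 3 - sum(r ** 3 for r in r_daughters)

def second_daughter_radius(r_parent, r_first):
    """Radius of the second daughter vessel implied by Murray's law."""
    return (r_parent ** 3 - r_first ** 3) ** (1 / 3)

# Example bifurcation: parent radius 1.0 mm, first daughter 0.8 mm.
r2 = second_daughter_radius(1.0, 0.8)
residual = murray_residual(1.0, [0.8, r2])
```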

Keywords: blood flow, morphometric data, vascular tree, Strahler ordering system

Procedia PDF Downloads 247
930 Computational Modeling of Load Limits of Carbon Fibre Composite Laminates Subjected to Low-Velocity Impact Utilizing Convolution-Based Fast Fourier Data Filtering Algorithms

Authors: Farhat Imtiaz, Umar Farooq

Abstract:

In this work, we developed a computational model to predict ply-level failure in impacted composite laminates. Data obtained from physical testing of flat- and round-nose impacts on 8-, 16-, and 24-ply laminates were considered. Routine inspections of the tested laminates were carried out to approximate the ply-by-ply damage incurred. Plots of load–time, load–deflection, and energy–time history were drawn to approximate the inflicted damage. Unwanted data logged during impact testing, owing to restrictions of the testing and logging systems, were also filtered. Conventional filters (built-in, statistical, and numerical) reliably predicted load thresholds for relatively thin laminates such as eight- and sixteen-ply panels. However, relatively thick laminates, such as twenty-four-ply laminates impacted by a flat nose, generated clipped data that could only be de-noised using oscillatory algorithms. The literature search reveals that modern oscillatory data filtering and extrapolation algorithms have scarcely been utilized. This investigation reports applications of filtering and extrapolation of the clipped data utilising a fast Fourier convolution algorithm to predict load thresholds. Some of the results were related to the impact-induced damage areas identified with ultrasonic C-scans and found to be in acceptable agreement. Based on these consistent findings, applying modern data filtering and extrapolation algorithms to data logged by existing machines has efficiently enhanced data interpretation without resorting to extra resources. The algorithms could be useful for impact-induced damage approximations in similar cases.
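The convolution-based filtering described above can be sketched in miniature as follows. This is an illustrative direct convolution with a simple averaging kernel, not the authors' implementation; the fast Fourier variant computes the same convolution via frequency-domain multiplication, which is faster for long load–time records:

```python
def convolve_filter(signal, kernel):
    # Direct discrete convolution with "same"-length output.
    # An FFT-based filter produces the same result by multiplying
    # the transforms of signal and kernel and inverting.
    n, m = len(signal), len(kernel)
    half = m // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < n:
                acc += signal[idx] * k
        out.append(acc)
    return out

# smooth a hypothetical spike-contaminated load trace
trace = [0, 1, 2, 10, 2, 1, 0]
kernel = [0.2] * 5   # 5-point moving average
smoothed = convolve_filter(trace, kernel)
```

The averaging kernel suppresses the spurious spike while preserving the overall load history, which is the effect the de-noising step above relies on before load thresholds are read off.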

Keywords: fibre reinforced laminates, fast Fourier algorithms, mechanical testing, data filtering and extrapolation

Procedia PDF Downloads 115
929 Numerical Solution of a Portfolio Selection Semi-Infinite Problem

Authors: Alina Fedossova, Jose Jorge Sierra Molina

Abstract:

SIP problems are part of non-classical optimization. These are problems in which the number of variables is finite and the number of constraints is infinite: semi-infinite programming problems. Most algorithms for semi-infinite programming problems reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, at least one of the constraints or the objective function is nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific purposes of investors. The risk of the entire portfolio may be less than the risks of the individual investments in the portfolio. For example, suppose we invest M euros in N shares for a specified period. Let y_i > 0 be the return at the end of the period on each euro invested in stock i (i = 1, ..., N). The goal is to determine the amount x_i to be invested in stock i, i = 1, ..., N, such that the end-of-period value y^T x is maximized, where x = (x_1, ..., x_N) and y = (y_1, ..., y_N). For us, the optimal portfolio is the portfolio with the best risk-return trade-off among those that meet the investor's goals and risk tolerance. Therefore, investment goals and risk appetite are the factors that influence the choice of an appropriate portfolio of assets. The investment returns are uncertain; thus we have a semi-infinite programming problem. We solve this semi-infinite optimization problem of portfolio selection using outer approximation methods. This approach can be considered a development of the Eaves-Zangwill method, applying the multi-start technique in every iteration for the search of the relevant constraints' parameters. The stochastic outer approximations method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution, and others, is based on the optimality criteria of quasi-optimal functions. As a result, we obtain a mathematical model and the optimal investment portfolio when yields are not known from the beginning. Finally, we apply this algorithm to a specific case of a Colombian bank.
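To illustrate the reduction that outer approximation methods rely on, a semi-infinite constraint g(x, t) ≤ 0 for all t in an infinite set T can be checked, and iteratively tightened, against a finite sample of T. The fragment below is an illustrative sketch under assumed names, not the authors' algorithm:

```python
def worst_parameter(x, g, t_grid):
    # The sampled parameter where g(x, t) <= 0 is most violated;
    # outer approximation methods add such parameters to the finite
    # constraint set at each iteration.
    return max(t_grid, key=lambda t: g(x, t))

def is_feasible(x, g, t_grid, tol=1e-9):
    # Feasibility against the finite relaxation of the
    # semi-infinite constraint set.
    return g(x, worst_parameter(x, g, t_grid)) <= tol

# hypothetical constraint: t * x - 1 <= 0 for all t in [0, 1]
g = lambda x, t: t * x - 1.0
t_grid = [i / 100.0 for i in range(101)]
```

With this toy constraint, any x ≤ 1 is feasible while x = 2 is not; the finite problem built on t_grid approximates the semi-infinite one from the outside, which is exactly the relaxation the method refines at each step.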

Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution

Procedia PDF Downloads 280
928 3D Codes for Unsteady Interaction Problems of Continuous Mechanics in Euler Variables

Authors: M. Abuziarov

Abstract:

The designed complex is intended for the numerical simulation of fast dynamic processes of interaction between heterogeneous media susceptible to significant deformation. The main challenges in solving such problems are associated with the construction of the numerical meshes. Currently, there are two basic approaches. One uses a Lagrangian or Lagrangian-Eulerian grid tied to the boundaries of the media; the second uses a fixed Eulerian mesh whose boundary cells cut the boundaries of the media and require the calculation of the cut volumes. Both approaches require complex grid generators and significant time for preparing the code's data for simulation. In these codes, these problems are solved using two grids: a regular fixed one and a mobile local Eulerian-Lagrangian grid (the ALE approach) tracking the contact and free boundaries, the surfaces of shock waves and phase transitions, and other possible features of the solutions, with mutual interpolation of the integrated parameters. For modeling both fluids (liquids and gases) and deformable solids, a Godunov scheme of increased accuracy is used in Lagrangian-Eulerian variables, the same for the Euler equations and for the Euler-Cauchy equations describing the deformation of the solid. The increased accuracy of the scheme is achieved by using a 3D space-time-dependent solution of the discontinuity problem (a 3D space-time-dependent Riemann problem solver). The same solution is used to calculate the interaction at the liquid-solid surface (the fluid-structure interaction problem). The codes do not require complex 3D mesh generators; only the surfaces of the objects to be calculated, as STL files created by means of engineering graphics, are supplied by the user, which greatly simplifies preparing the task and makes the codes convenient for direct use by the designer at the design stage. The results of test solutions and applications related to the generation and propagation of detonation and shock waves loading structures are presented.
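As a minimal illustration of the flux structure of a Godunov-type scheme, the sketch below advances the 1D linear advection equation u_t + a·u_x = 0 with the first-order upwind (Godunov) update on a periodic grid. The codes described above use a far more elaborate higher-order scheme with a 3D space-time Riemann solver, so this is purely schematic:

```python
def godunov_advection(u, a, dx, dt, steps):
    # First-order Godunov (upwind) update for u_t + a*u_x = 0 with a > 0,
    # periodic boundaries. Each cell is updated by the difference of
    # interface fluxes, where each interface flux is the exact solution
    # of the local Riemann problem (here simply a * u_left).
    c = a * dt / dx  # CFL number; stability requires c <= 1
    for _ in range(steps):
        u = [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]
    return u

# advect a hypothetical pulse; total mass (the sum) is conserved exactly
pulse = [0.0] * 10
pulse[3] = 1.0
advected = godunov_advection(pulse, a=1.0, dx=1.0, dt=0.5, steps=4)
```

The exact conservation of the cell sums is the defining property of such flux-difference schemes, and it is what makes them suitable for the shock and detonation problems mentioned above.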

Keywords: fluid structure interaction, Riemann's solver, Euler variables, 3D codes

Procedia PDF Downloads 416
927 Determination of Crustal Structure and Moho Depth within the Jammu and Kashmir Region, Northwest Himalaya through Receiver Function

Authors: Shiv Jyoti Pandey, Shveta Puri, G. M. Bhat, Neha Raina

Abstract:

The Jammu and Kashmir (J&K) region of Northwest Himalaya has a long history of earthquake activity and falls within Seismic Zones IV and V. To determine the crustal structure beneath this region, we utilized the teleseismic receiver function method. This paper presents the results of analyses of the teleseismic earthquake waves recorded by 10 seismic observatories installed in the vicinity of major thrusts and faults. Teleseismic waves at epicentral distances between 30° and 90°, with moment magnitudes greater than or equal to 5.5, which contain a large amount of information about the crust and upper mantle structure directly beneath a receiver, have been used. The receiver function (RF) technique has been widely applied to investigate crustal structures using P-to-S converted (Ps) phases from velocity discontinuities. The arrival times of the Ps, PpPs, and PpSs+PsPs converted and reverberated phases from the Moho can be combined to constrain the mean crustal thickness and Vp/Vs ratio. Over 500 receiver functions from 10 broadband stations located in the Jammu & Kashmir region of Northwest Himalaya were analyzed. With the help of the H-K stacking method, we determined the crustal thickness (H) and average crustal Vp/Vs ratio (K) in this region. We also used the neighbourhood algorithm technique to verify our results. The receiver function results for these stations show that the crustal thickness under Jammu & Kashmir ranges from 45.0 to 53.6 km, with an average value of 50.01 km. The Vp/Vs ratio varies from 1.63 to 1.99, with an average value of 1.784, which corresponds to an average Poisson's ratio of 0.266, with a range from 0.198 to 0.331. High Poisson's ratios under some stations may be related to partial melting in the crust near the uppermost mantle. The crustal structure model developed in this study can be used to refine the velocity model used for precise epicenter location in the region, thereby improving understanding of the current seismicity in the region.
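The H-K stacking mentioned above grid-searches crustal thickness H and Vp/Vs ratio K, stacking receiver-function amplitudes at the arrival times each (H, K) pair predicts. The predicted Moho Ps-P delay follows the standard moveout formula; the sketch below evaluates it for illustrative values (not the study's velocity model):

```python
import math

def ps_delay(H, vp, k, p=0.06):
    # Predicted delay (s) of the Moho Ps conversion behind direct P for
    # crustal thickness H (km), P velocity vp (km/s), Vp/Vs ratio k, and
    # ray parameter p (s/km). H-K stacking sums RF amplitudes at this
    # time (and at the PpPs and PpSs+PsPs reverberation times) over a
    # grid of (H, k) values and picks the maximum of the stack.
    vs = vp / k
    return H * (math.sqrt(1.0 / vs**2 - p**2) - math.sqrt(1.0 / vp**2 - p**2))

# e.g. a ~50 km thick crust with an assumed vp of 6.3 km/s and Vp/Vs = 1.78
delay = ps_delay(50.0, 6.3, 1.78)
```

Because a thicker crust produces a later Ps arrival, the stack attains its maximum only where the predicted times line up with the observed phases, which is how H and K are jointly constrained.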

Keywords: H-K stacking, Poisson’s ratios, receiver function, teleseismic

Procedia PDF Downloads 220
926 Detecting Palaeochannels Based on Optical Data and High-Resolution Radar Data for the Periyar River Basin

Authors: S. Jayalakshmi, Gayathri S., Subiksa V., Nithyasri P., Agasthiya

Abstract:

Palaeochannels are the buried parts of an active river system that were separated from the active river channel by the process of cutoff or abandonment during the dynamic evolution of the river. Over time, they are filled by young unconsolidated or semi-consolidated sediments. Additionally, they are impacted by geomorphological influences, lineament alterations, and other factors. The primary goal of this study is to identify the palaeochannels in the Periyar river basin for the year 2023. These channels have a high probability of hosting natural resources, including gold, platinum, tin, and uranium. Numerous techniques are used to map palaeochannels. For the optical data, satellite images were collected from various sources, comprising multispectral imagery from which indices such as the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), and the Soil Adjusted Vegetation Index (SAVI), and thematic layers such as lithology, stream network, and lineament, were prepared. Weights were assigned to each layer based on its importance, and overlay analysis was carried out, which indicated that the northwest region of the study area shows some palaeochannel patterns. The results were cross-verified against those obtained using microwave data. Using Sentinel data, a Synthetic Aperture Radar (SAR) image was extracted from the European Space Agency (ESA) portal and pre-processed using SNAP 6.0. In addition, a polarimetric decomposition technique was incorporated to detect the palaeochannels based on their scattering properties. Further, principal component analysis was performed for enhanced output imagery. The results obtained from the optical and microwave radar data were compared, and the locations of the palaeochannels were detected. This resulted in six palaeochannels in the study area, of which three were validated against existing data published by the Department of Geology and Environmental Science, Kerala. The other three palaeochannels were newly detected with the help of the SAR image.
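The index computation and weighted overlay described above can be sketched per pixel as follows; the layer scores and weights are illustrative, not those used in the study:

```python
def ndvi(nir, red):
    # Normalized Difference Vegetation Index from near-infrared and
    # red reflectances; NDWI and SAVI follow similar ratio forms.
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def weighted_overlay(scores, weights):
    # Combine per-layer suitability scores (e.g. NDVI, NDWI, lithology,
    # lineament density) into one palaeochannel-likelihood score,
    # weighting each layer by its assumed importance.
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# hypothetical pixel: moist, sparsely vegetated ground over favourable lithology
score = weighted_overlay([ndvi(0.30, 0.25), 0.8, 0.6], [2.0, 3.0, 1.0])
```

Running such a weighted sum over every pixel of the co-registered layers yields the likelihood surface from which the palaeochannel patterns in the northwest of the study area were picked out.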

Keywords: paleochannels, optical data, SAR image, SNAP

Procedia PDF Downloads 59
925 The Price of Knowledge in the Times of Commodification of Higher Education: A Case Study on the Changing Face of Education

Authors: Joanna Peksa, Faith Dillon-Lee

Abstract:

Current developments in Western economies have turned some universities into corporate institutions driven by practices of production and commodity. Academia is increasingly becoming integrated into national economies as a result of students paying fees and is consequently using business practices in student retention and engagement. With these changes, pedagogy's status as a priority within the institution has been shifting in light of the new demands. New strategies have blurred the boundaries that separate a student from a client. This has changed the dynamic, disrupting the traditional idea of the knowledge market and emphasizing the corporate aspect of universities. In some cases, where students are seen primarily as customers, the purpose of academia is no longer to educate but to sell a commodity and retain fee-paying students. This paper considers opposing viewpoints on the commodification of higher education, reflecting on the reality of maintaining a pedagogic grounding in an increasingly commercialized sector. By analysing a case study of the Student Success Festival, an event that involved academic and marketing teams, the differences between the respective visions of the pedagogic arm of the university and the corporate one are considered. This study argues that the initial concept of the event, based on the principles of gamification, independent learning, and cognitive criticality, was more clearly linked to a grounded pedagogic approach. However, when liaising with the marketing team at a crucial step in the creative process, it became apparent that these principles were not considered a priority within their remit. While the study acknowledges the power of pedagogy, the findings show that a pact of concord is necessary between the different stakeholders in order for students to benefit fully from their learning experience. Nevertheless, where power is unevenly distributed, reaching a consensus becomes increasingly challenging, and further research should closely monitor developments in pedagogy in UK higher education.

Keywords: economic pressure, commodification, pedagogy, gamification, public service, marketization

Procedia PDF Downloads 105
924 Seismic Retrofit of Reinforced Concrete Structures by Highly Dissipative Technologies

Authors: Stefano Sorace, Gloria Terenzi, Giulia Mazzieri, Iacopo Costoli

Abstract:

The prolonged earthquake sequence that struck several urban agglomerations and villages in Central Italy from 24 August 2016 through January 2017 highlighted once again the seismic vulnerability of pre-normative reinforced concrete (R/C) structures. At the same time, considerable damage was surveyed in recently retrofitted R/C buildings too, one of which had been retrofitted by means of a dissipative bracing system. The solution adopted for the latter did not expressly take into account the performance of non-structural elements, namely infills and partitions, confirming the importance of their dynamic interaction with the structural skeleton. Based on this consideration, an alternative supplemental damping-based retrofit solution for this representative building, a school with an R/C structure situated in the municipality of Norcia, is examined in this paper. It consists of the incorporation of dissipative braces equipped with pressurized silicone fluid viscous (FV) dampers instead of the BRAD system installed in the building, whose delayed activation, caused by the high stiffness of the constituent metallic dampers, determined the observed non-structural damage. Indeed, the alternative solution proposed herein, characterized by dissipaters with predominantly damping mechanical properties, guarantees an earlier activation of the protective system. A careful assessment analysis, preliminarily carried out to simulate and check the performance of the case-study building in its original BRAD-retrofitted condition, confirms that the interstorey drift demand related to the Norcia earthquake's mainshock and aftershocks is beyond the response capacity of the infills. The verification analyses carried out on the R/C structure including the FV-damped braces highlight their higher performance, giving rise to a completely undamaged response of both structural and non-structural elements up to the basic design earthquake normative level of seismic action.

Keywords: dissipative technologies, performance assessment analysis, concrete structures, seismic retrofit

Procedia PDF Downloads 102
923 Nanoparticles Made from PNIPAM-G-PEO Double Hydrophilic Copolymers for Temperature-Controlled Drug Delivery

Authors: Victoria I. Michailova, Denitsa B. Momekova, Hristiana A. Velichkova, Evgeni H. Ivanov

Abstract:

The aim of this work is to design and develop thermo-responsive nanosized drug delivery systems based on poly(N-isopropylacrylamide)-g-poly(ethylene oxide) (PNIPAM-g-PEO) double hydrophilic graft copolymers. The PNIPAM-g-PEO copolymers are able to self-assemble in water into nanoparticles above the LCST of the thermo-responsive PNIPAM backbone and to disassemble and rapidly release the entrapped drugs upon cooling. However, their drug delivery applications are often hindered by their low loading capacity, as the drugs to be encapsulated do not dissolve in water. To overcome this limitation, here we applied a low-temperature procedure with ethanol as an alternative route to forming PNIPAM-g-PEO nanoparticles and loading them with a model hydrophobic drug, indomethacin (IMC). The rationale for this approach was that ethanol dissolves both IMC and the copolymer, and its mixing with water may induce micellization of PNIPAM-g-PEO at temperatures lower than the LCST. The influence of the volume fraction of ethanol and the temperature on the aggregation characteristics of the PNIPAM-g-PEO copolymers (2.7 mol% PEO) was investigated by means of DLS, TEM, and rheological dynamic oscillatory tests. The studies showed rich phase behavior at T < LCST, including the formation of highly solvated 500-1000 nm complex structures, 30-70 nm micelles and polymersomes, as well as giant polymersomes, as the fraction of added ethanol increased. We believe that the PNIPAM-g-PEO self-assembly is favored by the different solvation of its constituent blocks in ethanol-water mixtures. The incorporation of IMC altered the physicochemical and morphological characteristics of the blank nanoparticles. In this case, only monodisperse polymersomes and micelles were observed in the solutions, with an average diameter of less than 65 nm and substantial drug loading (DLC ~117-146 wt%). Indomethacin release from the nanoparticles was responsive to temperature changes, being much faster at 42 °C than at 37 °C under otherwise identical conditions. The results obtained suggest that these PNIPAM-g-PEO nanoparticles could have potential for mild hyperthermic delivery of nonsteroidal anti-inflammatory drugs.

Keywords: drug delivery, nanoparticles, poly(N-isopropylacrylamide)-g-poly(ethylene oxide), thermo-responsive

Procedia PDF Downloads 259
922 Multi-Elemental Analysis Using Inductively Coupled Plasma Mass Spectrometry for the Geographical Origin Discrimination of Greek Giant Beans “Gigantes Elefantes”

Authors: Eleni C. Mazarakioti, Anastasios Zotos, Anna-Akrivi Thomatou, Efthimios Kokkotos, Achilleas Kontogeorgos, Athanasios Ladavos, Angelos Patakas

Abstract:

“Gigantes Elefantes” is a particularly dynamic crop of giant beans cultivated in western Macedonia (Greece). This variety of large beans, grown in this area and specifically in the regions of Prespes and Kastoria, is a protected designation of origin (PDO) product with high nutritional quality. Mislabeling of geographical origin and blending with unidentified samples are common fraudulent practices in the Greek food market, with financial and possible health consequences. In recent decades, multi-elemental composition analysis has been used to identify the geographical origin of foods and agricultural products. In an attempt to discriminate the authenticity of Greek beans, multi-elemental analysis (Ag, Al, As, B, Ba, Be, Ca, Cd, Co, Cr, Cs, Cu, Fe, Ga, Ge, K, Li, Mg, Mn, Mo, Na, Nb, Ni, P, Pb, Rb, Re, Se, Sr, Ta, Ti, Tl, U, V, W, Zn, Zr) was performed by inductively coupled plasma mass spectrometry (ICP-MS) on 320 samples of beans originating from Greece (Prespes and Kastoria), China, and Poland. All samples were collected during the autumn of 2021. The obtained data were analysed by principal component analysis (PCA), an unsupervised statistical method that reduces the dimensionality of large datasets. Statistical analysis revealed a clear separation of the beans cultivated in Greece from those from China and Poland. An adequate discrimination of geographical origin between bean samples originating from the two Greek regions, Prespes and Kastoria, was also evident. Our results suggest that multi-elemental analysis combined with an appropriate multivariate statistical method could be a useful tool for the geographical authentication of beans. Acknowledgment: This research has been financed by the Public Investment Programme/General Secretariat for Research and Innovation, under the call “YPOERGO 3, code 2018SE01300000”; project title: ‘Elaboration and implementation of methodology for authenticity and geographical origin assessment of agricultural products’.
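As a minimal illustration of the dimensionality reduction PCA performs, the sketch below computes the principal-axis variances for pairs of element concentrations via the 2x2 covariance matrix. The real analysis operates on all 37 elements at once, and the numbers here are illustrative only:

```python
import math

def pca_2d(points):
    # Eigenvalues of the 2x2 covariance matrix of (x, y) samples,
    # i.e. the variance captured along the first and second
    # principal components.
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in points) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in points) / (n - 1)
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 + disc, tr / 2.0 - disc

# two perfectly correlated "element concentrations": all variance on PC1
pc1_var, pc2_var = pca_2d([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)])
```

When many elements co-vary with growing region, as they appear to here, a few principal components capture most of the variance, which is why the samples separate cleanly in the first PCA scores.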

Keywords: geographical origin, authenticity, multi-elemental analysis, beans, ICP-MS, PCA

Procedia PDF Downloads 50
921 Lifetime Attachment: Adult Daughters' Attachment to Their Old Mothers

Authors: Meltem Anafarta Şendağ, Funda Kutlu

Abstract:

Attachment theory has several major postulates that direct the attention of psychologists across many different domains. First, the theory suggests that attachment is a lifetime process: every human being, from the cradle to the grave, needs someone stronger to depend on in times of stress. Second, attachment is a dynamic process, and as one goes through developmental stages it is transferred from one figure to another (friends, romantic partners). Third, the quality of attachment relationships later in life is directly affected by the earliest attachment relationship established between the mother and the infant. Building on these postulates, the attachment literature focuses mostly on mother-child attachment during childhood and romantic relationships during adulthood. However, although romantic partners are important attachment figures in adults' lives, parents do not drop out of the attachment hierarchy but remain important attachment figures. Despite this, adult-parent attachment has been overlooked in the literature. Accordingly, this study focuses on adult daughters' current attachment to their old mothers in relation to early parental bonding and current attachment to husbands. Participants in the study were 383 adult women (average age = 40, ranging between 23 and 70) whose mothers were still alive and who were married at the time of the study. Participants completed the Adult Attachment Scale, the Parental Bonding Instrument, and Experiences in Close Relationships-II, together with a demographic questionnaire. Results revealed that daughters' attachment to their mothers weakens as they get older, have more children, and have longer marriages. Stronger attachment to mothers was found to be positively correlated with current satisfaction with the relationship and the perception of maternal care before the age of 12, and negatively correlated with the perception of controlling behavior before the age of 12. Considering the relationship between current parental attachment and romantic attachment, it was found that as current attachment to the mother strengthens, attachment avoidance towards the husband decreases. The results revealed that although the attachment between adult daughters and their old mothers weakens, the relationship remains critical in daughters' lives. The strength of the current attachment to the mother is related both to the early relationship with the mother and to the current attachment to the husband. The current study is thought to contribute to attachment theory by emphasizing attachment as a lifetime construct.

Keywords: adult daughter, attachment, old mothers, parental bonding

Procedia PDF Downloads 302
920 Prevalence and Management of Hypertension among the Nomadic Migratory Community of Marsabit County, Kenya: Lessons Learned and Way Forward

Authors: Wesley Too, Christine Chesiror

Abstract:

Hypertension is a global public health challenge, with the World Health Organization estimating that by 2025 more than 1.5 billion people will have been diagnosed with it. Kenya's prevalence of hypertension is estimated at 24.6 percent; however, 55 percent of those affected have uncontrolled blood pressure, and control is worst in parts of the country with distinct lifestyles, such as nomadic and migratory communities. Kenyan pastoralists comprise 20 percent of the nation's population and are constantly on the move in search of water and pasture for their herds, and desertification has driven nomadic populations to the brink, given their unique and dynamic challenges. Nomads face a myriad of challenges and barriers to the management of their health care problems. Nomadic areas are predominantly rural, with a low population density. Health care access and quality are further hampered by poor telecommunications, infrastructure, and security. In Kenya, nomadic communities experience the worst health outcomes and disproportionate health disparities and inequalities, owing to a health care system that is not responsive or culturally sensitive to nomads' lifestyle and health care needs. Marsabit, covering a surface area of 66,923.1 km2, is the second largest county in Kenya; the North-Eastern region it belongs to holds about 2.3 million people but only 2.3 percent of Kenya's doctors and 1.9 percent of its nurses. There is scanty research on hypertension management in this region and, at best, no study on hypertension among the nomadic migratory communities of Northern Kenya. Therefore, this study seeks to determine the prevalence of hypertension among nomads and to document nomads' practices regarding early detection, management, and levels of control of hypertension in one of the counties in Kenya with a high hypertension caseload per year. Methods: A cross-sectional study design was used to collect data from multiple sites and health facilities. A total of 260 participants were enrolled in the study. The study is currently ongoing; it is anticipated that initial findings and recommendations will be available to share at the conference by September.

Keywords: pastoralists, hypertension, health, Kenya

Procedia PDF Downloads 86
919 Impact of Drainage Defect on the Railway Track Surface Deflections; A Numerical Investigation

Authors: Shadi Fathi, Moura Mehravar, Mujib Rahman

Abstract:

The railway transportation network in the UK is over 100 years old and is known as one of the oldest mass transit systems in the world. This aged track network requires frequent closure for maintenance. One of the main reasons for closure is inadequate drainage due to leakage in the buried drainage pipes. The leaking water can cause localised subgrade weakness, which can subsequently lead to major ground/substructure failure. Different condition assessment methods are available to assess the railway substructure. However, the existing condition assessment methods are not able to detect local ground weakness/damage or provide details of the damage (e.g., size and location). To tackle this issue, a hybrid back-analysis technique based on an artificial neural network (ANN) and a genetic algorithm (GA) has been developed to predict the substructure layers' moduli and identify any soil weaknesses. First, a finite element (FE) model of a railway track section under Falling Weight Deflectometer (FWD) testing was developed and validated against a field trial. Then a drainage pipe and various scenarios of local defects/soil weaknesses around the buried pipe, with various geometries and physical properties, were modelled. The impact of the local soil weakness on the track surface deflection was also studied. The FE simulation results were used to generate a database for ANN training, and then a GA was employed as an optimisation tool to back-calculate the layers' moduli and the soil weakness moduli (the ANN's inputs). The hybrid ANN-GA back-analysis technique is a computationally efficient method with no dependency on seed modulus values. The model can estimate the substructure layer moduli and detect the presence of any localised foundation weakness.
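The GA half of the hybrid scheme can be sketched as a minimal evolutionary search. This is an illustrative one-parameter toy under assumed settings, not the authors' implementation: in their scheme, the fitness would be the mismatch between measured FWD deflections and the ANN-predicted deflections for a candidate set of layer moduli.

```python
import random

def genetic_search(fitness, bounds, pop=30, gens=60, seed=0):
    # Minimal GA: binary-tournament selection plus Gaussian mutation,
    # minimising "fitness" over a single bounded parameter.
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a, b = rng.choice(xs), rng.choice(xs)
            parent = a if fitness(a) < fitness(b) else b     # tournament
            child = parent + rng.gauss(0.0, (hi - lo) * 0.05)  # mutation
            nxt.append(min(hi, max(lo, child)))
        xs = nxt
    return min(xs, key=fitness)

# hypothetical 1-parameter back-calculation: recover a "modulus" of 42
best = genetic_search(lambda m: (m - 42.0) ** 2, (0.0, 100.0))
```

Because the GA only evaluates the fitness function, it needs no seed modulus values or gradients, which is the property the abstract highlights for the hybrid ANN-GA technique.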

Keywords: finite element (FE) model, drainage defect, falling weight deflectometer (FWD), hybrid ANN-GA

Procedia PDF Downloads 130
918 Using Machine Learning to Classify Different Body Parts and Determine Healthiness

Authors: Zachary Pan

Abstract:

Our general mission is to solve the problem of classifying images into different body part types and deciding whether each of them is healthy or not. For now, however, we will determine healthiness for only one-sixth of the body parts, specifically the chest: we will detect pneumonia in X-ray scans of those chest images. With this type of AI, doctors can use it as a second opinion when they are taking CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses such as fatigue. The overall approach is to split the problem into two parts: first, classify the image, then determine if it is healthy. In order to classify the image into a specific body part class, the body parts dataset must be split into test and training sets. We can then use many models, such as neural networks or logistic regression models, and fit them using the training set. Using the test set, we can then obtain a realistic estimate of the accuracy the models will have on images in the real world, since these testing images have never been seen by the models before. To increase this testing accuracy, we can also apply more elaborate algorithms to the models, such as multiplicative weight update. For the second part of the problem, determining if the body part is healthy, we can take another dataset consisting of healthy and non-healthy images of the specific body part and once again split it into test and training sets. We then train another neural network on those training set images and use the testing set to determine its accuracy. We do this process only for the chest images. A major conclusion reached is that convolutional neural networks are the most reliable and accurate at image classification. In classifying the images, the logistic regression model, the neural network, the neural network with multiplicative weight update, the neural network with the black box algorithm, and the convolutional neural network achieved 96.83 percent, 97.33 percent, 97.83 percent, 96.67 percent, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines whether the images are healthy or not is around 78.37 percent.
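The train/test separation this evaluation depends on can be sketched as follows; the split fraction and names are illustrative, not taken from the study:

```python
import random

def train_test_split(data, labels, test_frac=0.2, seed=0):
    # Hold out a fraction of the images so that accuracy is measured
    # on samples the model has never seen, as described above.
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1.0 - test_frac))
    tr, te = idx[:cut], idx[cut:]
    return ([data[i] for i in tr], [labels[i] for i in tr],
            [data[i] for i in te], [labels[i] for i in te])

def accuracy(predictions, truth):
    # fraction of held-out images classified correctly
    return sum(p == t for p, t in zip(predictions, truth)) / len(truth)
```

Shuffling before the split matters: if the dataset is ordered by class, a plain slice would put whole classes in the test set and the reported accuracies would be meaningless.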

Keywords: body part, healthcare, machine learning, neural networks

Procedia PDF Downloads 72
917 Copolymers of Epsilon-Caprolactam Received via Anionic Polymerization in the Presence of Polypropylene Glycol Based Polymeric Activators

Authors: Krasimira N. Zhilkova, Mariya K. Kyulavska, Roza P. Mateva

Abstract:

The anionic polymerization of -caprolactam (CL) with bifunctional activators has been extensively studied as an effective and beneficial method of improving chemical and impact resistances, elasticity and other mechanical properties of polyamide (PA6). In presence of activators or macroactivators (MAs) also called polymeric activators (PACs) the anionic polymerization of lactams proceeds rapidly at a temperature range of 130-180C, well below the melting point of PA-6 (220C) permitting thus the direct manufacturing of copolymer product together with desired modifications of polyamide properties. Copolymers of PA6 with an elastic polypropylene glycol (PPG) middle block into main chain were successfully synthesized via activated anionic ring opening polymerization (ROP) of CL. Using novel PACs based on PPG polyols (with differ molecular weight) the anionic ROP of CL was realized and investigated in the presence of a basic initiator sodium salt of CL (NaCL). The PACs were synthesized as N-carbamoyllactam derivatives of hydroxyl terminated PPG functionalized with isophorone diisocyanate [IPh, 5-Isocyanato-1-(isocyanatomethyl)-1,3,3-trimethylcyclohexane] and blocked then with CL units via an addition reaction. The block copolymers were analyzed and proved with 1H-NMR and FT-IR spectroscopy. The influence of the CL/PACs ratio in feed, the length of the PPG segments and polymerization conditions on the kinetics of anionic ROP, on average molecular weight, and on the structure of the obtained block copolymers were investigated. The structure and phase behaviour of the copolymers were explored with differential scanning calorimetry, wide-angle X-ray diffraction, thermogravimetric analysis and dynamic mechanical thermal analysis. The crystallinity dependence of PPG content incorporated into copolymers main backbone was estimate. Additionally, the mechanical properties of the obtained copolymers were studied by notched impact test. 
From the investigations performed in this study, it can be concluded that using PPG-based PACs under the chosen ROP conditions leads to well-defined PA6-b-PPG-b-PA6 copolymers with improved impact resistance.

Keywords: anionic ring opening polymerization, caprolactam, polyamide copolymers, polypropylene glycol

Procedia PDF Downloads 387
916 Aeromagnetic Data Interpretation and Source Body Evaluation Using Standard Euler Deconvolution Technique in Obudu Area, Southeastern Nigeria

Authors: Chidiebere C. Agoha, Chukwuebuka N. Onwubuariri, Collins U. Amasike, Tochukwu I. Mgbeojedo, Joy O. Njoku, Lawson J. Osaki, Ifeyinwa J. Ofoh, Francis B. Akiang, Dominic N. Anuforo

Abstract:

In order to interpret the airborne magnetic data and evaluate the approximate location, depth, and geometry of the magnetic sources within the Obudu area using the standard Euler deconvolution method, very high-resolution aeromagnetic data over the area were acquired, processed digitally, and analyzed using Oasis Montaj 8.5 software. Data analysis and enhancement techniques, including reduction to the equator, horizontal derivative, first and second vertical derivatives, upward continuation, and regional-residual separation, were carried out for the purpose of detailed data interpretation. Standard Euler deconvolution for structural indices of 0, 1, 2, and 3 was also carried out, and the respective maps were obtained using the Euler deconvolution algorithm. Results show that the total magnetic intensity ranges from -122.9 nT to 147.0 nT, the regional intensity varies between -106.9 nT and 137.0 nT, while the residual intensity ranges between -51.5 nT and 44.9 nT, clearly indicating the masking effect of deep-seated structures over surface and shallow subsurface magnetic materials. Results also indicate that the positive residual anomalies have an NE-SW orientation, which coincides with the trend of the major geologic structures in the area. Euler deconvolution for all the considered structural indices yields depths to magnetic sources ranging from the surface to more than 2000 m. Interpretation of the various structural indices revealed the locations and depths of the source bodies and the existence of geologic models, including sills, dykes, pipes, and spherical structures. The area is characterized by intrusive and very shallow basement materials and represents an excellent prospect for solid mineral exploration and development.
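Standard Euler deconvolution, as used in this abstract, rests on Euler's homogeneity equation, (x - x0)∂T/∂x + (y - y0)∂T/∂y + (z - z0)∂T/∂z = N(B - T), solved by least squares within a moving window for the source position (x0, y0, z0) and background field B at a chosen structural index N. A minimal single-window sketch follows; the synthetic monopole field, grid geometry, and analytic gradients are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Synthetic monopole source: T = c / r, which satisfies Euler's
# homogeneity equation with structural index N = 1 and background B = 0.
src = np.array([10.0, 20.0, 5.0])  # true source position (x0, y0, z0)
c = 1000.0
N = 1.0

# Observation points on a small surface grid at z = 0 (z positive down).
xs, ys = np.meshgrid(np.linspace(0, 30, 8), np.linspace(0, 40, 8))
x, y = xs.ravel(), ys.ravel()
z = np.zeros_like(x)

dx, dy, dz = x - src[0], y - src[1], z - src[2]
r = np.sqrt(dx**2 + dy**2 + dz**2)
T = c / r
# Analytic gradients of T = c / r; in practice these come from
# measured or numerically differentiated grids.
Tx, Ty, Tz = -c * dx / r**3, -c * dy / r**3, -c * dz / r**3

# Euler's equation rearranged for the unknowns (x0, y0, z0, B):
#   x0*Tx + y0*Ty + z0*Tz + N*B = x*Tx + y*Ty + z*Tz + N*T
A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
b = x * Tx + y * Ty + z * Tz + N * T
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
print(sol[:3])  # estimated source location, close to [10, 20, 5]
```

In production workflows this least-squares solve is repeated over every window of the survey grid for each candidate structural index, and poorly conditioned solutions are rejected before the depth maps (such as those produced here in Oasis Montaj) are plotted.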

Keywords: Euler deconvolution, horizontal derivative, Obudu, structural indices

Procedia PDF Downloads 46
915 Lovely, Lyrical, Lilting: Kubrick’s Translation of Lolita’s Voice

Authors: Taylor La Carriere

Abstract:

“What I had madly possessed was not she, but my own creation, another, fanciful Lolita, perhaps more real than Lolita; overlapping, encasing her, and having no will, no consciousness, indeed, no life of her own,” Vladimir Nabokov writes in his seminal work, Lolita. Throughout Nabokov’s novel, the eponymous character is rendered nonexistent through the unreliable narrator Humbert Humbert’s impenetrable narrative, infused with lyrical rationalization. Instead, Lolita is “safely solipsized,” as Humbert muses, solidifying the potential for the erasure of Lolita’s agency and identity. In this literary work, Lolita’s voice is reduced to a nearly invisible presence, seen only through the eyes of her captor. However, in Stanley Kubrick’s film adaptation of Lolita (1962), the “nymphet,” as Nabokov coins her, reemerges with a voice of her own, fueled by a lyric impulse that displaces Humbert’s first-person narration. The lyric, as defined by Catherine Ing, is the voice of the invisible; it is also characterized by performance, the concentrated utterance of individual emotion, and the appearance of spontaneity. The novel’s lyricism is largely in the service of Humbert’s “seductive” voice, while the film reorients it toward Lolita’s subjectivity. Through a close analysis of Kubrick’s cinematic techniques, this paper examines the emergence and translation of Lolita’s voice in contrast with Humbert’s attempts to silence her in Nabokov’s Lolita, hypothesizing that Kubrick translates Lolita’s presence into a visual and aural voice with lyrical attributes, exemplified through the establishment of an altered power dynamic, Sue Lyon’s transformative performance as the titular character, Nelson Riddle and Bob Harris’s musical score, and the omission of Humbert’s first-person point of view.
In doing so, the film reclaims Lolita’s agency by taking instances of Lolita’s voice in the novel as depicted in the last half of the work and expanding upon them in a way only cinematic depictions could allow. The results of this study suggest that Lolita’s voice in Kubrick’s adaptation functions without disrupting the lyricism present in Nabokov’s source text, materializing through the actions, expressions, and performance of Sue Lyon in the film. This voice, fueled by a lyric impulse of its own, refutes the silence bestowed upon the titular character and enables its ultimate reclamation upon the silver screen.

Keywords: cinema, adaptation, Lolita, lyric voice

Procedia PDF Downloads 171