Search results for: linear complexity

1038 Food and Nutritional Security in the Context of Climate Change in Ethiopia: Using Household Panel Data

Authors: Aemro Tazeze Terefe, Mengistu K. Aredo, Abule M. Workagegnehu, Wondimagegn M. Tesfaye

Abstract:

Climate-induced shocks have been shown to reduce agricultural production and cause output fluctuations in developing countries. When livelihoods depend on rain-fed agriculture, climate-induced shocks translate into consumption shocks. Despite substantial improvements in household consumption, climate-induced shocks and other factors adversely affect consumption dynamics at the household level in Ethiopia. Understanding household consumption dynamics in the context of climate-induced shocks therefore helps to guide resilience capacity and to establish appropriate interventions and programs. The research employed three-round panel data from the Ethiopian Socioeconomic Survey together with spatial rainfall data to define unique measures of rainfall variability. The linear dynamic panel model results show that the lagged value of consumption, market shocks, and rainfall variability positively affected consumption dynamics, whereas production shocks, temperature, and the amount of rainfall had a negative relationship. Coping strategies mitigate the adverse effects of climate-induced shocks on consumption and smooth consumption over time. Support to increase the resilience capacity of households can involve efforts to strengthen existing livelihoods and forms of production or to reduce the vulnerability of households. Government interventions are therefore needed for asset accumulation agendas that support household coping strategies and responses to shocks. In addition, the dynamic linkage between consumption and significant socioeconomic and institutional factors should be taken into account to minimize the effect of climate-induced shocks on consumption dynamics.
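
For illustration, a minimal Python sketch of a linear dynamic panel regression of the kind described above, run on synthetic household data; the variable names, coefficients, and the simple fixed-effects (dummy-variable) estimator are assumptions for the example, not the study's specification:

```python
# Minimal sketch of a linear dynamic panel regression for household consumption.
# Variable names and the simple LSDV fixed-effects estimator are illustrative;
# a full dynamic panel study may instead use a GMM-type estimator.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_households, n_waves = 200, 3
panel = pd.DataFrame({
    "household": np.repeat(np.arange(n_households), n_waves),
    "wave": np.tile(np.arange(n_waves), n_households),
    "rain_var": rng.normal(size=n_households * n_waves),       # rainfall variability
    "market_shock": rng.binomial(1, 0.3, n_households * n_waves),
    "prod_shock": rng.binomial(1, 0.2, n_households * n_waves),
})
panel["log_cons"] = (8 + 0.2 * panel["rain_var"] - 0.3 * panel["prod_shock"]
                     + rng.normal(size=len(panel)))

# Lag consumption within each household to capture consumption dynamics.
panel = panel.sort_values(["household", "wave"])
panel["log_cons_lag"] = panel.groupby("household")["log_cons"].shift(1)

model = smf.ols(
    "log_cons ~ log_cons_lag + rain_var + market_shock + prod_shock + C(household)",
    data=panel.dropna(),
).fit()
print(model.params[["log_cons_lag", "rain_var", "market_shock", "prod_shock"]])
```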

Keywords: climate shock, Ethiopia, fixed-effect model, food security

Procedia PDF Downloads 101
1037 ISO 9001:2008 Effectiveness on the Performance of Public Organizations in Oman

Authors: Said Rashid Aal Abdulsallam

Abstract:

The purpose of this paper is to measure ISO 9001:2008 effectiveness and determine its impact on performance dimensions in terms of service quality, operational performance and customer satisfaction from the perspectives of both service providers and receivers. The paper is based on an empirical study carried out on all the ISO 9001:2008 certified departments in the Ministry of Education in the Sultanate of Oman. Data were obtained from the certified departments and their equivalent clients through two structured online questionnaires. Exploratory factor analyses are applied to extract the underlying factors of the indicators of ISO 9001 objectives and performance dimensions. Multiple linear regression analyses are also applied in order to determine the impact of ISO 9001 effectiveness on the performance dimensions of the certified departments. The study sample includes all the ISO 9001 certified departments in the Ministry of Education. The study instruments target both the service providers and the service receivers in order to alleviate the subjective nature of the data collected from service providers, who may be biased in favour of the ISO 9001 quality management system or their own performance. The findings of the study verify the effectiveness of the application of the ISO 9001:2008 quality management system. Additionally, the study reveals that the ISO 9001 certified departments have achieved the ISO 9001 standard's objectives, including prevention of nonconformities, continuous improvement and customer satisfaction focus, at different rates. The study also shows that there is a significant relation between the achievement of the ISO 9001 standard objectives and the operational performance of the departments. Even though the operational performance and service quality of the ISO 9001 certified departments have substantially improved from the perspective of the departments, customer satisfaction has not notably increased from the perspective of the service receivers.
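
As a hedged illustration of the two analysis steps (exploratory factor analysis followed by multiple linear regression), the following Python sketch uses synthetic questionnaire items; the item counts, number of factors and variable names are invented for the example:

```python
# Illustrative sketch (synthetic data): extract underlying factors from survey
# items, then regress a performance dimension on the extracted factor scores.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 300
latent = rng.normal(size=(n, 2))                    # two hypothetical ISO 9001 factors
items = latent @ rng.normal(size=(2, 8)) + 0.5 * rng.normal(size=(n, 8))

fa = FactorAnalysis(n_components=2, random_state=1)
scores = fa.fit_transform(items)                    # factor scores per respondent

# Regress e.g. operational performance on the extracted factors.
performance = 0.6 * scores[:, 0] + 0.2 * scores[:, 1] + rng.normal(scale=0.5, size=n)
X = sm.add_constant(pd.DataFrame(scores, columns=["factor1", "factor2"]))
print(sm.OLS(performance, X).fit().summary().tables[1])
```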

Keywords: iso 9001, customer satisfaction, operational performance, public organization, quality management

Procedia PDF Downloads 391
1036 Modelling the Effect of Biomass Appropriation for Human Use on Global Biodiversity

Authors: Karina Reiter, Stefan Dullinger, Christoph Plutzar, Dietmar Moser

Abstract:

Due to population growth and changing patterns of production and consumption, the demand for natural resources and, as a result, the pressure on Earth’s ecosystems are growing. Biodiversity mapping can be a useful tool for assessing species endangerment or detecting hotspots of extinction risks. This paper explores the benefits of using the change in trophic energy flows as a consequence of the human alteration of the biosphere in biodiversity mapping. To this end, multiple linear regression models were developed to explain species richness in areas where there is no human influence (i.e. wilderness) for three taxonomic groups (birds, mammals, amphibians). The models were then applied to predict (I) potential global species richness using the net primary production of potential natural vegetation (NPPpot) and (II) global ‘actual’ species richness after biomass appropriation using the NPP remaining in ecosystems after harvest (NPPeco). By calculating the difference between predicted potential and predicted actual species numbers, maps of estimated species richness loss were generated. Results show that biomass appropriation for human use can indeed be linked to biodiversity loss. Areas for which the models predicted high species loss coincide with areas where species endangerment and extinctions are recorded to be particularly high by the International Union for Conservation of Nature and Natural Resources (IUCN). Furthermore, the analysis revealed that while the species distribution maps of the IUCN Red List of Threatened Species used for this research can determine hotspots of biodiversity loss in large parts of the world, the classification system for threatened and extinct species needs to be revised to better reflect local risks of extinction.
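
A minimal Python sketch of the prediction-and-difference step described above, on synthetic grid cells; the regression form and all values are illustrative, not the paper's fitted models:

```python
# Sketch of the two-step prediction (synthetic data): fit a regression for
# species richness in wilderness cells, then predict richness under NPPpot and
# NPPeco and take the difference as estimated species richness loss.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_cells = 1000
npp_pot = rng.uniform(100, 1200, n_cells)            # NPP of potential vegetation
npp_eco = npp_pot * rng.uniform(0.4, 1.0, n_cells)   # NPP remaining after harvest
richness_wild = 5 + 0.08 * npp_pot + rng.normal(scale=10, size=n_cells)

model = LinearRegression().fit(npp_pot.reshape(-1, 1), richness_wild)
predicted_potential = model.predict(npp_pot.reshape(-1, 1))
predicted_actual = model.predict(npp_eco.reshape(-1, 1))
richness_loss = predicted_potential - predicted_actual   # per-cell estimated loss
print("mean estimated species loss per cell:", richness_loss.mean().round(2))
```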

Keywords: biodiversity loss, biomass harvest, human appropriation of net primary production, species richness

Procedia PDF Downloads 123
1035 Count Regression Modelling on Number of Migrants in Households

Authors: Tsedeke Lambore Gemecho, Ayele Taye Goshu

Abstract:

The main objective of this study is to identify the determinants of the number of international migrants in a household and to compare regression models for count responses. The study is based on data collected from a total of 2,288 household heads in 16 randomly sampled districts in the Hadiya and Kembata-Tembaro zones of Southern Ethiopia. The Poisson mixed model, a special case of the generalized linear mixed model, is explored to determine the effects of the predictors: age of household head, farm land size, and household size. Two ethnicities, Hadiya and Kembata, are included in the final model as dummy variables. Stepwise variable selection identified four predictors: age of head, farm land size, family size and the dummy variable ethnic2 (0 = other, 1 = Kembata). These predictors are significant at the 5% significance level for the count response, the number of migrants. The final Poisson mixed model consists of the four predictors with district-level random effects. The area-specific random effects are significant, with a variance of about 0.5105 and a standard deviation of 0.7145. The results show that the number of migrants increases with the head's age, family size, and farm land size. In conclusion, there is a significantly high number of international migrants per household in the area. Age of household head, family size, and farm land size are determinants that increase the number of international migrants in households. Community-based intervention is needed to monitor and regulate international migration for the benefit of the society.
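
For illustration, a Python sketch of a Poisson count regression with the four reported predictors, on synthetic data; the district random effects are approximated here by district fixed effects for simplicity, so this is a simplified stand-in for the paper's Poisson mixed model:

```python
# Minimal sketch (synthetic data) of a Poisson model for the number of migrants
# per household; district effects enter as fixed effects in this simplified version.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "head_age": rng.integers(25, 70, n),
    "farm_size": rng.uniform(0.1, 3.0, n),     # hectares, illustrative
    "family_size": rng.integers(2, 10, n),
    "ethnic2": rng.binomial(1, 0.5, n),        # 1 = Kembata, 0 = other
    "district": rng.integers(0, 16, n),
})
rate = np.exp(-3 + 0.02 * df.head_age + 0.3 * df.farm_size + 0.15 * df.family_size)
df["n_migrants"] = rng.poisson(rate)

fit = smf.glm(
    "n_migrants ~ head_age + farm_size + family_size + ethnic2 + C(district)",
    data=df, family=sm.families.Poisson(),
).fit()
print(fit.params[["head_age", "farm_size", "family_size", "ethnic2"]])
```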

Keywords: Poisson regression, GLM, number of migrant, Hadiya and Kembata Tembaro zones

Procedia PDF Downloads 275
1034 Exploring the Vocabulary and Grammar Advantage of US American over British English Speakers at Age 2;0

Authors: Janine Just, Kerstin Meints

Abstract:

The research aims to compare vocabulary size and grammatical development between US American English- and British English-speaking children at age 2;0. As there is evidence that precocious children with large vocabularies develop grammar skills earlier than their typically developing peers, it was investigated whether this also holds true across varieties of English. Thus, if US American children start to produce words earlier than their British counterparts, this could mean that US children are also at an advantage in the early developmental stages of acquiring grammar. This research employs a British English adaptation of the MacArthur-Bates CDI Words and Sentences (Lincoln Toddler CDI) to compare vocabulary and grammar scores with the updated US Toddler CDI norms. First, the Lincoln TCDI was assessed for its concurrent validity with the Preschool Language Scale (PLS-5 UK). This showed high correlations for the vocabulary and grammar subscales between the tests. In addition, the frequency of the Toddler CDI’s words was also compared using American and British English corpora of adult spoken and written language. A paired-samples t-test found a significant difference in word frequency between the British and the American CDI, demonstrating that the TCDI’s words were indeed of higher frequency in British English. We then compared language and grammar scores between US (N = 135) and British children (N = 96). A two-way between-groups ANOVA examined whether the two samples differed in terms of SES (i.e. maternal education) by investigating the impact of SES and country on vocabulary and sentence complexity. The two samples did not differ in terms of maternal education, as the interaction effects between SES and country were not significant. In most cases, scores were not significantly different between US and British children, for example, for overall word production and most grammatical subscales (i.e. use of words, over-regularizations, complex sentences, word combinations). However, in-depth analysis showed that US children were significantly better than British children at using some noun categories (i.e. people, objects, places) and several categories marking early grammatical development (i.e. pronouns, prepositions, quantifiers, helping words). However, the effect sizes were small. Significant differences for grammar were found for irregular word forms and progressive tense suffixes. US children were more advanced in their use of these grammatical categories, but the effect sizes were small. In sum, while differences exist in terms of vocabulary and grammar ability, favouring US children, effect sizes were small. It can be concluded that most British children are ‘catching up’ with their US American peers at age 2;0. Implications of this research will be discussed.
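
A brief Python sketch of the two tests mentioned above (a paired-samples t-test on word frequencies and a two-way between-groups ANOVA), using invented scores purely for illustration:

```python
# Sketch (synthetic values) of a paired t-test on per-word corpus frequencies
# and a two-way ANOVA of vocabulary score on country and maternal education.
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Paired t-test on hypothetical per-word frequencies in BE vs AE corpora.
freq_be = rng.lognormal(mean=3.0, sigma=1.0, size=400)
freq_ae = freq_be * rng.lognormal(mean=-0.1, sigma=0.2, size=400)
print(ttest_rel(freq_be, freq_ae))

# Two-way between-groups ANOVA: vocabulary ~ country * SES (maternal education).
n = 231  # 135 US + 96 UK children in the study
df = pd.DataFrame({
    "country": rng.choice(["US", "UK"], n),
    "ses": rng.choice(["low", "mid", "high"], n),
    "vocab": rng.normal(300, 80, n),
})
anova = sm.stats.anova_lm(smf.ols("vocab ~ C(country) * C(ses)", data=df).fit(), typ=2)
print(anova)
```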

Keywords: first language acquisition, grammar, parent report instrument, vocabulary

Procedia PDF Downloads 270
1033 Layer-By-Layer Deposition of Poly(Ethylene Imine) Nanolayers on Polypropylene Nonwoven Fabric: Electrostatic and Thermal Properties

Authors: Dawid Stawski, Silviya Halacheva, Dorota Zielińska

Abstract:

The surface properties of many materials can be readily and predictably modified by the controlled deposition of thin layers containing appropriate functional groups, and this research area is now a subject of widespread interest. The layer-by-layer (lbl) method involves depositing oppositely charged layers of polyelectrolytes onto the substrate material; the layers are stabilized by strong electrostatic forces between adjacent layers. This type of modification affords products that combine the properties of the original material with the superficial parameters of the new external layers. Through an appropriate selection of the deposited layers, the surface properties can be precisely controlled and readily adjusted in order to meet the requirements of the intended application. In the presented paper, a variety of anionic (poly(acrylic acid)) and cationic (linear poly(ethylene imine)) polymers were successfully deposited onto the polypropylene nonwoven using the lbl technique. The chemical structure of the surface before and after modification was confirmed by reflectance FTIR spectroscopy, volumetric analysis and selective dyeing tests. As a direct result of this work, new materials with greatly improved properties have been produced. For example, following the modification process, significant changes in the electrostatic activity of a range of novel nanocomposite materials were observed. The deposition of polyelectrolyte nanolayers was found to strongly accelerate the loss of electrostatically generated charges and to considerably increase the thermal resistance of the modified fabric (the difference in T50% is over 20°C). From our results, a clear relationship between the type of polyelectrolyte layer deposited onto the flat fabric surface and the properties of the modified fabric was identified.

Keywords: layer-by-layer technique, polypropylene nonwoven, surface modification, surface properties

Procedia PDF Downloads 426
1032 A Step Magnitude Haptic Feedback Device and Platform for Better Way to Review Kinesthetic Vibrotactile 3D Design in Professional Training

Authors: Biki Sarmah, Priyanko Raj Mudiar

Abstract:

In the modern world of remotely interactive virtual reality-based learning and teaching, including professional skill-building training and acquisition practices as well as data acquisition and robotic systems, the application of field-programmable neurostimulator aids and first-hand interactive sensitisation techniques in 3D holographic audio-visual platforms has been a coveted dream of many scholars, professionals, scientists, and students. Integration of kinaesthetic vibrotactile haptic perception along with an actuated step magnitude contact profiloscopy in augmented reality-based learning platforms and professional training can be implemented by using a carefully calculated and well-coordinated image telemetry scheme, including remote data mining and control techniques. A real-time, computer-aided (PLC-SCADA) field calibration-based algorithm must be designed for this purpose. Most importantly, in order to actually interact with 3D holographic models displayed over a remote screen using remote laser image telemetry and control, all spatio-physical parameters, such as cardinal alignment, gyroscopic compensation, surface profile and thermal composition, must be implemented using zero-order type 1 actuators (or transducers), because they provide zero hysteresis, zero backlash and low dead time, as well as a linear, fully controllable, intrinsically observable and smooth performance with the least error compensation, while ensuring the best possible ergonomic comfort for the users.

Keywords: haptic feedback, kinaesthetic vibrotactile 3D design, medical simulation training, piezo diaphragm based actuator

Procedia PDF Downloads 151
1031 'Performance-Based' Seismic Methodology and Its Application in Seismic Design of Reinforced Concrete Structures

Authors: Jelena R. Pejović, Nina N. Serdar

Abstract:

This paper presents an analysis of the “Performance-Based” seismic design method, intended to overcome the perceived disadvantages and limitations of the existing force-based seismic design approach in engineering practice. Bearing in mind the specificity of the earthquake as a load and the fact that the seismic resistance of a structure depends solely on its behaviour in the nonlinear range, the traditional seismic design approach based on force and linear analysis is not adequate. The “Performance-Based” seismic design method is based on nonlinear analysis and can be used in everyday engineering practice. This paper presents the application of this method to an eight-story reinforced concrete building with a combined structural system (a reinforced concrete frame system in one direction and a reinforced concrete ductile wall system in the other). Nonlinear time-history analysis is performed on a spatial model of the structure using the program Perform 3D, with the structure exposed to forty real earthquake records. For the considered building, a large number of results was obtained. It was concluded that, using this method, structural behaviour under earthquakes can be evaluated with a high degree of reliability. Significant differences were obtained in the response of the structure to the various earthquake records. The analysis also showed that the frame structural system did not perform well under earthquake records on soils such as sand and gravel, while the ductile wall system behaved satisfactorily on different types of soils.

Keywords: ductile wall, frame system, nonlinear time-history analysis, performance-based methodology, RC building

Procedia PDF Downloads 360
1030 Hardware-In-The-Loop Relative Motion Control: Theory, Simulation and Experimentation

Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini

Abstract:

This paper presents a Guidance and Control (G&C) strategy to address the spacecraft maneuvering problem for future Rendezvous and Docking (RVD) missions. The proposed strategy allows safe and propellant-efficient trajectories for space servicing missions, including tasks such as approaching, inspecting and capturing. This work provides the validation test results of the G&C laws using a Hardware-In-the-Loop (HIL) setup with two robotic mockups representing the chaser and the target spacecraft. The challenges of relative motion control in space are first summarized, in particular the constraints imposed by the mission, the spacecraft and the onboard processing capabilities. Second, the proposed algorithm is introduced by presenting the formulation of constrained Model Predictive Control (MPC) to optimize the fuel consumption and explicitly handle the physical and geometric constraints in the system, e.g. thruster or Line-Of-Sight (LOS) constraints. Additionally, the coupling between translational and rotational motion is addressed via a dual-quaternion-based kinematic description. The resulting convex optimization problem allows real-time implementation; the computational time requirements and the obtained results are discussed with respect to the onboard computer and future trends in space processor capabilities. Finally, the performance of the algorithm is presented in the scope of a potential future mission and of the available equipment. The results also include a comparison of the proposed algorithm with a linear-quadratic regulator (LQR) based control law to highlight the clear advantages of the MPC formulation.
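
To make the MPC idea concrete, here is a toy Python sketch of one constrained MPC solve for relative translational motion modelled as a discrete double integrator; it omits the paper's dual-quaternion coupling and mission-specific constraints, and the horizon, weights and thrust limit are invented:

```python
# Toy sketch of one constrained MPC solve for relative translational motion,
# modeled as a discrete double integrator with a fuel-like L1 control cost and
# a thruster saturation constraint. Requires cvxpy and numpy.
import numpy as np
import cvxpy as cp

dt, N = 1.0, 20                                   # step [s], horizon length
A = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
B = np.block([[0.5 * dt**2 * np.eye(2)], [dt * np.eye(2)]])

x0 = np.array([50.0, 10.0, 0.0, 0.0])             # relative position [m], velocity [m/s]
x = cp.Variable((4, N + 1))
u = cp.Variable((2, N))

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:2, k]) + 10 * cp.norm(u[:, k], 1)   # state + fuel terms
    constraints += [
        x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
        cp.norm(u[:, k], "inf") <= 0.1,           # thruster saturation
    ]
cost += 100 * cp.sum_squares(x[:, N])             # drive final state to the target

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("first control move:", np.round(u.value[:, 0], 4))
```

In a receding-horizon loop, only this first control move would be applied before re-solving with the newly measured relative state.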

Keywords: autonomous vehicles, embedded optimization, real-time experiment, rendezvous and docking, space robotics

Procedia PDF Downloads 119
1029 The Role of Social Capital and Dynamic Capabilities in a Circular Economy: Evidence from German Small and Medium-Sized Enterprises

Authors: Antonia Hoffmann, Andrea Stübner

Abstract:

Resource scarcity and rising material prices are forcing companies to rethink their business models. The conventional linear system of economic growth and rising social needs further exacerbates the problem of resource scarcity. Therefore, it is necessary to separate economic growth from resource consumption. This can be achieved through the circular economy (CE), which focuses on sustainable product life cycles. However, companies face challenges in implementing CE into their businesses. Small and medium-sized enterprises are particularly affected by these problems, as they have a limited resource base. Collaboration and social interaction between different actors can help to overcome these obstacles. Based on a self-generated sample of 1,023 German small and medium-sized enterprises, we use a questionnaire to investigate the influence of social capital and its three dimensions - structural, relational, and cognitive capital - on the implementation of CE and the mediating effect of dynamic capabilities in explaining these relationships. Using regression analyses and structural equation modeling, we find that social capital is positively associated with CE implementation and dynamic capabilities partially mediate this relationship. Interestingly, our findings suggest that not all social capital dimensions are equally important for CE implementation. We theoretically and empirically explore the network forms of social capital and extend the CE literature by suggesting that dynamic capabilities help organizations leverage social capital to drive the implementation of CE practices. The findings of this study allow us to suggest several implications for managers and institutions. From a practical perspective, our study contributes to building circular production and service capabilities in small and medium-sized enterprises. Various CE activities can transform products and services to contribute to a better and more responsible world.

Keywords: circular economy, dynamic capabilities, SMEs, social capital

Procedia PDF Downloads 77
1028 Compensation Strategies and Their Effects on Employees' Motivation and Organizational Citizenship Behaviour in Some Manufacturing Companies in Lagos, Nigeria

Authors: Ade Oyedijo

Abstract:

This paper reports the findings of a study on the strategic and organizational antecedents and effects of two opposing pay patterns used by some manufacturing companies in Lagos, Nigeria, with particular reference to the behavioural correlates of the pay strategies considered. The assumed relationship between pay strategies and some organizational correlates, such as business and corporate strategies and firm size, was considered problematic in view of their likely implications for employee motivation, citizenship behaviour and firm performance. The survey research method was used for the study. Structured, close-ended questions were used to collect primary data from the respondents. A multipart Likert scale was used to measure the pay orientations of the respondent firms and the job and organizational involvement of the respondent employees. Hierarchical linear regression and t-tests were used to analyze the data obtained from 48 manufacturing companies of various sizes and strategies, and the analysis identified the dominant pattern of employee compensation in the sampled manufacturing companies. The study also revealed that the choice of a pay strategy was strongly influenced by organizational size as well as by the type of business and corporate-level strategies adopted by a firm. Firms pursuing a strategy of related and unrelated diversification are more likely to adopt the algorithmic compensation system than single-product firms because of their relatively larger size and scope. However, firms that pursue a competitive advantage through a business-level strategy of cost efficiency are more likely to use the experiential, variable pay strategy. The study found that an algorithmic compensation strategy is as effective as an experiential compensation strategy in the promotion of organizational citizenship behaviour and motivation of employees.

Keywords: compensation, corporate strategy, business strategy, motivation, citizenship behaviour, algorithmic, experiential, organizational commitment, work environment

Procedia PDF Downloads 378
1027 Social Business Evaluation in Brazil: Analysis of Entrepreneurship and Investor Practices

Authors: Erica Siqueira, Adriana Bin, Rachel Stefanuto

Abstract:

The paper aims to identify and discuss the impact and results of ex-ante, mid-term and ex-post evaluation initiatives in Brazilian social enterprises from the point of view of entrepreneurs and investors, highlighting the processes involved in these activities and their aftereffects. The study was conducted using a descriptive, primarily qualitative methodology. A multiple-case study was used, and semi-structured interviews were conducted with ten entrepreneurs in the (i) social finance, (ii) education, (iii) health, (iv) citizenship and (v) green tech fields, as well as with three representatives of impact investors in the (i) venture capital, (ii) loan and (iii) equity interest areas. Convenience (non-probabilistic) sampling was adopted to select both businesses and investors, who voluntarily contributed to the research. Evaluation is still incipient in most of the studied businesses. Some stand out by adopting well-known methodologies like the Global Impact Investing Rating System (GIIRS), but still have a lot to improve in several aspects. Most of these enterprises use non-experimental research conducted by their own employees, which some authors in the area do not regard as the 'gold standard'. Nevertheless, from the entrepreneurs' point of view, most of them include these routines to some extent in their day-to-day activities, despite the difficulties the businesses face in general. In turn, the investors do not give overall directions for establishing evaluation initiatives in the enterprises they are funding; there is a mechanism of trust, and this is usually enough to prove the impact for all stakeholders. The work concludes that there is a large gap between what the literature states as best practice in these businesses and what the enterprises really do. Evaluation initiatives must be included to some extent in all enterprises in order to confirm the social impact that they claim to realize. It is recommended here to develop and adopt more flexible evaluation mechanisms that consider the complexity involved in these businesses' routines. The reflections of the research also suggest important implications for the field of social enterprises, whose practices are far from what the theory preaches. It highlights the risk to the legitimacy of enterprises that identify themselves as having 'social impact', sometimes without proper proof based on causality data. Consequently, this makes the field of social entrepreneurship fragile and susceptible to questioning, weakening the ecosystem as a whole. In this way, the top priorities of these enterprises must be handled together with results and impact measurement activities. Likewise, further investigations are recommended that consider the trade-offs between impact and profit. In addition, gender, the entrepreneurs' motivation to call themselves social enterprises, and the possible unintended consequences of these businesses should also be investigated.

Keywords: evaluation practices, impact, results, social enterprise, social entrepreneurship ecosystem

Procedia PDF Downloads 110
1026 Identification of Damage Mechanisms in Interlock Reinforced Composites Using a Pattern Recognition Approach of Acoustic Emission Data

Authors: M. Kharrat, G. Moreau, Z. Aboura

Abstract:

The latest advances in the weaving industry, combined with increasingly sophisticated means of materials processing, have made it possible to produce complex 3D composite structures. Mainly used in aeronautics, composite materials with 3D architecture offer better mechanical properties than 2D reinforced composites. Nevertheless, these materials require a good understanding of their behavior. Because of the complexity of such materials, the damage mechanisms are multiple, and the scenario of their appearance and evolution depends on the nature of the exerted solicitations. The AE technique is a well-established tool for discriminating between the damage mechanisms. Suitable sensors are used during the mechanical test to monitor the structural health of the material. Relevant AE features are then extracted from the recorded signals, followed by a data analysis using pattern recognition techniques. In order to better understand the damage scenarios of interlock composite materials, a multi-instrumentation setup was used in this work for tracking damage initiation and development, especially in the vicinity of the first significant damage, called macro-damage. The deployed instrumentation includes video-microscopy, Digital Image Correlation, Acoustic Emission (AE) and micro-tomography. In this study, a multi-variable AE data analysis approach was developed for the discrimination between the different signal classes representing the different emission sources during testing. An unsupervised classification technique was adopted to perform AE data clustering without a priori knowledge. The multi-instrumentation and the clustered data served to label the different signal families and to build a learning database. The latter is used to construct a supervised classifier that can be applied for automatic recognition of the AE signals. Several materials with different ingredients were tested under various solicitations in order to feed and enrich the learning database. The methodology presented in this work was useful for refining the damage threshold for the new generation of materials. The damage mechanisms around this threshold were highlighted. The obtained signal classes were assigned to the different mechanisms. The isolation of a 'noise' class makes it possible to discriminate between the signals emitted by damage without resorting to spatial filtering or increasing the AE detection threshold. The approach was validated on different material configurations. For the same material and the same type of solicitation, the identified classes are reproducible and only slightly disturbed. The supervised classifier constructed based on the learning database was able to predict the labels of the classified signals.
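
As a rough illustration of the unsupervised-then-supervised pipeline described above, the following Python sketch clusters synthetic AE descriptors and trains a classifier on the resulting labels; the feature set, cluster-count selection and classifier choice are assumptions, not the paper's exact method:

```python
# Illustrative pipeline (synthetic AE features): unsupervised clustering of
# acoustic-emission descriptors, then a supervised classifier trained on the
# labelled clusters for automatic recognition of new signals.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(5)
# Hypothetical AE features: amplitude [dB], duration [us], counts, peak frequency [kHz]
features = np.vstack([
    rng.normal([45, 200, 20, 120], [5, 50, 5, 15], size=(300, 4)),   # matrix-cracking-like
    rng.normal([70, 800, 80, 250], [5, 100, 10, 20], size=(150, 4)), # fibre-breakage-like
    rng.normal([55, 400, 40, 60], [5, 80, 8, 10], size=(200, 4)),    # noise-like
])
X = StandardScaler().fit_transform(features)

# Unsupervised step: choose the cluster count with the best silhouette score.
best_k = max(range(2, 6),
             key=lambda k: silhouette_score(X, KMeans(k, n_init=10, random_state=0).fit_predict(X)))
labels = KMeans(best_k, n_init=10, random_state=0).fit_predict(X)

# Supervised step: the labelled database trains a classifier for new signals.
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print("clusters:", best_k, " training accuracy:", clf.score(X, labels))
```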

Keywords: acoustic emission, classifier, damage mechanisms, first damage threshold, interlock composite materials, pattern recognition

Procedia PDF Downloads 151
1025 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends

Authors: Zheng Yuxun

Abstract:

This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to the adoption of advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing by discussing various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices, underscoring the necessity for their precise identification. The narrative transitions to the technological evolution in defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advancement towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for rapid processing that aligns with mass production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing velocities. Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.

Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis

Procedia PDF Downloads 29
1024 URM Infill in-Plane and out-of-Plane Interaction in Damage Evaluation of RC Frames

Authors: F. Longo, G. Granello, G. Tecchio, F. Da Porto

Abstract:

Unreinforced masonry (URM) infill walls are widely used throughout the world, including in seismic-prone regions, as partitions in reinforced concrete building frames. Even though they are not structural elements, they can dramatically affect both the strength and stiffness of RC structures by acting as a diagonal strut and modifying the shear and displacement distribution along the building height, with uncertain consequences for structural safety. In the last decades, many refined models have been developed to describe the effect of infill walls on frame structural behaviour, but these are generally restricted to in-plane actions. Only very recently have new approaches been implemented to consider the in-plane/out-of-plane interaction of URM infill walls in progressive collapse simulations. In the present work, a particularly promising macro-model was adopted for the progressive collapse analysis of infilled RC frames. The model makes it possible to consider the bi-directional interaction in terms of displacement and strength capacity for URM infills, and to remove the infill contribution when the URM wall is predicted to fail during the analysis. The model was calibrated on experimental data for two different URM panel thicknesses, modelling the post-critical softening branch with particular care. A frame specimen set representing the most common Italian structures was built considering two main normative approaches: a traditional design philosophy, corresponding to structures erected between the 1950s and 1980s and basically designed to support vertical loads, and a seismic design philosophy, corresponding to current criteria that take horizontal actions into account. Non-linear static analyses were carried out on the specimen set, and some preliminary evaluations were drawn in terms of the different performance exhibited by the RC frame when the concurrent effect of out-of-plane damage is considered for the URM infill.

Keywords: infill Panels macromodels, in plane-out of plane interaction, RC frames, URM infills

Procedia PDF Downloads 508
1023 All-Optical Gamma-Rays and Positrons Source by Ultra-Intense Laser Irradiating an Al Cone

Authors: T. P. Yu, J. J. Liu, X. L. Zhu, Y. Yin, W. Q. Wang, J. M. Ouyang, F. Q. Shao

Abstract:

A strong electromagnetic field with E > 10¹⁵ V/m can be supplied by an intense laser such as ELI and HiPER in the near future. When exposed to such a strong laser field, laser-matter interaction enters the near quantum electrodynamics (QED) regime, and highly non-linear physics may occur during the interaction. Recently, the multi-photon Breit-Wheeler (BW) process has attracted increasing attention because it is capable of producing abundant positrons and it enhances the positron generation efficiency significantly. Here, we propose an all-optical scheme for bright gamma-ray and dense positron generation by irradiating a 10²² W/cm² laser pulse onto an Al cone filled with near-critical-density plasma. Two-dimensional (2D) QED particle-in-cell (PIC) simulations show that the radiation damping force becomes large enough to compensate for the Lorentz force in the cone, causing radiation-reaction trapping of a dense electron bunch in the laser field. The trapped electrons oscillate in the laser electric field and emit high-energy gamma photons in two ways: (1) nonlinear Compton scattering due to the oscillation of electrons in the laser fields, and (2) Compton backscattering resulting from the bunch colliding with the laser reflected by the cone tip. The multi-photon Breit-Wheeler process is thus initiated, and abundant electron-positron pairs are generated with a positron density of ~10²⁷ m⁻³. The scheme is finally demonstrated by full 3D PIC simulations, which indicate a positron flux of up to 10⁹. This compact gamma-ray and positron source may have promising applications in the future.

Keywords: BW process, electron-positron pairs, gamma rays emission, ultra-intense laser

Procedia PDF Downloads 253
1022 HIV/AIDS Family Dysfunction Trajectories, Child Abuse and Psychosocial Problems among Adolescents

Authors: Paul Narh Doku

Abstract:

The relationship between parental HIV/AIDS status or death and child mental health is well known, although the role of child maltreatment as a confounder or mediator in this relationship remains uncertain. This study examined the potential path mechanism by which child maltreatment mediates the link between HIV/AIDS family dysfunction trajectories and psychosocial problems. A cross-sectional survey was conducted in the Lower Manya Municipal Assembly of Ghana. A questionnaire consisting of the Strengths and Difficulties Questionnaire (SDQ), Social and Health Assessment (SAHA), Rosenberg Self-Esteem Scale (RSES), and the Conflict Tactics Scale (CTS) was completed by 291 adolescents. Controlling for relevant sociodemographic confounders, mediation analyses using linear regression were fitted to examine whether the association between family dysfunction and psychosocial problems is mediated by child maltreatment. The results indicate that, among adolescents, child maltreatment fully mediated the association between being orphaned by AIDS and self-esteem, delinquency and risky behaviours, and peer problems. Similarly, child maltreatment fully mediated the association between living with an HIV/AIDS-infected parent and self-esteem, delinquency and risky behaviours, depression/emotional problems, and peer problems. Partial mediation was found for hyperactivity. Child maltreatment mediates the association between the family dysfunction trajectories of parental HIV/AIDS or death and psychosocial problems among adolescents. This implies that efforts to address child maltreatment among families affected by HIV/AIDS may be helpful in the prevention of psychosocial problems among these children, thus enhancing their well-being. The findings, therefore, underscore the need for comprehensive psychosocial interventions that address both the unique negative exposures of HIV/AIDS and maltreatment for children affected by HIV.
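
For illustration, a minimal Python sketch of a regression-based mediation check on synthetic data; the variable names and effect sizes are invented, and the study's actual models additionally control for sociodemographic confounders:

```python
# Sketch of a simple regression-based mediation check (synthetic data):
# family dysfunction -> child maltreatment -> psychosocial outcome.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 291
df = pd.DataFrame({"dysfunction": rng.binomial(1, 0.4, n)})
df["maltreatment"] = 0.8 * df.dysfunction + rng.normal(size=n)
df["depression"] = 0.7 * df.maltreatment + 0.05 * df.dysfunction + rng.normal(size=n)

total = smf.ols("depression ~ dysfunction", df).fit()                      # path c
mediator = smf.ols("maltreatment ~ dysfunction", df).fit()                 # path a
direct = smf.ols("depression ~ dysfunction + maltreatment", df).fit()      # paths c' and b

indirect = mediator.params["dysfunction"] * direct.params["maltreatment"]  # a * b
print("total effect:", round(total.params["dysfunction"], 3),
      "| direct effect:", round(direct.params["dysfunction"], 3),
      "| indirect (mediated) effect:", round(indirect, 3))
```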

Keywords: child maltreatment, child abuse, mental health, psychosocial problems, domestic violence, HIV/AIDS, adolescents

Procedia PDF Downloads 73
1021 Safety Tolerance Zone for Driver-Vehicle-Environment Interactions under Challenging Conditions

Authors: Matjaž Šraml, Marko Renčelj, Tomaž Tollazzi, Chiara Gruden

Abstract:

Road safety is a worldwide issue with numerous and heterogeneous factors influencing it. On one side, the driver state – comprising distraction/inattention, fatigue, drowsiness, extreme emotions, and socio-cultural factors – highly affects road safety. On the other side, the vehicle state has an important role in mitigating (or not) the road risk. Finally, the road environment is still one of the main determinants of road safety, defining driving task complexity. At the same time, thanks to technological development, a lot of detailed data is easily available, creating opportunities for the detection of driver state, vehicle characteristics and road conditions and, consequently, for the design of ad hoc interventions aimed at improving driver performance, increasing awareness and mitigating road risks. This is the challenge faced by the i-DREAMS project. i-DREAMS, which stands for a smart Driver and Road Environment Assessment and Monitoring System, is a 3-year project funded by the European Union’s Horizon 2020 research and innovation program. It aims to set up a platform to define, develop, test and validate a ‘Safety Tolerance Zone’ to prevent drivers from getting too close to the boundaries of unsafe operation by mitigating risks in real-time and after the trip. After the definition and development of the Safety Tolerance Zone concept and its concretization in an advanced driver-assistance system (ADAS) platform, the system was first tested for 2 months in a driving simulator environment in 5 different countries. After that, naturalistic driving studies started for a 10-month period (comprising a 1-month pilot study, a 3-month baseline study and a 6-month study implementing interventions). Currently, the project team has approved a common evaluation approach and is developing the assessment of the usage and outcomes of the i-DREAMS system, which is yielding positive insights. The i-DREAMS consortium consists of 13 partners: 7 engineering universities and research groups, 4 industry partners and 2 partners (the European Transport Safety Council - ETSC - and POLIS, cities and regions for transport innovation) closely linked to transport safety stakeholders, covering 8 different countries altogether.

Keywords: advanced driver assistant systems, driving simulator, safety tolerance zone, traffic safety

Procedia PDF Downloads 58
1020 Analysis of Factors Influencing the Response Time of an Aspirating Gaseous Agent Concentration Detection Method

Authors: Yu Guan, Song Lu, Wei Yuan, Heping Zhang

Abstract:

The gas fire extinguishing system is widely used due to its cleanliness and efficiency. Since its spray is affected by many factors, such as convection and obstacles in the jetting region, detecting the concentration distribution in the jetting area is indispensable for evaluating its effectiveness, and this is commonly achieved by the aspirating concentration detection technique. During the concentration measurement, the response time of the detector is a very important parameter, especially for fire-extinguishing systems with rapid gas dispersion. A long response time will not only lead to underestimating the concentration but also prolong the measured change of concentration with time. Therefore, it is necessary to analyze the factors influencing the response time. In this paper, an aspirating concentration detection method is introduced, which uses a small critical nozzle and a laminar flowmeter; because the response time is mainly related to the gas transport process from the sampling site to the sensor, the effects of exhaust pipe size, gas flow rate, and gas concentration on the response time were analyzed. During the research, bromotrifluoromethane (CBrF₃) was used. The effect of the sampling tube was investigated with different lengths of 1, 2, 3, 4 and 5 m (5 mm pipe diameter) and different pipe diameters of 3, 4, 5, 6 and 8 mm (3 m length). The effect of gas flow rate was analyzed by changing the throat diameter of the critical nozzle to 0.5, 0.682, 0.75, 0.8, 0.84 and 0.88 mm. The effect of gas concentration on response time was studied in the concentration range of 0-25%. The results showed that the response time increased with both the length and the diameter of the sampling pipe; the effect of length on response time was linear, whereas the effect of diameter was exponential. It was also found that as the throat diameter of the critical nozzle increased, the response time decreased considerably; in other words, the gas flow rate has a great influence on the response time. As for the effect of gas concentration, the response time increased with increasing CBrF₃ concentration, and the slope of the curve decreased.
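
A short Python sketch of fitting the two reported trends (response time roughly linear in tube length, exponential-like in tube diameter) to invented measurements; the numbers are placeholders, not the paper's data:

```python
# Sketch (synthetic measurements): fit a linear model of response time vs tube
# length and an exponential model of response time vs tube diameter.
import numpy as np
from scipy.optimize import curve_fit

lengths = np.array([1, 2, 3, 4, 5])                 # m, diameter fixed at 5 mm
t_length = np.array([0.8, 1.5, 2.1, 2.9, 3.6])      # s, illustrative values

diameters = np.array([3, 4, 5, 6, 8])               # mm, length fixed at 3 m
t_diam = np.array([0.9, 1.4, 2.2, 3.4, 8.1])        # s, illustrative values

slope, intercept = np.polyfit(lengths, t_length, 1)
print(f"linear fit vs length: t = {slope:.2f}*L + {intercept:.2f}")

popt, _ = curve_fit(lambda d, a, b: a * np.exp(b * d), diameters, t_diam, p0=(0.1, 0.5))
print(f"exponential fit vs diameter: t = {popt[0]:.3f}*exp({popt[1]:.2f}*d)")
```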

Keywords: aspirating concentration detection, fire extinguishing, gaseous agent, response time

Procedia PDF Downloads 263
1019 Applying GIS Geographic Weighted Regression Analysis to Assess Local Factors Impeding Smallholder Farmers from Participating in Agribusiness Markets: A Case Study of Vihiga County, Western Kenya

Authors: Mwehe Mathenge, Ben G. J. S. Sonneveld, Jacqueline E. W. Broerse

Abstract:

Smallholder farmers are important drivers of agricultural productivity, food security, and poverty reduction in Sub-Saharan Africa. However, they face myriad challenges in their efforts to participate in agribusiness markets. How the geographically explicit factors existing at the local level interact to impede smallholder farmers' decisions to participate (or not) in agribusiness markets is not well understood. Deconstructing the spatial complexity of the local environment could provide deeper insight into how geographically explicit determinants promote or impede resource-poor smallholder farmers' participation in agribusiness. This paper's objective was to identify, map, and analyze local spatial autocorrelation in the factors that impede poor smallholders from participating in agribusiness markets. Data were collected using geocoded, researcher-administered survey questionnaires from 392 households in Western Kenya. Three spatial statistics methods in a geographic information system (GIS) were used to analyze the data: Global Moran's I, Cluster and Outlier Analysis (Anselin Local Moran's I), and geographically weighted regression. The results of Global Moran's I reveal the presence of spatial patterns in the dataset that are not caused by spatial randomness. Subsequently, the Anselin Local Moran's I results identified spatially and statistically significant local clustering (hot spots and cold spots) in the factors hindering smallholder participation. Finally, the geographically weighted regression results unearthed the specific geographically explicit factors impeding market participation in the study area. The results confirm that geographically explicit factors are indispensable in influencing smallholder farming decisions, and policymakers should take cognizance of them. Additionally, this research demonstrated how geospatially explicit analysis conducted at the local level, using geographically disaggregated data, can help identify households and localities where the most impoverished and resource-poor smallholder households reside. In designing spatially targeted interventions, policymakers could benefit from geospatial analysis methods for understanding the complex geographic factors and processes that interact to influence smallholder farmers' decision-making processes and choices.
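
As a hedged sketch of the three spatial analyses named above, the following Python example uses synthetic household points and the PySAL ecosystem (libpysal, esda, mgwr); the data, weights choice and variables are assumptions, and the package calls are intended to follow their commonly documented usage:

```python
# Hedged sketch: Global Moran's I, Local Moran's I (LISA) and GWR on synthetic
# household points; variable names and the k-nearest-neighbour weights are
# illustrative choices, not the study's specification.
import numpy as np
from libpysal.weights import KNN
from esda.moran import Moran, Moran_Local
from mgwr.sel_bw import Sel_BW
from mgwr.gwr import GWR

rng = np.random.default_rng(7)
n = 392
coords = rng.uniform(0, 50, size=(n, 2))                        # household locations
distance_to_market = coords[:, 0] / 50 + rng.normal(0, 0.1, n)  # spatially structured factor
participation = 0.6 - 0.5 * distance_to_market + rng.normal(0, 0.1, n)

w = KNN.from_array(coords, k=8)
w.transform = "r"

# 1) Global Moran's I: is there spatial autocorrelation at all?
print("Global Moran's I:", round(Moran(participation, w).I, 3))

# 2) Local Moran's I (Anselin): where are the hot/cold spots?
lisa = Moran_Local(participation, w)
print("significant local clusters:", int((lisa.p_sim < 0.05).sum()))

# 3) Geographically weighted regression: locally varying coefficients.
coords_list = list(zip(coords[:, 0], coords[:, 1]))
y = participation.reshape(-1, 1)
X = distance_to_market.reshape(-1, 1)
bw = Sel_BW(coords_list, y, X).search()
gwr = GWR(coords_list, y, X, bw).fit()
print("local coefficient range:",
      gwr.params[:, 1].min().round(2), "to", gwr.params[:, 1].max().round(2))
```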

Keywords: agribusiness markets, GIS, smallholder farmers, spatial statistics, disaggregated spatial data

Procedia PDF Downloads 134
1018 Electrical Machine Winding Temperature Estimation Using Stateful Long Short-Term Memory Networks (LSTM) and Truncated Backpropagation Through Time (TBPTT)

Authors: Yujiang Wu

Abstract:

As electrical machine (e-machine) power density requirements become more stringent in vehicle electrification, mounting a temperature sensor for e-machine stator windings becomes increasingly difficult. This can lead to higher manufacturing costs, complicated harnesses, and reduced reliability. In this paper, we propose a deep-learning method for predicting electric machine winding temperature, which can either replace the sensor entirely or serve as a backup to the existing sensor. We compare the performance of our method, stateful long short-term memory networks (LSTM) with truncated backpropagation through time (TBPTT), with that of linear regression, as well as stateless LSTM with and without residual connections. Our results demonstrate the strength of combining stateful LSTM and TBPTT in tackling nonlinear time series prediction problems with long sequence lengths. Additionally, in industrial applications, high-temperature region prediction accuracy is more important, because winding temperature sensing is typically used to derate machine power when the temperature is high. To evaluate the performance of our algorithm, we developed a temperature-stratified MSE. We also propose a simple but effective data preprocessing trick to improve the high-temperature region prediction accuracy. Our experimental results demonstrate the effectiveness of the proposed method in accurately predicting winding temperature, particularly in high-temperature regions, while also reducing manufacturing costs and improving reliability.
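
A minimal PyTorch sketch of the core training idea (a stateful LSTM whose hidden state is carried across chunks but detached so that backpropagation is truncated), with an invented temperature-weighted loss standing in for the paper's temperature-stratified MSE; the data, layer sizes and weighting scheme are illustrative:

```python
# Minimal sketch: stateful LSTM with truncated backpropagation through time (TBPTT).
# A long drive cycle is split into chunks; the hidden state is carried forward
# but detached between chunks so gradients only flow within one chunk.
import torch
import torch.nn as nn

torch.manual_seed(0)
seq_len, chunk, n_feat = 2000, 100, 4          # long sequence split into TBPTT chunks
x = torch.randn(1, seq_len, n_feat)            # e.g. currents, speed, coolant temperature
y = torch.cumsum(0.01 * x[..., :1], dim=1)     # synthetic winding-temperature target

lstm = nn.LSTM(n_feat, 32, batch_first=True)
head = nn.Linear(32, 1)
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=1e-3)

hidden = None
for start in range(0, seq_len, chunk):
    xb, yb = x[:, start:start + chunk], y[:, start:start + chunk]
    out, hidden = lstm(xb, hidden)
    pred = head(out)

    # Temperature-weighted MSE: hot-region samples count double (illustrative).
    weights = 1.0 + (yb > yb.mean()).float()
    loss = (weights * (pred - yb) ** 2).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()

    # TBPTT: keep the state values but cut the gradient graph between chunks.
    hidden = tuple(h.detach() for h in hidden)

print("final chunk loss:", float(loss))
```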

Keywords: deep learning, electrical machine, functional safety, long short-term memory networks (LSTM), thermal management, time series prediction

Procedia PDF Downloads 86
1017 Development of Environmentally Clean Construction Materials Using Industrial Waste from Kazakhstan

Authors: Galiya Zhanzakovna Alzhanova, Yelaman Kanatovich Aibuldinov, Zhanar Baktybaevna Iskakova, Gaziz Galymovich Abdiyussupov, Madi Toktasynuly Omirzak, Aizhan Doldashevna Gazizova

Abstract:

The sustainable use of industrial waste has recently increased due to growing environmental problems at landfills. One of the best ways to utilise waste is as a road base material. Industrial waste is a less costly and more efficient way to strengthen local soils than introducing new additive materials. This study explored the feasibility of utilising red mud, blast furnace slag, and lime production waste to develop environmentally friendly construction materials for stabilising natural loam. Different ratios of red mud (20, 30, and 40%), blast furnace slag (25, 30, and 35%), lime production waste (4, 6, and 8%), and varied amounts of natural loam were combined to produce nine different mixtures. The results showed that the sample with 40% red mud, 35% blast furnace slag, and 8% lime production waste had the highest strength. The sample's measured compressive strength at 90 days was 7.38 MPa, its water resistance for the same period was 7.12 MPa, and its frost resistance for the same period was 7.35 MPa; its low linear expansion met the requirements of the Kazakh regulations for first-class building materials. The study of the mineral composition showed that there was no contamination with heavy metals or dangerous substances. A road base material made of a red mud, blast furnace slag, lime production waste, and natural loam mix can therefore be employed because of its durability and environmental performance. The chemical and mineral compositions of the raw materials were determined using X-ray diffraction, X-ray fluorescence, scanning electron microscopy, energy dispersive spectroscopy, and atomic absorption spectroscopy, and the axial compressive strength was also examined.

Keywords: blast furnace slag, lime production waste, natural loam stabilizing, red mud, road base material

Procedia PDF Downloads 97
1016 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, the knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the direction of arrival (DOA) of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problem due to local scattering environments. As to the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with massive antenna array sensors. To alleviate this difficulty, a generalized sidelobe canceller (GSC) with partial adaptivity, offering fewer adaptive degrees of freedom and faster adaptive response, has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems due to local scattering situations. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the required estimation for obtaining an appropriate steering vector. A matrix associated with the direction vector of the signal sources is first created. Then projection matrices related to this matrix are generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for performing adaptive beamforming can be easily found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms the existing robust techniques.
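
For orientation, a numpy sketch of the textbook GSC structure (quiescent weights, blocking matrix, adaptive weights) for a uniform linear array with a presumed steering direction; this shows only the baseline structure, not the paper's robust direction-estimation or subspace-projection steps:

```python
# Minimal numpy sketch of a generalized sidelobe canceller (GSC) for a uniform
# linear array: quiescent branch, blocking matrix, and closed-form adaptive weights.
import numpy as np

rng = np.random.default_rng(8)
M, N = 8, 2000                                   # sensors, snapshots
d = 0.5                                          # element spacing in wavelengths

def steer(theta_deg):
    k = np.arange(M)
    return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

a = steer(0.0)                                   # presumed desired direction
soi = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
interf = (rng.normal(size=N) + 1j * rng.normal(size=N)) * 2       # interferer at 40 deg
noise = (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) * 0.1
X = np.outer(steer(0.0), soi) + np.outer(steer(40.0), interf) + noise

w_q = a / (a.conj() @ a)                         # quiescent weight vector
# Blocking matrix: orthonormal basis of the null space of a^H (so B^H a = 0).
_, _, Vh = np.linalg.svd(a.conj().reshape(1, -1))
B = Vh[1:].conj().T                              # M x (M-1)

R = X @ X.conj().T / N                           # sample covariance
w_a = np.linalg.solve(B.conj().T @ R @ B, B.conj().T @ R @ w_q)
w = w_q - B @ w_a                                # overall GSC weights

y = w.conj() @ X                                 # beamformer output
print("output power:", float(np.mean(np.abs(y) ** 2).round(4)))
```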

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 105
1015 Integrating System-Level Infrastructure Resilience and Sustainability Based on Fractal: Perspectives and Review

Authors: Qiyao Han, Xianhai Meng

Abstract:

Urban infrastructures refer to the fundamental facilities and systems that serve cities. Due to global climate change and human activities in recent years, many urban areas around the world are facing enormous challenges from natural and man-made disasters, such as floods, earthquakes and terrorist attacks. For this reason, urban resilience to disasters has attracted increasing attention from researchers and practitioners. Given the complexity of infrastructure systems and the uncertainty of disasters, this paper suggests that studies of resilience could focus on urban functional sustainability (in social, economic and environmental dimensions) supported by infrastructure systems under disturbance. It is supposed that urban infrastructure systems with high resilience should be able to reconfigure themselves without significant declines in critical functions (services), such as primary productivity, hydrological cycles, social relations and economic prosperity. Although some methods have been developed to integrate the resilience and sustainability of individual infrastructure components, more work is needed to enable system-level integration. This research presents a conceptual analysis framework for integrating resilience and sustainability based on fractal theory. It is believed that the ability of an ecological system to maintain structure and function in the face of disturbance and to reorganize following disturbance-driven change is largely dependent on its self-similar and hierarchical fractal structure, in which cross-scale resilience is produced by the replication of ecosystem processes dominating at different levels. Urban infrastructure systems are analogous to ecological systems because they are interconnected, complex and adaptive, comprise interconnected components, and exhibit characteristic scaling properties. Therefore, analyzing the resilience of ecological systems provides a better understanding of the dynamics and interactions of infrastructure systems. This paper discusses fractal characteristics of ecosystem resilience, reviews literature related to system-level infrastructure resilience, identifies resilience criteria associated with sustainability dimensions, and develops a conceptual analysis framework. Exploration of the relevance of the identified criteria to fractal characteristics reveals that there is great potential to analyze infrastructure systems based on fractal theory. In the conceptual analysis framework, it is proposed that, in order to be resilient, an urban infrastructure system needs to be capable of “maintaining” and “reorganizing” multi-scale critical functions under disasters. Finally, the paper identifies areas where further research efforts are needed.

Keywords: fractal, urban infrastructure, sustainability, system-level resilience

Procedia PDF Downloads 264
1014 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Today's websites contain very interesting applications, but there are only a few methodologies for analyzing user navigation through a website and determining whether the website is being put to its intended use. Web logs are usually examined only when a major attack or malfunction occurs, yet they contain a great deal of interesting information about users' interactions with the system. Analyzing web logs has become a challenge due to the huge log volume. Finding interesting patterns is not easy because of the size and distribution of the logs and the importance of minor details in each entry. Web logs contain very important data about users and the site that have not been put to good use. Retrieving interesting information from logs gives an idea of what users need, makes it possible to group users according to their various needs, and helps improve the site to make it effective and efficient. The model we built is able to detect attacks, system malfunctions, and anomalies. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this solution, which is fully automated; expert knowledge is only used in validation. In our approach, the logs are first cleaned and purified to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files: a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, while the Web Sessions file lists the indices of each web session. The DBSCAN and EM algorithms are then used iteratively and recursively to obtain the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance, and the silhouette coefficient as evaluation parameters, these algorithms evaluate their own results and feed better parameter values back into subsequent runs. If a cluster is found to be too large, micro-clustering is used. Using the Cluster Signature Module, each cluster is annotated with a unique signature called a fingerprint. In this module, each cluster is fed to the Association Rule Learning Module; if it outputs confidence and support values of 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters, and if it is found to be unique to the cluster considered, the cluster is annotated with that signature. These signatures are used for anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications in finance, university websites, news and media websites, etc.
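As a concrete illustration of the clustering step described above, the following is a minimal Python sketch: sessions of indexed URLs are turned into bag-of-URL vectors, clustered with scikit-learn's DBSCAN, and scored with the silhouette coefficient, with noise points treated as anomalous sessions. The toy sessions and the eps/min_samples values are illustrative assumptions, not the paper's data or tuned parameters, and the EM pass and the signature module are omitted.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

# Each session is the list of URL indices taken from the Indexed URLs file.
sessions = [
    [0, 1, 2, 2, 3],
    [0, 1, 2, 3],
    [0, 1, 3],
    [5, 6, 7],
    [5, 6, 6, 7],
    [8, 9],
]
n_urls = 10

def session_to_vector(session, n_urls):
    """Bag-of-URLs representation: how often each indexed URL appears in the session."""
    v = np.zeros(n_urls)
    for url_idx in session:
        v[url_idx] += 1
    return v

X = np.array([session_to_vector(s, n_urls) for s in sessions])

labels = DBSCAN(eps=1.5, min_samples=2).fit_predict(X)   # -1 marks noise / anomalous sessions
mask = labels != -1
if len(set(labels[mask])) > 1:
    score = silhouette_score(X[mask], labels[mask])       # one metric used to tune eps/min_samples
    print("cluster labels:", labels, "silhouette:", round(score, 3))
else:
    print("cluster labels:", labels)
```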

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 279
1013 Targeting and Developing the Remaining Pay in an Ageing Field: The Ovhor Field Experience

Authors: Christian Ihwiwhu, Nnamdi Obioha, Udeme John, Edward Bobade, Oghenerunor Bekibele, Adedeji Awujoola, Ibi-Ada Itotoi

Abstract:

Understanding the complexity in the distribution of hydrocarbon in a simple structure with flow baffles and connectivity issues is critical to targeting and developing the remaining pay in a mature asset. Subtle facies changes (heterogeneity) can have a drastic impact on reservoir fluid movement, and this can be crucial to identifying sweet spots in mature fields. This study aims to evaluate selected reservoirs in the Ovhor Field, Niger Delta, Nigeria, with the objective of optimising production from the field by targeting undeveloped oil reserves and bypassed pay and gaining an improved understanding of the selected reservoirs to increase the company's reservoir limits. The task at the Ovhor field is complicated by poor stratigraphic seismic resolution over the field. 3-D geological (sedimentology and stratigraphy) interpretation, results from quantitative interpretation, and a proper understanding of production data have been used to recognize flow baffles and undeveloped compartments in the field. The full-field 3-D model has been constructed so as to capture the heterogeneities and the various compartments in the field, to aid proper simulation of fluid flow for future production prediction, proper history matching, and the design of well trajectories that adequately target undeveloped oil. Reservoir property models (porosity, permeability, and net-to-gross) have been constructed by biasing log-interpreted properties to a defined environment-of-deposition model whose interpretation captures the heterogeneities expected in the studied reservoirs. At least two scenarios have been modelled for most of the studied reservoirs to capture the range of uncertainties involved. The total original oil-in-place volume for the four reservoirs studied is 157 MMstb. The cumulative oil and gas production from the selected reservoirs is 67.64 MMstb and 9.76 Bscf, respectively, with a current production rate of about 7035 bopd and 4.38 MMscf/d (as at 31/08/2019). Dynamic simulation and production forecasting on the four reservoirs gave undeveloped reserves of about 3.82 MMstb from two identified oil restoration activities: side-tracking and re-perforation of existing wells. This integrated approach led to the identification of bypassed oil in some areas of the selected reservoirs and an improved understanding of the studied reservoirs. New wells have been and are being drilled to test the results of our studies, and the results so far are confirmatory and satisfying.

Keywords: facies, flow baffle, bypassed pay, heterogeneities, history matching, reservoir limit

Procedia PDF Downloads 121
1012 Sensitive Electrochemical Sensor for Simultaneous Detection of Endocrine Disruptors, Bisphenol A and 4-Nitrophenol Using La₂Cu₂O₅ Modified Glassy Carbon Electrode

Authors: S. B. Mayil Vealan, C. Sekar

Abstract:

Bisphenol A (BIS A) and 4-Nitrophenol (4-N) are among the most prevalent environmental endocrine-disrupting chemicals; they mimic hormones and have a direct relationship to the development and growth of animal and human reproductive systems. Moreover, intensive exposure to these compounds is related to prostate and breast cancer, infertility, obesity, and diabetes. Hence, accurate and reliable determination techniques are crucial for preventing human exposure to these harmful chemicals. Lanthanum copper oxide (La₂Cu₂O₅) nanoparticles were synthesized and investigated through various techniques such as scanning electron microscopy, high-resolution transmission electron microscopy, X-ray diffraction, X-ray photoelectron spectroscopy, and electrochemical impedance spectroscopy. Cyclic voltammetry and square wave voltammetry were employed to evaluate the electrochemical behavior of the as-synthesized samples toward the detection of Bisphenol A and 4-Nitrophenol. Under optimal conditions, the oxidation current increased linearly with the concentration of BIS A and 4-N in the range of 0.01 to 600 μM, with detection limits of 2.44 nM and 3.8 nM, respectively. These are the lowest limits of detection and the widest linear ranges reported in the literature for this determination. The method was applied to the simultaneous determination of BIS A and 4-N in real samples (food packaging materials and river water) with excellent recovery values ranging from 95% to 99%. Its good stability, sensitivity, selectivity, reproducibility, fast response, and ease of preparation make the sensor well suited for the simultaneous determination of Bisphenol A and 4-Nitrophenol. To the best of our knowledge, this is the first report in which La₂Cu₂O₅ nanoparticles were used as efficient electron mediators for the fabrication of endocrine disruptor (BIS A and 4-N) chemical sensors.
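For readers unfamiliar with how figures like these are derived, the following is a minimal sketch of the calibration arithmetic behind a voltammetric sensor of this kind: the peak current is fitted linearly against concentration, and the limit of detection is estimated as three times the blank standard deviation divided by the slope. All numbers in the sketch are invented for illustration and are not the measurements reported in this abstract.

```python
import numpy as np

# Hypothetical calibration data: analyte concentration (uM) vs. peak current (uA).
conc_uM = np.array([0.01, 0.1, 1, 10, 100, 300, 600])
current_uA = np.array([0.02, 0.11, 1.05, 10.2, 99.0, 301.0, 597.0])
blank_sd_uA = 0.0008                                   # std. dev. of the blank signal

slope, intercept = np.polyfit(conc_uM, current_uA, 1)  # least-squares linear calibration
lod_uM = 3 * blank_sd_uA / slope                       # LOD = 3*sigma_blank / sensitivity
print(f"sensitivity = {slope:.3f} uA/uM, LOD = {lod_uM * 1000:.2f} nM")
```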

Keywords: endocrine disruptors, electrochemical sensor, food contact materials, lanthanum cuprates, nanomaterials

Procedia PDF Downloads 77
1011 Understanding the Challenges of Lawbook Translation via the Framework of Functional Theory of Language

Authors: Tengku Sepora Tengku Mahadi

Abstract:

Where the speed of book writing lags behind the high demand for such material in tertiary studies, translation offers a way to restore the equilibrium in this demand-supply equation. Nevertheless, translation is confronted by obstacles that threaten its effectiveness. The primary challenge to the production of efficient translations may well be related to the text-type and its complexity. A text that is intricately written with unique rhetorical devices, a particular subject-matter foundation and cultural references will undoubtedly challenge the translator; longer time and greater effort would be the consequence. To understand these text-related challenges, the present paper sets out to analyze a lawbook entitled Learning the Law by David Melinkoff. The book was chosen because it has often been used as a textbook or reference in many law courses in the United Kingdom and has seen over thirteen editions; therefore, it can be said to be a worthy book for studies in law. Another reason is the existence of a ready translation in Malay. Reference to this translation enables confirmation, to some extent, of the potential problems that might occur in its translation. Understanding the organization and the language of the book will help translators to prepare themselves better for the task. They can anticipate the research and time that may be needed to produce an effective translation. Another premise here is that this text-type implies certain ways of writing and organization. Accordingly, it seems practicable to adopt the functional theory of language suggested by Michael Halliday as the theoretical framework. Concepts of the context of culture and the context of situation, together with the measures of field, tenor and mode, form the instruments for analysis. Additional examples from similar materials can also be used to validate the findings. Some interesting findings include the presence of several other text-types or sub-text-types in the book and the dependence on literary discourse and devices to capture the meanings better or add color to the dry field of law. In addition, many elements of culture can be seen, for example, the use of familiar alternatives, allusions, and even terminology and references that date back to various periods of time and languages. Also found are parts which discuss the origins of words and terms that may be relevant to readers within the United Kingdom but make little sense to readers of the book in other languages. In conclusion, the textual analysis in terms of its functions, and the linguistic and textual devices used to achieve them, can then be applied as a guide to determine the effectiveness of the translation that is produced.

Keywords: functional theory of language, lawbook text-type, rhetorical devices, culture

Procedia PDF Downloads 135
1010 Production, Extraction and Purification of Fungal Chitosan and Its Modification for Medical Applications

Authors: Debajyoti Bose

Abstract:

Chitosan has received much attention as a functional biopolymer for diverse applications, especially in pharmaceutics and medicine. Chitosan is a positively charged, natural, biodegradable and biocompatible polymer. It is a linear polysaccharide consisting of β-1,4-linked monomers of glucosamine and N-acetylglucosamine. Chitosan can be obtained mainly from fungal sources during large-scale fermentation processes. In this study, three different fungal strains, Aspergillus niger NCIM 1045, Aspergillus oryzae NCIM 645 and Mucor indicus MTCC 3318, were used for the production of chitosan, and the growth media were optimized for maximum fungal production. The produced chitosan was characterized by determining its degree of deacetylation. Chitosan possesses a reactive amino group at the C-2 position of the glucosamine residue, and these amines confer important functional properties to chitosan that can be exploited in biofabrication to generate various chemically modified derivatives and to explore their potential in the pharmaceutical field. Chitosan nanoparticles were prepared by ionic cross-linking with tripolyphosphate (TPP). The encapsulation and release of a protein (the enzyme diastase) in chitosan-TPP nanoparticles were investigated in order to control the loading and release efficiency. It was noted that, for chitosan obtained from the different fungal sources, the loading and release efficiency of the nanocapsules was close to the initial enzyme activity (12,026 U/ml), with negligible loss. This signifies that chitosan can be used as a polymeric drug as well as an active-component or protein-carrier material in dosage design, owing to its appealing properties such as biocompatibility, biodegradability, low toxicity and relatively low production cost from abundant natural sources. Based on these initial experiments, studies were also carried out on the modification of chitosan-based nanocapsules incorporating physiologically important enzymes and nutraceuticals for targeted delivery.

Keywords: fungi, chitosan, enzyme, nanocapsule

Procedia PDF Downloads 490
1009 The Effect of Artificial Intelligence on Banking Development and Progress

Authors: Mina Malak Hanna Saad

Abstract:

Advanced information technology has become a vital factor in the development of the financial services industry, especially banking. It has introduced new ways of delivering banking services to the customer, including online banking. Banks have begun to regard electronic banking (e-banking) as a way to replace some conventional branch functions by using the Internet as a new distribution channel. Many customers hold at least one account at more than one bank and access those accounts through online banking. To assess their current net worth, such customers need to log into each of their accounts, retrieve the relevant details, and work toward consolidation. This is not only time-consuming but also a repetitive activity performed with a certain frequency. To solve this problem, the concept of account aggregation was introduced as a solution. Account aggregation in e-banking, as one form of digital banking, appears to build a stronger relationship with customers. An account aggregation service generally refers to a service that allows customers to manage their bank accounts held at different institutions through a common online banking platform, with a high priority placed on security and data protection. This paper provides an overview of the account aggregation approach in e-banking as a distinct service in the e-banking field.
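The service described above is essentially an adapter layer over several institutions' back-ends. The following is a minimal Python sketch of that idea: each bank exposes a connector, and the aggregation platform consolidates the customer's balances into a single net-worth view. The connector interface, bank names, and stubbed balances are illustrative assumptions, not details from the paper; a real implementation would add authentication, customer consent, and encrypted transport.

```python
from dataclasses import dataclass
from typing import List, Protocol

@dataclass
class Account:
    bank: str
    account_id: str
    balance: float

class BankConnector(Protocol):
    """Adapter each institution would implement (e.g. via its open-banking API)."""
    def fetch_accounts(self, customer_id: str) -> List[Account]: ...

class StubBank:
    def __init__(self, name: str, balances: dict):
        self.name, self.balances = name, balances

    def fetch_accounts(self, customer_id: str) -> List[Account]:
        # A real connector would call the bank's API over an authenticated channel.
        return [Account(self.name, acc, bal) for acc, bal in self.balances.items()]

def aggregate_net_worth(connectors: List[BankConnector], customer_id: str) -> float:
    """Consolidate balances held at different institutions into one figure."""
    accounts = [a for c in connectors for a in c.fetch_accounts(customer_id)]
    return sum(a.balance for a in accounts)

banks = [StubBank("Bank A", {"chk-001": 1200.0}), StubBank("Bank B", {"sav-042": 5300.5})]
print(aggregate_net_worth(banks, customer_id="cust-123"))   # 6500.5
```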

Keywords: compatibility, complexity, mobile banking, observation, risk, banking technology, Internet banks, modernization of banks, banks, account aggregation, security, enterprise development, e-banking

Procedia PDF Downloads 5