Search results for: bilingual advantage
473 Effects of Magnetization Patterns on Characteristics of Permanent Magnet Linear Synchronous Generator for Wave Energy Converter Applications
Authors: Sung-Won Seo, Jang-Young Choi
Abstract:
The rare earth magnets used in synchronous generators offer many advantages, including high efficiency and greatly reduced size and weight. The permanent magnet linear synchronous generator (PMLSG) allows for direct drive without the need for a mechanical conversion device. Therefore, the PMLSG is well suited to translational applications, such as wave energy converters and free-piston energy converters. This manuscript compares the effects of different magnetization patterns on the characteristics of double-sided PMLSGs with slotless stator structures. The Halbach array has a higher air-gap flux density than the vertical array, and the advantages of its performance and efficiency are widely known. To verify the advantage of the Halbach array, we apply both a finite element method (FEM) and an analytical method. In general, the FEM and the analytical method are used in electromagnetic analysis to determine model characteristics, and the FEM is preferred for magnetic field analysis. However, the FEM is often slow and inflexible, whereas the analytical method requires little time and produces an accurate analysis of the magnetic field. Therefore, the air-gap flux density and the back-EMF are obtained with the analytical method and verified by FEM; the results from the two methods correspond well. The model with the Halbach array shows less copper loss than the model with the vertical array because of the Halbach array's higher output power density. The model with the vertical array has lower core loss than the model with the Halbach array because of its lower air-gap flux density. Accordingly, the current density in the vertical model is higher for identical power output.
The completed manuscript will include the magnetic field characteristics and structural features of both models, comparing various results, and a specific comparative analysis will be presented to determine the best model for application in a wave energy conversion system.
Keywords: wave energy converter, permanent magnet linear synchronous generator, finite element method, analytical method
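As a rough back-of-envelope illustration of why the air-gap flux density matters for the back-EMF (every number below is hypothetical and the formulas are generic textbook approximations, not the paper's model), one can combine a first-order estimate of the peak field of an ideal segmented Halbach array with the flux-cutting rule e = N·B·l·v:

```python
import math

def halbach_peak_B(Br, thickness, wavelength, segments_per_wavelength):
    # First-order peak surface field of an ideal segmented Halbach array
    # (generic textbook approximation; all parameters hypothetical).
    k = 2 * math.pi / wavelength
    M = segments_per_wavelength
    return Br * (1 - math.exp(-k * thickness)) * math.sin(math.pi / M) / (math.pi / M)

def back_emf(n_turns, B, active_length, speed):
    # Flux-cutting rule e = N*B*l*v for a conductor moving through the field.
    return n_turns * B * active_length * speed

# Hypothetical NdFeB magnets (Br = 1.2 T), 10 mm thick, 60 mm spatial
# wavelength, 4 segments per wavelength, translator speed 1 m/s.
B_gap = halbach_peak_B(1.2, 0.01, 0.06, 4)
e_peak = back_emf(100, B_gap, 0.05, 1.0)
```

A vertical array with a lower B plugged into the same `back_emf` call yields a proportionally lower EMF, which is the performance gap the abstract quantifies with FEM.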
Procedia PDF Downloads 300
472 Changing Patterns of Marriage and Sexual Relations among Young Single Female Workers in Garment Factories in Gazipur, Bangladesh
Authors: Runa Laila
Abstract:
In Bangladesh, migration and employment opportunities in ready-made garment factories have presented an alternative to early and arranged marriage for many young women from the countryside. Although the positive impact of young women's labour migration and employment in the garment industry on economic independence, negotiation power, and self-esteem has been well documented, the impact of employment on sexual norms and practices has remained under-researched. This ethnographic study, comprising in-depth interviews with 21 single young women working in various garment factories in Gazipur, Dhaka, explores the implications of work for sexual norms and practices. The study found that young single garment workers experience a range of consensual and coercive sexual relations. The mixed-sex work environment in the garment manufacturing industry and private housing arrangements give young single women opportunities to develop romantic and sexual relationships in the transient urban space, which were more restricted in rural areas. The use of mobile phones further helps lovers meet in amusement parks, friends' houses, or residential hotels beyond the gaze of colleagues and neighbors. Due to a sexual double standard, men's sexual advantage is seen as natural and accepted, while women are blamed as immoral for engaging in pre-marital sex. Although self-choice marriage and premarital relations are reported to be common among garment workers, the stigma attached to premarital sex leads young single women to resort to secret abortion practices. Married men also use their position of power to lure women in subordinate positions into coercive sexual relations, putting their reproductive and psychological health at risk.
To improve the sexual and reproductive health and wellbeing of young female garment workers, it is important to understand these changing sexual practices, which otherwise remain taboo in public health discourses.
Keywords: female migration, ready-made garment, reproductive health, sexual practice
Procedia PDF Downloads 186
471 PhenoScreen: Development of a Systems Biology Tool for Decision Making in Recurrent Urinary Tract Infections
Authors: Jonathan Josephs-Spaulding, Hannah Rettig, Simon Graspeunter, Jan Rupp, Christoph Kaleta
Abstract:
Background: Recurrent urinary tract infections (rUTIs) are a global cause of emergency room visits and represent a significant burden for public health systems. Therefore, metatranscriptomic approaches to investigate metabolic exchange and crosstalk between uropathogenic Escherichia coli (UPEC), which is responsible for 90% of UTIs, and collaborating pathogens of the urogenital microbiome are necessary to better understand the pathogenetic processes underlying rUTIs. Objectives: This study aims to determine the extent to which uropathogens exploit the host urinary metabolic environment to succeed during invasion. By developing patient-specific metabolic models of infection, these observations can be taken advantage of for the precision treatment of human disease. Methods: To date, we have set up an rUTI patient cohort and observed various urine-associated pathogens. From this cohort, we developed patient-specific metabolic models to predict bladder microbiome metabolism during rUTIs. This was done by creating an in silico metabolomic urine environment representative of human urine. Metabolic models of uptake and cross-feeding of rUTI pathogens were created from genomes in relation to the artificial urine environment. Finally, microbial interactions were constrained by metatranscriptomics to indicate patient-specific metabolic requirements of pathogenic communities. Results: Metabolite uptake and cross-feeding are essential for strain growth; therefore, we plan to design patient-specific treatments by adjusting urinary metabolites through nutritional regimens, counteracting uropathogens by depleting essential growth metabolites. These methods will provide mechanistic insights into the metabolic components of rUTI pathogenesis and an evidence-based tool for infection treatment.
Keywords: recurrent urinary tract infections, human microbiome, uropathogenic Escherichia coli, UPEC, microbial ecology
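A minimal toy of the depletion idea (the strains, metabolites, and stoichiometries below are invented for illustration, not the cohort's models): growth continues only while every essential metabolite remains available in the simulated urine medium, so depleting one essential compound caps the pathogen's biomass.

```python
def grow(medium, requirements, max_steps=1000, rate=1.0):
    # Advance growth step by step; stop when any essential metabolite
    # in the medium can no longer cover one step's uptake.
    biomass = 0.0
    for _ in range(max_steps):
        if all(medium.get(m, 0.0) >= need * rate for m, need in requirements.items()):
            for m, need in requirements.items():
                medium[m] -= need * rate
            biomass += rate
        else:
            break
    return biomass

# Hypothetical urine medium and uptake stoichiometry (per unit biomass).
urine = {"urea": 50.0, "citrate": 10.0, "iron": 2.0}
upec_needs = {"urea": 1.0, "iron": 0.5}
biomass = grow(dict(urine), upec_needs)  # iron is exhausted after 4 steps
```

Restricting dietary precursors of the limiting metabolite (here, the hypothetical iron pool) is the nutritional-regimen lever the abstract describes.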
Procedia PDF Downloads 134
470 Experimental Investigation of Nano-Enhanced-PCM-Based Heat Sinks for Passive Thermal Management of Small Satellites
Authors: Billy Moore, Izaiah Smith, Dominic Mckinney, Andrew Cisco, Mehdi Kabir
Abstract:
Phase-change materials (PCMs) are considered among the most promising substances for passive thermal management and storage systems in spacecraft, where it is critical to minimize the overall mass of the onboard thermal storage system while limiting temperature fluctuations caused by drastic changes in environmental temperature along the orbit. This makes the development of effective thermal management systems more challenging, since there is no atmosphere in outer space to take advantage of natural or forced convective heat transfer. A PCM can store or release a tremendous amount of thermal energy within a small volume as the latent heat of fusion during melting and solidification, processes during which the temperature remains almost constant. However, existing PCMs exhibit very low thermal conductivity, leading to an undesirable increase in total thermal resistance and, consequently, a slow thermal response time. This often becomes a system bottleneck from the thermal performance perspective. To address this drawback, the present study aims to design and develop various heat sinks featuring nano-structured graphitic foams (i.e., carbon foam), expanded graphite (EG), and open-cell copper foam (OCCF) infiltrated with a conventional paraffin wax PCM with a melting temperature of around 35 °C. The study focuses on passive thermal management techniques for efficient heat sinks that maintain the temperature of electronic circuits and battery modules within thermal safety limits for small spacecraft and satellites, such as the Pumpkin and OPTIMUS battery modules designed for CubeSats with a cross-sectional area of approximately 4˝×4˝. Thermal response times of the various heat sinks are assessed in a vacuum chamber to simulate space conditions.
Keywords: heat sink, porous foams, phase-change material (PCM), spacecraft thermal management
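The latent-heat argument can be sketched numerically (the property values below are ballpark figures for a paraffin-like wax, assumed rather than taken from the study): heating the PCM through its 35 °C melting point absorbs sensible heat below the melt, the latent heat of fusion, and sensible heat above it.

```python
def pcm_energy_absorbed(mass_kg, T_start, T_end, T_melt=35.0,
                        cp_solid=2100.0, cp_liquid=2400.0, latent=200e3):
    # J absorbed heating from T_start to T_end across the melting point.
    # cp in J/(kg*K), latent heat in J/kg -- assumed paraffin-like values.
    q = cp_solid * mass_kg * max(0.0, min(T_end, T_melt) - T_start)
    if T_end > T_melt >= T_start:
        q += latent * mass_kg        # latent heat of fusion at the melt
    q += cp_liquid * mass_kg * max(0.0, T_end - max(T_start, T_melt))
    return q

# 0.5 kg of PCM heated from 20 C to 45 C on a hypothetical CubeSat heat sink:
q_total = pcm_energy_absorbed(0.5, 20.0, 45.0)
```

Of the roughly 128 kJ absorbed, about 100 kJ comes from the latent term alone, which is why PCMs buffer temperature swings so effectively near the melting point.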
Procedia PDF Downloads 12
469 Hermitical Landscapes: The Congregation of Saint Paul of Serra De Ossa
Authors: Rolando Volzone
Abstract:
The Congregation of Saint Paul of Serra de Ossa (Ossa Mountain) was founded in 1482, originating from the eremitic movement of the homens da pobre vida (poor life men), documented since 1366. The community of hermits expanded up to the first half of the 15th century, mostly in southern Portugal in the Alentejo region. In 1578, following a process of institutionalization led by the Church, an autonomous congregation was set up, affiliated with the Hungarian Order of Saint Paul the First Hermit, until 1834, when the decree dissolving the religious orders disbanded all convents and monasteries in Portugal. The architectural evidence that has reached our days as a legacy of the hermitical movement in Serra de Ossa, although studied and analysed from a historical point of view, is still little known with respect to the architectural characteristics of its physical implantation and its relationship with natural systems. This research intends to expose the appropriation process of the locus eremus as a starting point for the interpretation of this landscape, evidencing the close relationship between the religious experience and the physical space chosen to reach the perfection of the soul. The locus eremus is thus determined not only by practical aspects, such as absolute and relative location, orography, the existence of water resources, or the King's support for the religious and settlement activity of the hermits, but also by spiritual aspects related to the symbolism of the physical elements present and the solitary walk of these men. These aspects, combined with the built architectural elements and other human interventions, may be fertile ground for the definition of a hypothetical hermitical landscape based on the sufficiently distinctive characteristics that sustain it. The landscape built by these hermits is established as cultural and material heritage, and its preservation is of utmost importance.
The hermits deeply understood this place and took advantage of its natural resources, managing them in an ecologically and economically sustainable way and respecting the place, not overriding its genius loci but becoming part of it.
Keywords: architecture, congregation of Saint Paul of Serra de Ossa, hermitical landscape, locus eremus
Procedia PDF Downloads 233
468 The Influence of Characteristics of Waste Water on Properties of Sewage Sludge
Authors: Catalina Iticescu, Lucian P. Georgescu, Mihaela Timofti, Gabriel Murariu, Catalina Topa
Abstract:
In the field of environmental protection, the EU, and Romania as a member state, impose strict and clear rules that must be respected, among them mandatory municipal wastewater treatment. Our study involved the Municipal Wastewater Treatment Plant (MWWTP) of Galati. The MWWTP began its activity at the end of 2011, and its technology is one of the most modern used in the EU; moreover, to our knowledge, it is the first technology of this kind used in the region. Until commissioning, municipal wastewater was discharged directly into the Danube without any treatment. Besides the benefits of depollution, a new problem has arisen: the accumulation of increasingly large amounts of sewage sludge. It is therefore extremely important to find economically feasible and environmentally friendly disposal solutions. One of the most feasible methods of disposing of sewage sludge is its use on agricultural land. Sewage sludge can be used in agriculture if monitored in terms of physico-chemical properties (pH, nutrients, heavy metals, etc.), so that it does not contribute to soil pollution or disturb chemical and biological balances, which are relatively fragile. In this paper, 16 physico-chemical parameters were monitored. Experimental tests were performed on wastewater samples, the resulting sewage sludge, and treated water samples. Testing was conducted with electrochemical methods (pH, conductivity, TDS); the parameters N-total (mg/L), P-total (mg/L), N-NH4 (mg/L), N-NO2 (mg/L), N-NO3 (mg/L), Fe-total (mg/L), Cr-total (mg/L), Cu (mg/L), Zn (mg/L), Cd (mg/L), Pb (mg/L), and Ni (mg/L) were determined by spectrophotometric methods using a NOVA 60 spectrophotometer and specific kits. Analyzing the results, we concluded that the sewage sludge contains heavy metals only in small quantities and will not affect the land on which it will be deposited. The amount of nutrients it contains is also appreciable.
These features indicate that the sludge can be safely used in agriculture, with the advantage that it represents a cheap fertilizer. Acknowledgement: This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation – UEFISCDI, PNCDI III project, 79BG/2017, Efficiency of the technological process for obtaining of sewage sludge usable in agriculture, Efficient.
Keywords: municipal wastewater, physico-chemical properties, sewage sludge, technology
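The monitoring logic amounts to comparing each measured parameter against a limit value; a minimal sketch (the limits below are placeholders, not the regulatory thresholds applied in the study):

```python
# Placeholder limit values in mg/kg dry matter -- illustrative only,
# not the Romanian/EU regulatory limits used by the authors.
LIMITS_MG_KG = {"Cd": 10.0, "Pb": 300.0, "Ni": 100.0, "Zn": 2500.0, "Cu": 500.0}

def screen_sludge(measured):
    # Return the parameters whose measured value exceeds its limit.
    return [m for m, value in measured.items()
            if m in LIMITS_MG_KG and value > LIMITS_MG_KG[m]]

sample = {"Cd": 1.2, "Pb": 45.0, "Ni": 30.0, "Zn": 600.0}  # hypothetical sludge
violations = screen_sludge(sample)  # empty list -> safe for agricultural use
```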
Procedia PDF Downloads 209
467 Primary Study of the Impact of the Riverfront Urban Transformations Inside Egyptian Cities in Future Urban Design Process: Case Study of North Asyut City
Authors: Islam Abouelhamd
Abstract:
Rivers have long been recognized as one of the most important natural resources. They are essential to human health, civilization, and sustainable development, and the importance of rivers as the focal points of cities was established in the early times of civilization and will remain so. The urban design of riverfronts has been an issue of wide concern and extensive discussion since the 1970s. Cities seek a riverfront that is a place of public enjoyment, with ample visual and physical public access to both the water and the land, a place that contributes to the quality of life in all of its aspects: economic, social, and cultural. On the other hand, successful urban design of a riverfront requires an understanding of development processes, of the dimensions of urban design, and an appreciation of the distinctiveness of riverfront locations. A close association between cities and rivers is inherent in the history of civilization, and in fact many urban centers in Egypt are located close to the Nile, always trying to use the land closest to the river to take advantage of the benefits it provides. In spite of the significant role played by littoral fronts in the life of the city, riverfronts in Egypt, and especially in Asyut city, have remained generally neglected. Based on the knowledge gained from the literature review, a review of case studies, and historical research on the Asyut riverfront, this research aims to identify the urban transformations of the Asyut riverfront and to anticipate the opportunities and challenges that will shape future urban design issues and research, especially in the case study area (the northern areas of the Asyut riverfront).
The case study data, historical framework, and international experiences were then collected and analyzed to produce primary indicators of the expected riverfront urban design process inside the case study area, in addition to the conclusions of the theoretical framework and the recommendations of the paper.
Keywords: civilization, sustainable development, riverfront, urban transformations
Procedia PDF Downloads 176
466 Alternation of Executive Power and Democratic Governance in Nigeria: The Role of Independent National Electoral Commission, 1999-2014
Authors: J. Tochukwu Omenma
Abstract:
A buzzword in Nigeria is that democracy has "come to stay". Politicians, in their usual euphoria, consider democracy already consolidated in the country. They link this assumption to three fundamental indicators: (a) a multiparty system; (b) regular elections; and (c) the absence of a military coup after 15 years of democracy in Nigeria. Beyond this assumption, we intend to verify these claims empirically, relying on Huntington's conceptualization of democratic consolidation. Huntington asserts that multipartism, regular elections, and the absence of any major obstacle leading to a reversal of democracy are significant indicators of democratic consolidation, but the presence of those indicators must result in an alternation of executive power for democratic consolidation to occur. In other words, the regular conduct of elections and the existence of multiple political parties are not enough for democratic consolidation without free and fair elections. Past elections were characterized by massive fraud and irregularities, casting doubt on the integrity of the electoral management body (EMB) to conduct free and fair elections in Nigeria. Three existing perspectives have offered responses to the erosion of EMB independence. The first, and more popular, position indicates that the incumbent party, more than the opposition, influences EMB activities with the aim of rigging elections; the second, more radical perspective suggests that the weakening of EMB power is more associated with the weakest party than with the incumbent; and the last holds that godfathers are in direct control of EMB members, thereby steering the electoral process to the godfathers' advantage.
With empirical evidence sourced from the reports of independent election monitors (the European Union Election Observation Mission in Nigeria), this paper shows for different electoral periods that, in terms of influencing election outcomes, the incumbent and the godfathers have been more associated with influencing election results than the opposition. The existing nature of executive power in Nigeria provides a plausible explanation for the incumbent's overbearing influence, thereby limiting the opportunity for free and fair elections and, by extension, undermining the process of democratic consolidation in Nigeria.
Keywords: political party, democracy, democratic consolidation, election, godfatherism
Procedia PDF Downloads 491
465 Evaluation of a Piecewise Linear Mixed-Effects Model in the Analysis of Randomized Cross-over Trial
Authors: Moses Mwangi, Geert Verbeke, Geert Molenberghs
Abstract:
Cross-over designs are commonly used in randomized clinical trials to estimate the efficacy of a new treatment with respect to a reference treatment (placebo or standard). The main advantage of a cross-over design over a conventional parallel design is its flexibility: every subject becomes its own control, thereby reducing confounding effects. Jones & Kenward discuss in detail more recent developments in the analysis of cross-over trials. We revisit the simple piecewise linear mixed-effects model proposed by Mwangi et al. (in press) for its first application in the analysis of cross-over trials. We compared the performance of the proposed piecewise linear mixed-effects model with two commonly cited statistical models, namely (1) the Grizzle model and (2) the Jones & Kenward model, in estimating the treatment effect in the analysis of a randomized cross-over trial. We estimated two performance measures (mean square error (MSE) and coverage probability) for the three methods, using data simulated from the proposed piecewise linear mixed-effects model. The piecewise linear mixed-effects model yielded the lowest MSE estimates compared to the Grizzle and Jones & Kenward models for both small (Nobs=20) and large (Nobs=600) sample sizes. Its coverage probability was highest compared to the Grizzle and Jones & Kenward models for both small and large sample sizes. A piecewise linear mixed-effects model is therefore a better estimator of the treatment effect than its two competitors (the Grizzle and Jones & Kenward models) in the analysis of cross-over trials. The data generating mechanism used in this paper captures two time periods for a simple 2-Treatments x 2-Periods cross-over design. Its application is extendible to more complex cross-over designs with multiple treatments and periods.
In addition, it is important to note that, even for single-response models, adding more random effects increases the complexity of the model, which may therefore be difficult or impossible to fit in some cases.
Keywords: evaluation, Grizzle model, Jones & Kenward model, performance measures, simulation
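The simulation logic behind the MSE comparison can be sketched for the simple 2×2 (AB/BA) design (the variance components, sample sizes, and the plain Grizzle-type estimator below are illustrative choices, not the paper's exact models):

```python
import random

def simulate_2x2_estimate(n_per_seq=10, effect=1.0, sd_subj=2.0, sd_err=1.0):
    # Within-subject period differences cancel the subject random effect;
    # the treatment effect is half the difference of the two sequence means.
    d_ab = []
    for _ in range(n_per_seq):                 # sequence AB: treatment, placebo
        u = random.gauss(0.0, sd_subj)
        d_ab.append((effect + u + random.gauss(0.0, sd_err))
                    - (u + random.gauss(0.0, sd_err)))
    d_ba = []
    for _ in range(n_per_seq):                 # sequence BA: placebo, treatment
        u = random.gauss(0.0, sd_subj)
        d_ba.append((u + random.gauss(0.0, sd_err))
                    - (effect + u + random.gauss(0.0, sd_err)))
    return (sum(d_ab) / n_per_seq - sum(d_ba) / n_per_seq) / 2.0

def mse(n_rep=2000, effect=1.0):
    random.seed(0)
    errors = [(simulate_2x2_estimate(effect=effect) - effect) ** 2
              for _ in range(n_rep)]
    return sum(errors) / n_rep
```

With sd_err = 1 and 10 subjects per sequence, the theoretical variance of this estimator is 2·sd_err²·(1/10 + 1/10)/4 = 0.1, so mse() should land near 0.1; repeating the exercise with data generated from a piecewise linear mean profile is how the paper's comparison of the three estimators proceeds.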
Procedia PDF Downloads 122
464 Biocompatible Beta Titanium Alloy Ti36Nb6Ta as a Suitable Material for Bone Regeneration
Authors: Vera Lukasova, Eva Filova, Jana Dankova, Vera Sovkova, Matej Daniel, Michala Rampichova
Abstract:
Proper bone implants should promote fast adhesion of cells, stimulate cell differentiation, and support the formation of bone tissue. Nowadays, titanium is used as a biocompatible material capable of bone tissue integration. This study compared the bioactive properties of two titanium alloys: the beta titanium alloy Ti36Nb6Ta and the standard medical titanium alloy Ti6Al4V. The main advantage of the beta titanium alloy Ti36Nb6Ta is that it does not contain adverse elements such as vanadium or aluminium. The titanium alloys were sterilized in ethanol, placed into 48-well plates, and seeded with porcine mesenchymal stem cells. The cells were cultivated for 14 days in standard growth cultivation media with osteogenic supplements. Cell metabolic activity was quantified using the MTS assay (Promega). Cell adhesion on day 1 and cell proliferation on further days were verified immunohistochemically using a beta-actin monoclonal antibody and a secondary antibody conjugated with AlexaFluor®488. Differentiation of the cells was evaluated using an alkaline phosphatase assay. Additionally, gene expression of collagen I was measured by qRT-PCR. The porcine mesenchymal stem cells adhered and spread well on the beta titanium alloy Ti36Nb6Ta on day 1. Over the 14-day period, the cells spread to confluence on the surface of the beta titanium alloy Ti36Nb6Ta. The metabolic activity of the cells increased during the whole cultivation period. In comparison to the standard medical titanium alloy Ti6Al4V, we did not observe any differences. Moreover, the expression of the collagen I gene revealed no statistical differences between the two titanium alloys. Therefore, the beta titanium alloy Ti36Nb6Ta promotes cell adhesion, metabolic activity, proliferation, and collagen I expression equally to the standard medical titanium alloy Ti6Al4V. Thus, beta titanium is a suitable material with sufficient biocompatible properties. This project was supported by the Czech Science Foundation: grant No. 16-14758S.
Keywords: beta titanium alloy, biocompatibility, differentiation, mesenchymal stem cells
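For the qRT-PCR readout, relative expression is conventionally computed with the 2^(−ΔΔCt) method; a minimal sketch (the Ct values below are hypothetical, not the study's measurements):

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # 2^-ddCt: normalize the target gene to a reference gene in both the
    # sample and the control condition, then compare the two conditions.
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Collagen I on Ti36Nb6Ta vs. the Ti6Al4V control (hypothetical Ct values);
# a fold change near 1.0 would mirror the study's "no difference" finding.
fc = fold_change(22.0, 18.0, 22.1, 18.0)
```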
Procedia PDF Downloads 494
463 The Effects of Leadership on the Claim of Responsibility
Authors: Katalin Kovacs
Abstract:
In most forms of violence, the perpetrators intend to hide their identities. Terrorism is different. Terrorist groups often take responsibility for their attacks, and consequently they reveal their identities. This unique characteristic of terrorism has been largely overlooked, and scholars are still puzzled as to why terrorist groups claim responsibility for their attacks. Certainly, the claim of responsibility is worth analysing. It would help to develop a clearer picture of what terrorist groups try to achieve and how, but also an understanding of the strategic planning of terrorist attacks and the message the terrorists intend to deliver. The research aims to answer the question of why terrorist groups choose to claim responsibility for some of their attacks and not for others. In order to do so, the claim of responsibility is considered a tactical choice, based on the assumption that terrorists weigh the costs and benefits of claiming responsibility. The main argument is that terrorist groups do not claim responsibility in cases where no tactical advantage is gained from doing so. The idea that the claim of responsibility has tactical value offers the opportunity to test these assertions in a large-scale empirical analysis. The claim of responsibility as a tactical choice depends on other tactical choices, such as the choice of target, the internationality of the attack, the number of victims, and whether the group occupies territory or operates underground. The structure of the terrorist group and the level of decision making also affect the claim of responsibility. Terrorists on the lower levels are less disciplined than the leaders. This means that terrorists on the lower levels pay less attention to the strategic objectives and engage more readily in indiscriminate violence; consequently, they are less likely to claim responsibility.
Therefore, the research argues that terrorists at the highest level of decision making are the ones who claim responsibility for attacks, as they are the ones who take the strategic objectives into account. Because most studies on terrorism fail to provide definitions, the research in this field is fragmented and incomparable; separate, isolated studies do not support comprehensive thinking. It is also important to note that only a few studies use quantitative methods. The aim of this research is to develop a new and comprehensive overview of the claim of responsibility based on strong quantitative evidence. Using well-established definitions and operationalisation, the current research focuses on a broad range of attributes that can have tactical value, in order to determine the circumstances under which terrorists are more likely to claim responsibility.
Keywords: claim of responsibility, leadership, tactical choice, terrorist group
Procedia PDF Downloads 313
462 Study on Control Techniques for Adaptive Impact Mitigation
Authors: Rami Faraj, Cezary Graczykowski, Błażej Popławski, Grzegorz Mikułowski, Rafał Wiszowaty
Abstract:
Progress in the fields of sensors, electronics, and computing results in increasingly frequent application of adaptive techniques for dynamic response mitigation. When it comes to systems excited by mechanical impacts, the control system has to take into account the significant limitations of the actuators responsible for system adaptation. The paper provides a comprehensive discussion of the problem of appropriate design and implementation of adaptation techniques and mechanisms. Two case studies are presented in order to compare completely different adaptation schemes. The first example concerns a double-chamber pneumatic shock absorber with a fast piezoelectric valve and parameters corresponding to the suspension of a small unmanned aerial vehicle, whereas the second system considered is a safety air cushion used for the evacuation of people from heights during a fire. For both systems it is possible to ensure adaptive performance, but the realization of the system's adaptation is completely different. The reason for this lies in the technical limitations of the specific types of shock-absorbing devices and their parameters. Impact mitigation using a pneumatic shock absorber involves much higher pressures and small mass flow rates, which can be achieved with a minimal change of valve opening. In turn, mass flow rates in safety air cushions relate to gas release areas counted in thousands of square centimeters. Because of these facts, the two shock-absorbing systems are controlled with completely different approaches. The pneumatic shock absorber takes advantage of real-time control, with the valve opening recalculated at least every millisecond. In contrast, the safety air cushion is controlled using a semi-passive technique, where adaptation is provided through prediction of the entire impact mitigation process. Similarities of the two approaches, including the applied models, algorithms, and equipment, are discussed.
The entire study is supported by numerical simulations and experimental tests, which prove the effectiveness of both adaptive impact mitigation techniques.
Keywords: adaptive control, adaptive system, impact mitigation, pneumatic system, shock-absorber
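The real-time strategy can be caricatured in a few lines (the orifice model, gain, and all parameters are invented for illustration; the actual controller and system identification are far richer): every millisecond, the valve opening is nudged so that the measured deceleration tracks a constant target, spreading the impact energy over the available stroke.

```python
def damping_force(v, opening):
    # Toy orifice model (hypothetical): force grows with speed and falls
    # as the valve opens.
    return 400.0 * v / (0.5 + opening)

def control_loop(v0=5.0, mass=10.0, target_decel=30.0, dt=1e-3, gain=0.02):
    v, opening, openings = v0, 0.5, []
    while v > 0.01:                       # stop once nearly at rest
        decel = damping_force(v, opening) / mass
        # Proportional update, clamped to the valve's physical range.
        opening = min(1.0, max(0.05, opening + gain * (decel - target_decel)))
        v -= decel * dt                   # explicit Euler step, 1 ms period
        openings.append(opening)
    return openings

history = control_loop()
```

The semi-passive air-cushion strategy, by contrast, would compute one opening schedule up front from a prediction of the whole impact and then hold it, rather than updating inside the loop.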
Procedia PDF Downloads 90
461 Seismic Isolation of Existing Masonry Buildings: Recent Case Studies in Italy
Authors: Stefano Barone
Abstract:
Seismic retrofit of buildings through base isolation represents a consolidated protection strategy against earthquakes. It consists in decoupling the motion of the structure from the ground motion by introducing anti-seismic devices at the base of the building, characterized by high horizontal flexibility and medium/high dissipative capacity. This protects the structural elements and limits damage to non-structural ones, so full functionality is guaranteed after an earthquake. Base isolation is applied extensively to both new and existing buildings. For the latter, it usually requires no interruption of the building's use or evacuation of its occupants, a special advantage for strategic buildings such as schools, hospitals, and military buildings. This paper describes the application of seismic isolation to three existing masonry buildings in Italy: Villa "La Maddalena" in Macerata (Marche region) and the "Giacomo Matteotti" and "Plinio Il Giovane" school buildings in Perugia (Umbria region). The seismic hazard of the sites is characterized by a Peak Ground Acceleration (PGA) of 0.213g-0.287g for the Life Safety Limit State and 0.271g-0.359g for the Collapse Limit State. All the buildings are isolated with a combination of free sliders of type TETRON® CD with a confined elastomeric disk and anti-seismic rubber isolators of type ISOSISM® HDRB, arranged to reduce the eccentricity between the center of mass and the center of stiffness and thus limit torsional effects during a seismic event. The isolation systems are designed to lengthen the original period of vibration (i.e., without isolators) by at least three times and to guarantee medium/high energy dissipation capacity (equivalent viscous damping between 12.5% and 16%). This allows the structures to resist 100% of the seismic design action. The article presents the performance of the supplied anti-seismic devices, with particular attention to their experimental dynamic response.
Finally, a special focus is given to the main site activities required to isolate a masonry building.
Keywords: retrofit, masonry buildings, seismic isolation, energy dissipation, anti-seismic devices
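The "lengthen the period by at least three times" rule maps directly onto the required horizontal isolator stiffness through T = 2π√(m/k); a back-of-envelope sketch with a hypothetical building mass and fixed-base period (neither taken from the retrofits described):

```python
import math

def isolator_stiffness(mass_kg, target_period_s):
    # Total horizontal stiffness k of the isolation layer, from
    # T = 2*pi*sqrt(m/k) solved for k.
    return mass_kg * (2.0 * math.pi / target_period_s) ** 2

mass = 2.0e6                   # kg, hypothetical 2000-tonne masonry building
fixed_base_T = 0.6             # s, assumed fixed-base fundamental period
target_T = 3.0 * fixed_base_T  # the abstract's minimum 3x lengthening
k_total = isolator_stiffness(mass, target_T)   # N/m summed over all devices
```

A longer period shifts the building down the response spectrum, which (together with the 12.5%-16% equivalent damping quoted) is what lets the isolated structure resist the full design action.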
Procedia PDF Downloads 71
460 Seismic Evaluation of Multi-Plastic Hinge Design Approach on RC Shear Wall-Moment Frame Systems against Near-Field Earthquakes
Authors: Mohsen Tehranizadeh, Mahboobe Forghani
Abstract:
The impact of higher modes on the seismic response of a dual structural system consisting of a concrete moment-resisting frame and RC shear walls is investigated against near-field earthquakes in this paper. A 20-story reinforced concrete shear wall-special moment frame structure is designed in accordance with ASCE 7 requirements, and a nonlinear model of the structure is built on the OpenSees platform. Nonlinear time history dynamic analyses with 3 near-field records are performed on it. In order to further understand the structural collapse behavior in the near field, the response of the structure at the moment of collapse, especially the formation of plastic hinges, is explored. The results reveal that, because higher modes amplify the moment at the top of the wall, a plastic hinge can form in the upper part of the wall even when the wall is designed and detailed for plastic hinging at the base only (according to the ACI code). On the other hand, shear forces in excess of capacity design values can develop due to the contribution of the higher modes of vibration to the dynamic response under near-field motions, which can cause brittle shear or sliding failure modes. Past investigations on shear walls clearly show that the dual-hinge design concept is effective at reducing the effects of the second mode of response. An advantage of the concept is that, when combined with capacity design, it can allow relaxation of special reinforcing detailing in large portions of the wall. In this study, to investigate the implications of the multi-hinge design approach, 4 models with various arrangements of plastic hinges at the base and along the height of the shear wall are considered. Results based on time history analysis show that the dual or multi plastic hinge approach can be useful for controlling the high moment and shear demands caused by higher mode effects.
Keywords: higher mode effect, near-field earthquake, nonlinear time history analysis, multi plastic hinge design
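The higher-mode amplification at issue can be illustrated with a square-root-of-sum-of-squares (SRSS) modal combination (the modal moment values below are invented, purely to show the pattern): higher modes add little where the first mode dominates, at the base, but a large share in the upper wall, which is what motivates a second hinge up the height.

```python
import math

def srss(modal_values):
    # SRSS combination of peak modal responses.
    return math.sqrt(sum(v * v for v in modal_values))

# Hypothetical modal wall moments (kN*m) for modes 1..3 at two sections.
base_moments = [100.0, 15.0, 5.0]   # first mode dominates at the base
upper_moments = [10.0, 25.0, 12.0]  # modes 2-3 dominate in the upper wall

amplification_upper = srss(upper_moments) / upper_moments[0]
amplification_base = srss(base_moments) / base_moments[0]
```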
Procedia PDF Downloads 430
459 Dynamic Capabilities and Disorganization: A Conceptual Exploration
Authors: Dinuka Herath, Shelley Harrington
Abstract:
This paper prompts debate about whether disorganization can be positioned as a mechanism that facilitates the creation and enactment of important dynamic capabilities within an organization. This particular article is a conceptual exploration of the link between dynamic capabilities and disorganization and presents the case for agent-based modelling as a viable methodological tool which can be used to explore this link. Dynamic capabilities are those capabilities that an organization needs to sustain competitive advantage in complex environments. Disorganization is the process of breaking down restrictive organizational structures and routines that commonly reside in organizations in order to increase organizational performance. In the 20th century, disorganization was largely viewed as an undesirable phenomenon within an organization. However, the concept of disorganization has been revitalized and has garnered research interest in recent years due to studies which demonstrate some of its advantages to an organization. Furthermore, recent agent-based simulation studies have shown that disorganization can be managed and argue for it to be viewed as an enabler of organizational productivity. Given the natural occurrence of disorganization and the fear this can create, this paper argues that instead of trying to ‘correct’ disorganization, it should be actively encouraged where it serves a functional purpose. The study of dynamic capabilities emerged as a result of heightened dynamism, and consequently the very nature of dynamism denotes a level of fluidity and flexibility, something which this paper argues many organizations do not truly foster due to a constrained commitment to organization and order.
We argue in this paper that the very state of disorganization is a state that should be encouraged to develop the dynamic capabilities needed not only to deal with the complexities of the modern business environment but also to sustain competitive success. The significance of this paper stems from the fact that both dynamic capabilities and disorganization are concepts gaining prominence in their respective academic genres. Despite the attention each concept has received individually, no conceptual link has been established to depict how they actually interact with each other. We argue that the link between these two concepts presents a novel way of looking at organizational performance. By doing so, we explore the potential of these two concepts working in tandem to increase organizational productivity, which has significant implications for academics and practitioners alike. Keywords: agent-based modelling, disorganization, dynamic capabilities, performance
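The kind of agent-based exploration proposed above can be illustrated with a minimal sketch (parameters, agent rules, and the productivity measure below are purely hypothetical, not the authors' model): agents either follow a fixed routine task assignment or, with a probability set by a "disorganization" parameter, search freely for a task matching their skill, and productivity is measured as average skill-task fit.

```python
import random

def simulate(disorganization, n_agents=50, n_tasks=50, steps=200, seed=1):
    """Toy agent-based model: with probability `disorganization` an agent
    searches freely for the best-matching task; otherwise it follows a
    fixed routine assignment. Returns mean skill-task fit in [0, 1]."""
    rng = random.Random(seed)
    skills = [rng.random() for _ in range(n_agents)]
    tasks = [rng.random() for _ in range(n_tasks)]
    matched = 0.0
    for _ in range(steps):
        for i, skill in enumerate(skills):
            if rng.random() < disorganization:
                # free search: pick the task closest to the agent's skill
                task = min(tasks, key=lambda t: abs(t - skill))
            else:
                # routine: fixed assignment by index
                task = tasks[i % n_tasks]
            matched += 1.0 - abs(task - skill)  # closeness = productivity
    return matched / (steps * n_agents)

rigid = simulate(0.0)   # fully routinized organization
loose = simulate(0.8)   # mostly "disorganized" search
```

In this deliberately simple setting the loose regime outperforms the rigid one, which is the intuition the conceptual argument relies on; a serious study would of course need richer agent rules and costs of search.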
Procedia PDF Downloads 317
458 Applications and Development of a Plug Load Management System That Automatically Identifies the Type and Location of Connected Devices
Authors: Amy Lebar, Kim L. Trenbath, Bennett Doherty, William Livingood
Abstract:
Plug and process loads (PPLs) account for 47% of U.S. commercial building energy use. There is huge potential to reduce whole-building consumption by targeting PPLs for energy savings measures or implementing some form of plug load management (PLM). Despite this potential, there has yet to be a widely adopted commercial PLM technology. This paper describes the Automatic Type and Location Identification System (ATLIS), a PLM system framework with automatic and dynamic load detection (ADLD). ADLD gives PLM systems the ability to automatically identify devices as they are plugged into the outlets of a building. The ATLIS framework takes advantage of smart, connected devices to identify device locations in a building, meter and control their power, and communicate this information to a central database. ATLIS includes five primary capabilities: location identification, communication, control, energy metering, and data storage. A laboratory proof of concept (PoC) demonstrated all but the data storage capability, and the demonstrated capabilities were validated using an office building scenario. The PoC can identify when a device is plugged into an outlet and the location of the device in the building. When a device is moved, the PoC’s dashboard and database are automatically updated with the new location. The PoC controls devices from the system dashboard so that devices maintain correct schedules regardless of where they are plugged in within a building. ATLIS’s primary technology application is improved PLM, but other applications include asset management, energy audits, and interoperability for grid-interactive efficient buildings. A system like ATLIS could also be used to direct power to critical devices, such as ventilators, during a brownout or blackout.
Such a framework is an opportunity to make PLM more widespread and reduce the amount of energy consumed by PPLs in current and future commercial buildings. Keywords: commercial buildings, grid-interactive efficient buildings (GEB), miscellaneous electric loads (MELs), plug loads, plug load management (PLM)
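The location-tracking behavior described above (device plugged in, location recorded, database updated on a move) can be sketched as a minimal registry (class and identifier names are hypothetical illustrations, not the actual ATLIS implementation):

```python
class PlugLoadRegistry:
    """Toy model of ADLD-style automatic device location tracking:
    maps devices to outlets and updates when a device reappears
    at a different outlet."""

    def __init__(self):
        self.device_location = {}   # device_id -> outlet_id
        self.outlet_device = {}     # outlet_id -> device_id

    def device_detected(self, device_id, outlet_id):
        """Called when an outlet reports a newly connected device.
        If the device was known elsewhere, its old outlet is freed."""
        old = self.device_location.get(device_id)
        if old is not None and old in self.outlet_device:
            del self.outlet_device[old]  # device moved
        self.device_location[device_id] = outlet_id
        self.outlet_device[outlet_id] = device_id

    def locate(self, device_id):
        return self.device_location.get(device_id)

reg = PlugLoadRegistry()
reg.device_detected("laptop-42", "office-101/outlet-3")
reg.device_detected("laptop-42", "office-205/outlet-1")  # device moved
```

A real system would layer the communication, metering, and control capabilities on top of this mapping; the sketch only shows why automatic relocation keeps schedules attached to the device rather than the outlet.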
Procedia PDF Downloads 132
457 Quality Assurance in Higher Education: Doha Institute for Graduate Studies as a Case Study
Authors: Ahmed Makhoukh
Abstract:
Quality assurance (QA) has recently become a common practice endorsed by most Higher Education (HE) institutions worldwide, due to the pressure of internal and external forces. One of the aims of this quality movement is to make the contribution of university education to socio-economic development highly significant. This entails that graduates are currently required to have a high-quality profile, i.e., to be competent and to master the 21st-century skills needed in the labor market. This wave of change, mostly imposed by globalization, has the effect that university education should be learner-centered in order to satisfy the different needs of students and meet the expectations of other stakeholders. Such a shift of focus to student learning outcomes has led HE institutions to reconsider their strategic planning, their mission, the curriculum, and the pedagogical competence of the academic staff, among other elements. To ensure that the overall institutional performance is on the right track, a QA system should be established to take on the task of regularly checking the extent to which the set standards of evaluation are respected as expected. This operation of QA has the advantage of proving the accountability of the institution, gaining the trust of the public through transparency, and enjoying international recognition. This is the case of the Doha Institute (DI) for Graduate Studies in Qatar, the object of the present study. The significance of this contribution is to show that the conception of quality has changed in this digital age, and that a department responsible for QA needs to be integrated into every HE institution to ensure educational quality, enhance learning, and achieve academic leadership.
Thus, to undertake the issue of QA in the DI for Graduate Studies, an elite university (in the academic sense) that focuses on a small and selected number of students, a qualitative method will be adopted in the description and analysis of the data (document analysis). In an attempt to investigate the extent to which QA is achieved in the Doha Institute for Graduate Studies, three broad indicators will be evaluated (input, process, and learning outcomes). This investigation will be carried out in line with the UK Quality Code for Higher Education represented by the Quality Assurance Agency (QAA). Keywords: accreditation, higher education, quality, quality assurance, standards
Procedia PDF Downloads 147
456 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping
Authors: Masato Saeki
Abstract:
Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique with many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impact of granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment. Therefore, particle damping can be applied in extreme temperature environments where most conventional dampers would fail. Many papers have shown experimentally that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are assumed to be rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant since the equation of motion for each particle must be solved at each time step. In order to improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the behavior of the granular particles in each divided area of the damper container is the same, the contact force of the primary system with all particles can be taken to be the product of the number of divided damper areas and the contact force of the primary system with the granular materials in one divided area. This assumption makes it possible to considerably reduce the calculation time.
The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and the particle material influence the damper performance. Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level
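The acceleration idea above can be written out as a short sketch (illustrative numbers and a generic linear spring-dashpot contact law, not the authors' code): instead of summing the wall contact force over every particle in the damper, the force from one representative divided area is multiplied by the number of areas.

```python
def contact_force(overlap, rel_velocity, k=1.0e4, c=5.0):
    """Linear spring-dashpot normal contact force; zero when not in contact."""
    if overlap <= 0.0:
        return 0.0
    return k * overlap + c * rel_velocity

def wall_force_accelerated(overlaps_one_area, velocities_one_area, n_areas):
    """Assume each divided area of the damper behaves identically:
    total wall force = n_areas * force from one representative area."""
    per_area = sum(contact_force(o, v)
                   for o, v in zip(overlaps_one_area, velocities_one_area))
    return n_areas * per_area

# one representative area with two particles contacting the wall
f = wall_force_accelerated([1e-4, 2e-4], [0.01, -0.02], n_areas=8)
```

Only the particles of one representative area need their contact forces evaluated per time step, which is where the claimed reduction in calculation time comes from.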
Procedia PDF Downloads 453
455 “A Watched Pot Never Boils.” Exploring the Impact of Job Autonomy on Organizational Commitment among New Employees: A Comprehensive Study of How Empowerment and Independence Influence Workplace Loyalty and Engagement in Early Career Stages
Authors: Atnafu Ashenef Wondim
Abstract:
In today’s highly competitive business environment, employees are considered a source of competitive advantage. Researchers have looked into job autonomy's effect on organizational commitment and declared that superior organizational performance strongly depends on the effort and commitment of employees. The purpose of this study was to explore the relationship between job autonomy and organizational commitment from the newcomers' point of view. The mediating role of employee engagement (physical, emotional, and cognitive) was also examined in the case of Ethiopian commercial banks. An exploratory survey research design with a mixed-method approach, including partial least squares structural equation modeling and fuzzy-set qualitative comparative analysis, was used with a sample of 348 new employees. In-depth interviews using purposive and convenience sampling techniques were conducted with new employees (n=43). The results confirmed that job autonomy had positive, significant direct effects on physical engagement, emotional engagement, and cognitive engagement (path coeffs. = 0.874, 0.931, and 0.893). The results showed that the employee engagement driver, physical engagement, had a positive, significant influence on affective commitment (path coeff. = 0.187) and normative commitment (path coeff. = 0.512) but no significant effect on continuance commitment. Employee engagement partially mediates the relationship between job autonomy and organizational commitment, supporting the indirect effects of job autonomy on affective, continuance, and normative commitment through physical engagement. The findings of this study add new perspectives by positioning it within a complex African organizational setting and by expanding the job autonomy and organizational commitment literature, which will benefit future research.
Much of the literature on job autonomy and organizational commitment has been produced within the well-established organizational business context of Western developed countries. The findings provide fresh information on the enablers of job autonomy and organizational commitment that can assist in the formulation of better policies and strategies for their efficient adoption. Keywords: employee engagement, job autonomy, organizational commitment, social exchange theory
Procedia PDF Downloads 28
454 Towards Learning Query Expansion
Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier
Abstract:
The steady growth in the size of textual document collections is a key progress-driver for modern information retrieval techniques, whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user. In addition, retaining only relevant documents in a query answer is of paramount importance for effectively meeting the user's needs. In this situation, the query expansion technique offers an interesting solution for obtaining a complete answer while preserving the quality of retained documents. This mainly relies on an accurate choice of the terms added to an initial query. Interestingly, query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising track consists in the application of data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the conveyed knowledge. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it using a learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic-algorithm-based approach that explores the association rule space in order to find an optimal set of expansion terms, improving the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval. The results were better in terms of both MAP and NDCG.
The main observation is that hybridizing text mining techniques and query expansion in an intelligent way allows us to incorporate the good features of both. As this is a preliminary attempt in this direction, there is large scope for enhancing the proposed method. Keywords: supervised learning, classification, query expansion, association rules
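The core mechanism, mining term-to-term association rules from co-occurrences and expanding the query with confidently associated terms, can be sketched as follows (toy four-document collection and a 0.6 confidence threshold chosen for illustration, not the SDA 95 setup):

```python
from itertools import permutations

# toy collection: each document reduced to its set of index terms
docs = [
    {"solar", "energy", "panel"},
    {"wind", "energy", "turbine"},
    {"solar", "panel", "roof"},
    {"solar", "energy", "grid"},
]

def mine_rules(docs, min_conf=0.6):
    """Mine term -> term association rules, keeping rules whose
    confidence conf(a -> b) = P(b | a) meets the threshold."""
    rules = {}
    vocab = set().union(*docs)
    for a, b in permutations(vocab, 2):
        support_a = sum(1 for d in docs if a in d)
        support_ab = sum(1 for d in docs if a in d and b in d)
        if support_a and support_ab / support_a >= min_conf:
            rules.setdefault(a, set()).add(b)
    return rules

def expand_query(query, rules):
    """Append every term confidently associated with a query term."""
    expanded = set(query)
    for term in query:
        expanded |= rules.get(term, set())
    return expanded

rules = mine_rules(docs)
expanded = expand_query({"solar"}, rules)  # {"solar", "energy", "panel"}
```

The learning step described in the abstract then replaces the fixed confidence threshold with a trained classifier that decides which candidate rules actually improve retrieval.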
Procedia PDF Downloads 324
453 Digital Nomads: Current Context, Difficulties, and Opportunities for Costa Rica
Authors: Cristina Gutiérrez Carranza
Abstract:
Digital nomadism is a trend and lifestyle that combines work and traveling. This tourism tendency is motivated by the desire to have a fixed source of financial income while becoming independent of a specific work location. This study contextualizes Costa Rica and its prospects for taking advantage of this tourism market niche. It explores the dynamics of digital nomadism in the context of Costa Rica, analyzing the current scenario, challenges, and opportunities related to this global phenomenon. The research covers several areas, including the conceptualization of digital nomadism, its historical background, and its contemporary manifestations. The investigation delves into the present state of digital nomadism, evaluating the extent of digitalization in Costa Rica, mobile phone coverage, and fixed internet access. As the study develops, mapping the most common destinations of digital nomads is a key strategy, shedding light on the aspects that make Costa Rica an attractive location for this emerging tourist group. Additionally, the paper draws insights from hosting entrepreneurs and digital nomads with work visas in Costa Rica, offering a comprehensive understanding of the experiences and perspectives of both sides. Hence, the study includes data from a sample of 20 digital nomads holding visas for Costa Rica, offering a detailed analysis of their professional activities, experiences, and needs as remote workers in the country. It also adds perceptions from 10 entrepreneurs engaged in providing accommodation services to digital nomads, contributing to an understanding of how they have faced this growing movement. This research provides significant insights into the dynamics of digital nomadism in Costa Rica by integrating data from specific sources.
Policymakers, entrepreneurs, and other stakeholders are anticipated to gain valuable data from the findings regarding the opportunities and challenges of hosting and accommodating digital nomads, which will ultimately aid in the creation of plans to capitalize on this worldwide trend for the nation's socioeconomic development. Keywords: digital nomads, tourism, sustainability, digital nomads visa, remote jobs
Procedia PDF Downloads 68
452 Facile Surfactant-Assisted Green Synthesis of Stable Biogenic Gold Nanoparticles with Potential Antibacterial Activity
Authors: Sneha Singh, Abhimanyu Dev, Vinod Nigam
Abstract:
The major issue that determines the prospective use of gold nanoparticles (AuNPs) in nanobiotechnological applications is their particle size and stability. Often the AuNPs obtained biomimetically are considered useless owing to their instability in aqueous medium, thereby limiting the widespread acceptance of this facile green synthesis procedure. So, the use of nontoxic surfactants is warranted to stabilize the biogenic nanoparticles (NPs). But does the surfactant only play a stabilizing role by adsorbing to the NP surface, or can it make other significant contributions to the synthesis process and to controlling particle size and shape? Keeping this idea in mind, AuNPs were synthesized by using surfactant-treated (leachate) and untreated (cell lysate supernatant) Bacillus licheniformis cell extract. The cell-extract-mediated reduction of chloroauric acid (HAuCl₄) in the presence of the non-ionic surfactant Tween 20 (TW20), and its effect on AuNP stability, was studied. Interestingly, the surfactant used in the study served as a potential alternative for harvesting the cellular enzymes involved in the bioreduction process under hassle-free conditions. The surfactant's ability to solubilize/leach membrane proteins while simultaneously stabilizing the AuNPs is advantageous from a process point of view, as it reduces the time and cost involved in the nanofabrication of biogenic NPs. The synthesis was substantiated with UV-Vis spectroscopy, dynamic light scattering, FTIR spectroscopy, and transmission electron microscopy. The zeta potential of the AuNP solutions was measured routinely to corroborate the stability observations recorded visually. Highly stable, ultra-small AuNPs of 2.6 nm size were obtained from the study. Further, the biological efficacy of the obtained AuNPs as a potential antibacterial agent was evaluated against Bacillus subtilis, Pseudomonas aeruginosa, and Escherichia coli by observing the zone of inhibition.
This potential of AuNPs of size < 3 nm as an antibacterial agent could pave the way for the development of new antimicrobials and for overcoming the problems of antibiotic resistance. Keywords: antibacterial, bioreduction, nanoparticles, surfactant
Procedia PDF Downloads 236
451 A Topology-Based Dynamic Repair Strategy for Enhancing Urban Road Network Resilience under Flooding
Authors: Xuhui Lin, Qiuchen Lu, Yi An, Tao Yang
Abstract:
As global climate change intensifies, extreme weather events such as floods increasingly threaten urban infrastructure, making the vulnerability of urban road networks a pressing issue. Existing static repair strategies fail to adapt to the rapid changes in road network conditions during flood events, leading to inefficient resource allocation and suboptimal recovery. The main research gap lies in the lack of repair strategies that consider both the dynamic characteristics of networks and the progression of flood propagation. This paper proposes a topology-based dynamic repair strategy that adjusts repair priorities based on real-time changes in flood propagation and traffic demand. Specifically, a novel method is developed to assess and enhance the resilience of urban road networks during flood events. The method combines road network topological analysis, flood propagation modelling, and traffic flow simulation, introducing a local importance metric to dynamically evaluate the significance of road segments across different spatial and temporal scales. Using London's road network and rainfall data as a case study, the effectiveness of this dynamic strategy is compared to traditional and Transport for London (TFL) strategies. The most significant highlight of the research is that the dynamic strategy substantially reduced the number of stranded vehicles across different traffic demand periods, improving efficiency by up to 35.2%. The advantage of this method lies in its ability to adapt in real-time to changes in network conditions, enabling more precise resource allocation and more efficient repair processes. 
This dynamic strategy offers significant value to urban planners, traffic management departments, and emergency response teams, helping them better respond to extreme weather events like floods, enhance overall urban resilience, and reduce economic losses and social impacts. Keywords: urban resilience, road networks, flood response, dynamic repair strategy, topological analysis
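The dynamic prioritisation idea can be sketched on a toy network (an illustrative five-node graph and a simple connectivity-gain metric, not the London case study or the paper's local importance metric): after each repair, the benefit of restoring every remaining damaged segment is re-evaluated against the current network state instead of using a fixed ranking.

```python
from collections import deque

def reachable_pairs(nodes, edges):
    """Count ordered node pairs connected via BFS over undirected edges."""
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, total = set(), 0
    for start in nodes:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in comp:
                    comp.add(v)
                    queue.append(v)
        seen |= comp
        total += len(comp) * (len(comp) - 1)
    return total

def dynamic_repair_order(nodes, edges, damaged):
    """Greedy dynamic strategy: repeatedly repair the damaged edge whose
    restoration reconnects the most node pairs, re-evaluated each step."""
    damaged = set(damaged)
    order = []
    while damaged:
        open_edges = [e for e in edges if e not in damaged]
        best = max(damaged,
                   key=lambda e: reachable_pairs(nodes, open_edges + [e]))
        damaged.remove(best)
        order.append(best)
    return order

# linear road A-B-C-D-E with two flooded segments
nodes = ["A", "B", "C", "D", "E"]
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]
order = dynamic_repair_order(nodes, edges, damaged={("A", "B"), ("C", "D")})
```

Here repairing C-D first reconnects four nodes while repairing A-B first would only reconnect three, so the greedy re-evaluation picks C-D; the paper's strategy additionally folds in flood propagation and time-varying traffic demand.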
Procedia PDF Downloads 35
450 The Analysis of Drill Bit Optimization by the Application of New Electric Impulse Technology in Shallow Water Absheron Peninsula
Authors: Ayshan Gurbanova
Abstract:
Although the drill bit, the smallest part of the bottom hole assembly, accounts for only 10% to 15% of total expenses, it is the first piece of equipment in contact with the formation itself. Hence, it is consequential to choose the appropriate type and dimensions of drill bit, which prevents the majority of problems by not demanding many tripping procedures. Moreover, with advances in technology, it is now possible to benefit in terms of many factors, such as operation time, energy, expenditure, and power. With the intention of applying the method to Azerbaijan, the Shallow Water Absheron Peninsula field has been suggested, whose wildcat well, named “NKX01”, is located 15 km from the mainland in a water depth of 22 m. In 2015 and 2016, 2D and 3D seismic survey analyses were conducted in the contract area as well as at onshore shallow-water locations. For clear elucidation, soil stability, possible submersible hazard scenarios, geohazard, and bathymetry surveys were carried out as well. From the seismic analysis results, the exact locations of the exploration wells were determined, and along with this, the measurement decisions were made to divide the land into three productive zones. As for the method itself, Electric Impulse Technology (EIT) is based on the discharge of electrical energy into the rock. Put simply, very high voltages are created within nanoseconds and sent into the rock through bare electrodes. These electrodes, one high-voltage and one grounded, are placed on the formation, which may be submerged in liquid. With this design, it is easier to drill horizontal wells, owing to the advantage of loose contact with the formation.
There is also no wear, since no combustion or mechanical power is involved. In terms of energy, conventional drilling requires about 1000 J/cm³, whereas EIT requires between 100 and 200 J/cm³. Last but not least, the test analysis yielded a rate of penetration (ROP) of more than 2 m/hr over 15 days. Taking everything into consideration, the comparative data analysis shows that this method is highly applicable to the fields of Azerbaijan. Keywords: drilling, drill bit cost, efficiency, cost
Procedia PDF Downloads 73
449 International Entrepreneurial Orientation and Institutionalism: The Effect on International Performance for Latin American SMEs
Authors: William Castillo, Hugo Viza, Arturo Vargas
Abstract:
The Pacific Alliance is a trade bloc composed of four emerging economies: Chile, Colombia, Peru, and Mexico. These economies have gained macroeconomic stability in the past decade and consequently show prospects of future economic progress. Under this positive scenario, international business firms have flourished. However, the literature on this region remains largely unexamined. Therefore, it is critical to fill this theoretical gap, especially considering that Latin America is starting to become a global player and possesses a different institutional context than developed markets. This paper analyzes the effect of international entrepreneurial orientation and institutionalism on international performance for Pacific Alliance small-to-medium enterprises (SMEs). The literature considers international entrepreneurial orientation to be a powerful managerial capability – in line with the resource-based view – that firms can leverage to obtain satisfactory international performance, thereby gaining a competitive advantage through the correct allocation of key resources to exploit the capabilities involved. Entrepreneurial orientation is defined around five factors: innovation, proactiveness, risk-taking, competitive aggressiveness, and autonomy. Nevertheless, the institutional environment – both local and foreign – adversely affects international performance; this is especially the case for emerging markets with uncertain scenarios. In this way, the study analyzes entrepreneurial orientation, a key endogenous variable of international performance, and institutionalism, an exogenous variable. The survey data consist of Pacific Alliance SMEs that have foreign operations in at least one other country in the trade bloc. Findings are part of an ongoing research process. The study will then undertake structural equation modeling (SEM) using the variance-based partial least squares estimation procedure, with the SmartPLS software.
This research contributes to the theoretical discussion of a largely postponed topic, SMEs in Latin America, which has had limited academic research. It also has practical implications for decision-makers and policy-makers, providing insights into what is behind international performance. Keywords: institutional theory, international entrepreneurial orientation, international performance, SMEs, Pacific Alliance
Procedia PDF Downloads 248
448 Revealing Single Crystal Quality by Insight Diffraction Imaging Technique
Authors: Thu Nhi Tran Caliste
Abstract:
X-ray Bragg diffraction imaging (“topography”) entered into practical use when Lang designed an “easy” technical setup to characterise the defects and distortions in the high-perfection crystals produced for the microelectronics industry. The use of this technique extended to all kinds of high-quality crystals and deposited layers, and a series of publications explained, starting from the dynamical theory of diffraction, the contrast of the images of the defects. A quantitative version of monochromatic topography known as “Rocking Curve Imaging” (RCI) was implemented by using synchrotron light and taking advantage of the dramatic improvement of 2D detectors and computerised image processing. The raw data consist of a number (~300) of images recorded along the diffraction (“rocking”) curve. If the quality of the crystal is such that a one-to-one relation between a pixel of the detector and a voxel within the crystal can be established (this approximation is very well fulfilled if the local mosaic spread of the voxel is < 1 mradian), software we developed provides, from the rocking curve recorded on each pixel of the detector, not only the “voxel” integrated intensity (the only datum provided by previous techniques) but also its “mosaic spread” (FWHM) and peak position. We will show, based on many examples, that this new data, never recorded before, opens the field to a highly enhanced characterization of crystals and deposited layers. These examples include the characterization of dislocations and twins occurring during silicon growth; various growth features in Al₂O₃, GaN, and CdTe (where the diffraction displays the Borrmann anomalous absorption, which leads to a new type of image); and the characterisation of the defects within deposited layers, or their effect on the substrate.
We could also observe (thanks to the very high sensitivity of the setup installed on BM05, which allows revealing these faint effects) that, when dealing with very perfect crystals, the Kato interference fringes predicted by dynamical theory are also associated with very small modifications of the local FWHM and peak position (of the order of a µradian). This rather unexpected (at least for us) result appears to be in keeping with preliminary dynamical theory calculations. Keywords: rocking curve imaging, X-ray diffraction, defect, distortion
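The three per-pixel quantities named above can be sketched as follows (synthetic rocking curve, not beamline data): integrated intensity by trapezoidal summation, peak position as the angle of maximum intensity, and FWHM by linear interpolation of the half-maximum crossings.

```python
def rocking_curve_metrics(angles, intensities):
    """Per-pixel RCI-style analysis: returns (integrated intensity,
    peak position, FWHM) for one sampled rocking curve."""
    integ = sum(0.5 * (intensities[i] + intensities[i + 1])
                * (angles[i + 1] - angles[i])
                for i in range(len(angles) - 1))
    peak_i = max(range(len(intensities)), key=intensities.__getitem__)
    half = intensities[peak_i] / 2.0

    def crossing(i, j):
        # linearly interpolate the angle where intensity equals `half`
        di = intensities[j] - intensities[i]
        return angles[i] + (half - intensities[i]) * (angles[j] - angles[i]) / di

    left = next(crossing(i, i + 1) for i in range(peak_i)
                if intensities[i] <= half < intensities[i + 1])
    right = next(crossing(i, i + 1) for i in range(peak_i, len(angles) - 1)
                 if intensities[i] >= half > intensities[i + 1])
    return integ, angles[peak_i], right - left

# synthetic symmetric triangular peak centred at 0 (angles in mrad)
angles = [-2.0, -1.0, 0.0, 1.0, 2.0]
intensities = [0.0, 5.0, 10.0, 5.0, 0.0]
integ, peak, fwhm = rocking_curve_metrics(angles, intensities)
```

In the actual technique this computation runs independently for every detector pixel over the ~300 recorded images, producing maps of all three quantities; a real implementation would also need to handle noisy curves without clean half-maximum crossings.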
Procedia PDF Downloads 131
447 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving
Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian
Abstract:
In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has certain drawbacks, in that the state space of possible actions for a car is so huge that there cannot be a dataset for every possible road scenario. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that we can model different and complex road scenarios in a simulation without having to deploy in the real world. The autonomous agent can learn to drive by finding the optimal policy. This learned model can then be easily deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we have used TORCS (The Open Racing Car Simulator), which provides us with a strong foundation to test our models. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity, distance from the side pavement, etc. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results: the reward is greater, and the acceleration, steering angle, and braking are more stable than with the other algorithms, which means that the agent learns to drive in a better and more efficient way. Additionally, we have produced a dataset taken from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training run in the form (all input values, acceleration, steering angle, brake, loss, reward).
This study can serve as a basis for further, more complex road scenarios. Furthermore, it can be extended into the field of computer vision, using camera images to find the best policy. Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning
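The PPO result reported above rests on the algorithm's clipped surrogate objective. As a minimal illustration (not the authors' code), the following pure-Python sketch computes that objective from per-action log-probability ratios and advantage estimates; the function name and the clipping constant eps = 0.2 are illustrative assumptions.

```python
import math

def ppo_clipped_objective(new_logp, old_logp, advantages, eps=0.2):
    """PPO clipped surrogate: mean over samples of
    min(ratio * A, clip(ratio, 1 - eps, 1 + eps) * A),
    where ratio = pi_new(a|s) / pi_old(a|s)."""
    total = 0.0
    for nl, ol, adv in zip(new_logp, old_logp, advantages):
        ratio = math.exp(nl - ol)                      # probability ratio
        clipped = max(1.0 - eps, min(ratio, 1.0 + eps))  # clip to [1-eps, 1+eps]
        total += min(ratio * adv, clipped * adv)       # pessimistic of the two
    return total / len(advantages)
```

When the new and old policies agree (ratio = 1), the objective reduces to the mean advantage; the clipping only bites once an update moves the policy more than eps away from the data-collecting policy, which is what keeps PPO's acceleration and steering updates stable.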
Procedia PDF Downloads 147
446 Intensification of Heat Transfer Using Al₂O₃-Cu/Water Hybrid Nanofluid in a Circular Duct Using Inserts
Authors: Muluken Biadgelegn Wollele, Mebratu Assaye Mengistu
Abstract:
Nanotechnology has created new opportunities for improving industrial efficiency and performance. One proposed approach to improving the effectiveness of heat exchangers is the use of nanofluids to improve heat transfer performance. The thermal conductivity of the nanoparticles, as well as their size, diameter, and volume concentration, all influence the rate of heat transfer. Nanofluids are commonly used in automobiles, energy storage, electronic component cooling, solar absorbers, and nuclear reactors. Convective heat transfer must be improved when designing thermal systems in order to reduce heat exchanger size, weight, and cost. Using roughened surfaces to promote heat transfer has been attempted many times; thus, both active and passive heat transfer methods show potential for heat transfer improvement. Combining the two methods offers an added heat transfer advantage; however, the accompanying pressure drop must be considered during flow. The current research therefore aims to increase heat transfer by adding a twisted tape insert in a plain tube, using an Al₂O₃-Cu hybrid nanofluid with water as the base fluid. A circular duct with inserts, a tube length of 3 m, a hydraulic diameter of 0.01 m, tube walls with a constant heat flux of 20 kW/m², and a twist ratio of 125 was used to investigate the Al₂O₃-Cu/H₂O hybrid nanofluid. The temperature distribution is better than in conventional tube designs owing to the stronger tangential contact and swirl induced by the twisted tape. The Nusselt number values of tubes with plain twisted tape are 1.5–2.0 percent higher than those of plain tubes. When twisted tape is used instead of a plain tube, the performance evaluation criterion improves by a factor of 1.01.
A heat exchanger useful for a wide range of applications can be built using a combined analysis that incorporates both passive and active methodologies. Keywords: nanofluids, active method, passive method, Nusselt number, performance evaluation criteria
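The 1.01 figure follows from the standard thermo-hydraulic performance evaluation criterion, PEC = (Nu/Nu₀)/(f/f₀)^(1/3), which weighs the Nusselt-number gain against the pumping-power penalty at equal pumping power. A brief sketch; the friction-factor ratio of 1.03 used in the example is an illustrative assumption, not a value reported in the abstract.

```python
def performance_evaluation_criterion(nu_ratio, friction_ratio):
    """PEC = (Nu/Nu0) / (f/f0)**(1/3): heat-transfer enhancement over a
    plain tube, penalized by the cube root of the friction-factor rise."""
    return nu_ratio / friction_ratio ** (1.0 / 3.0)

# Illustrative values: a 2% Nusselt-number gain with an assumed 3% friction rise
pec = performance_evaluation_criterion(1.02, 1.03)  # ~1.01
```

A PEC above 1 means the insert's extra heat transfer outweighs its extra pressure drop, which is the criterion on which the twisted-tape geometry is judged here.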
Procedia PDF Downloads 74
445 Physicochemical Properties of Pea Protein Isolate (PPI)-Starch and Soy Protein Isolate (SPI)-Starch Nanocomplexes Treated by Ultrasound at Different pH Values
Authors: Gulcin Yildiz, Hao Feng
Abstract:
Soybean proteins are the most widely used and researched proteins in the food industry. Due to soy allergies among consumers, however, alternative legume proteins having similar functional properties have been studied in recent years. These alternative proteins are also expected to have a price advantage over soy proteins. One such protein that has shown good potential for food applications is pea protein. Besides the favorable functional properties of pea protein, it also contains fewer anti-nutritional substances than soy protein. However, a comparison of the physicochemical properties of pea protein isolate (PPI)-starch nanocomplexes and soy protein isolate (SPI)-starch nanocomplexes treated by ultrasound has not been well documented. This study was undertaken to investigate the effects of ultrasound treatment on the physicochemical properties of PPI-starch and SPI-starch nanocomplexes. Pea protein isolate (85% pea protein) provided by Roquette (Geneva, IL, USA) and soy protein isolate (SPI, Pro-Fam® 955) obtained from the Archer Daniels Midland Company were adjusted to different pH levels (2-12) and treated with 5 minutes of ultrasonication (100% amplitude) to form complexes with starch. The soluble protein content was determined by the Bradford method using BSA as the standard. The turbidity of the samples was measured using a spectrophotometer (Lambda 1050 UV/VIS/NIR Spectrometer, PerkinElmer, Waltham, MA, USA). The volume-weighted mean diameters (D4,3) of the soluble proteins were determined by dynamic light scattering (DLS). The emulsifying properties of the proteins were evaluated by the emulsion stability index (ESI) and emulsion activity index (EAI). Both the soy and pea protein isolates showed a U-shaped solubility curve as a function of pH, with a high solubility above the isoelectric point and a low one below it. Increasing the pH from 2 to 12 resulted in increased solubility for both the SPI and PPI-starch complexes.
The pea nanocomplexes showed greater solubility than the soy ones. The SPI-starch nanocomplexes showed better emulsifying properties determined by the emulsion stability index (ESI) and emulsion activity index (EAI) due to SPI’s high solubility and high protein content. The PPI had similar or better emulsifying properties at certain pH values than the SPI. The ultrasound treatment significantly decreased the particle sizes of both kinds of nanocomplex. For all pH levels with both proteins, the droplet sizes were found to be lower than 300 nm. The present study clearly demonstrated that applying ultrasonication under different pH conditions significantly improved the solubility and emulsifying properties of the SPI and PPI. The PPI exhibited better solubility and emulsifying properties than the SPI at certain pH levels. Keywords: emulsifying properties, pea protein isolate, soy protein isolate, ultrasonication
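For reference, the two emulsion indices used above are commonly computed from turbidity readings via the turbidimetric method of Pearce and Kinsella. The sketch below shows one common formulation; the function names and the numeric values in the usage comment are illustrative assumptions, not data from this study.

```python
def emulsion_activity_index(a0, dilution_factor, protein_conc_g_per_ml, oil_fraction):
    """EAI in m^2 per g protein: (2 * 2.303 * A0 * DF) / (c * phi * 10000),
    where A0 is the initial absorbance at 500 nm, DF the dilution factor,
    c the protein concentration (g/mL), and phi the oil volume fraction."""
    return (2 * 2.303 * a0 * dilution_factor) / (
        protein_conc_g_per_ml * oil_fraction * 10000
    )

def emulsion_stability_index(a0, a_t, dt_min=10.0):
    """ESI in minutes: A0 * dt / (A0 - At),
    with At the absorbance measured dt minutes after emulsification."""
    return a0 * dt_min / (a0 - a_t)

# Illustrative: A0 = 0.5, DF = 100, c = 0.001 g/mL, phi = 0.25 give an EAI
# of about 92 m^2/g; an absorbance drop to 0.25 after 10 min gives ESI = 20 min.
eai = emulsion_activity_index(0.5, 100, 0.001, 0.25)
esi = emulsion_stability_index(0.5, 0.25)
```

A higher EAI reflects more interfacial area stabilized per gram of protein, and a higher ESI a slower loss of turbidity over time, which is how the SPI and PPI emulsions are ranked in this abstract.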
Procedia PDF Downloads 319
444 Adaptive Assemblies: A Scalable Solution for Atlanta's Affordable Housing Crisis
Authors: Claudia Aguilar, Amen Farooq
Abstract:
Like other cities in the United States, Atlanta is experiencing levels of growth that surpass anything witnessed in the last century. With this surge of population influx, the available housing is practically bursting at the seams: supply is low, and demand is high. As a result, the average one-bedroom apartment rents for 1,800 dollars per month. The city is urgently seeking new opportunities to provide affordable housing at an expeditious rate, as evidenced by the recent updates to the city’s zoning. Amid this strained housing market, young professionals, in particular millennials, are desperately looking for alternatives that allow them to stay within the city. To remedy the affordable housing crisis, the city of Atlanta plans to introduce 40,000 new affordable housing units by 2026. To meet this urgent need for more affordable housing, the architectural response must adapt. One method that has proven successful in modern housing is modular development, but it has been constrained by the dimensions of the maximum load of an eighteen-wheeler. This constraint has diluted the architect’s ability to produce site-specific, informed design and has contributed to the “cookie cutter” stigma with which the method has been labeled. This thesis explores a design methodology for modular housing by revisiting its constructability and adaptability. The research focuses on a modular housing type that breaks away from the constraints of transport and delivers adaptive, reconfigurable assemblies. The adaptive assemblies represent an integrated design strategy for assembling the future of affordable dwelling units. The goal is to take advantage of a component-based system and explore a scalable solution to modular housing.
This proposal specifically aims to design a kit of parts that is easy to transport and assemble while allowing the use of components to be customized to suit all unique site conditions. The benefits of this concept could include decreased construction time, cost, on-site labor, and disruption, while providing quality housing with affordable and flexible options. Keywords: adaptive assemblies, modular architecture, adaptability, constructibility, kit of parts
Procedia PDF Downloads 85