Search results for: shore protective structures
220 Chemical Study and Cytotoxic Activity of Extracts from Erythroxylum Genus against HeLa Cells
Authors: Richele P. Severino, Maria M. F. Alchaar, Lorena R. F. De Sousa, Patrik S. Vital, Ana G. Silva, Rosy I. M. A. Ribeiro
Abstract:
Recognized as a global biodiversity hotspot, the Cerrado (Brazil) presents an extreme abundance of endemic species and is considered one of the biologically richest tropical savanna regions in the world. The Erythroxylum genus is found in the Cerrado and is chemically characterized by the presence of tropane alkaloids, among them cocaine, a natural alkaloid produced by Erythroxylum coca Lam. that was once used as a local anesthetic in small surgeries. However, cocaine gained notoriety due to its psychoactive activity in the Central Nervous System (CNS), becoming one of the major public health problems today. Some species of Erythroxylum are referred to in the literature as having pharmacological potential, providing alkaloids, terpenoids, and flavonoids. E. vacciniifolium Mart., commonly known as 'catuaba', is used as a central nervous system stimulant and has aphrodisiac properties, while E. pelleterianum A. St.-Hil. is used in the treatment of stomach pains. E. myrsinites Mart. and E. suberosum A. St.-Hil., in turn, are used in the tannery industry. Species of Erythroxylum are also used in folk medicine against various conditions, including diabetes, and as antiviral, fungicidal, and cytotoxic agents, among others. Although the Cerrado is the savanna richest in biodiversity in the world, it remains little explored from a chemical point of view. In our ongoing study of the chemistry of the Erythroxylum genus, we have investigated four specimens collected in the central Cerrado of Brazil: E. campestre (EC), E. deciduum (ED), E. suberosum (ES) and E. tortuosum (ET). The cytotoxic activity of the extracts was evaluated in vitro using HeLa cells. The chemical investigation was performed by preparing extracts with n-hexane (H), dichloromethane (D), ethyl acetate (E) and methanol (M). The cells were treated with increasing concentrations of extracts (50, 75 and 100 μg/mL) diluted in DMSO (1%) and DMEM (0.5% FBS and 1% P/S). 
The IC₅₀ values were determined spectrophotometrically at 570 nm after incubation of the HeLa cell line for 48 hours, using the MTT assay (SIGMA M5655), and calculated by nonlinear regression analysis with GraphPad Prism software. All assays were done in triplicate and repeated at least twice. The cytotoxic assays showed some promising results, with IC₅₀ values below 100 μg/mL (ETD = 38.5 μg/mL; ETM = 92.3 μg/mL; ESM = 67.8 μg/mL; ECD = 24.0 μg/mL; ECM = 32.9 μg/mL; EDA = 44.2 μg/mL). The chemical profile of the ethyl acetate (E) and methanolic (M) extracts of E. tortuosum leaves was studied by LC-MS, and the structures of the compounds were determined by analysis of ¹H, HSQC and HMBC spectra and confirmed by comparison with literature data. The investigation led to six substances: α-amyrin, β-amyrin, campesterol, stigmastan-3,5-diene, β-sitosterol and 7,4’-di-O-methylquercetin-3-O-β-rutinoside, with the flavonoid being the major compound of the extracts. By alkaline extraction of the methanolic extract, it was possible to identify three alkaloids: tropacocaine, cocaine and 6-methoxy-8-methyl-8-azabicyclo[3.2.1]octan-3-ol. The results obtained are important for the chemical knowledge of Cerrado biodiversity and contribute to the chemistry of the Erythroxylum genus.
Keywords: cytotoxicity, Erythroxylum, chemical profile, secondary metabolites
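The IC₅₀ determination described above (viability versus extract concentration, fitted by nonlinear regression in GraphPad Prism) can be illustrated with a minimal sketch. The function below is illustrative only and is not the authors' analysis: instead of Prism's four-parameter logistic fit, it estimates IC₅₀ by linear interpolation between the two tested concentrations that bracket 50% viability; the function name and example numbers are assumptions.

```python
def ic50_interpolate(conc, viability):
    """Estimate IC50 (concentration giving 50% viability) by linear
    interpolation between the two consecutive tested concentrations
    that bracket the 50% crossing.
    conc: increasing concentrations (ug/mL);
    viability: fractions relative to untreated control (1.0 = 100%)."""
    for (c1, v1), (c2, v2) in zip(zip(conc, viability),
                                  zip(conc[1:], viability[1:])):
        if v1 >= 0.5 >= v2:  # 50% crossing lies between c1 and c2
            return c1 + (v1 - 0.5) * (c2 - c1) / (v1 - v2)
    return None              # no crossing within the tested range

# Viability falls through 50% between 50 and 75 ug/mL
print(ic50_interpolate([50, 75, 100], [0.62, 0.41, 0.20]))  # ~64.3
```

With only three concentrations per extract, as here, an interpolated estimate is a rough stand-in; a full dose-response fit needs more points.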
Procedia PDF Downloads 145
219 Sustainability and Smart Cities Planning in Contrast with City Humanity. Human Scale and City Soul (Neighbourhood Scale)
Authors: Ghadir Hummeid
Abstract:
Undoubtedly, our world is directing all its purposes and efforts toward achieving sustainable development in all respects. Sustainability has been regarded as a solution to many challenges of our world today, both material and immaterial. Our world currently faces new consequences and challenges, such as global climate change, the use of non-renewable resources, environmental pollution, declining urban health, the aging of urban areas, and rapidly increasing migration into urban areas, which is linked to consequences such as high infrastructure density and social segregation. All of this requires new forms of governance, new urban policies, and more efficient efforts and urban applications. Given that cities are the core of life and a fundamental axis of it, their development can increase or decrease the quality of life of their inhabitants. Architects and planners today see the need to create new approaches and new sustainable policies to develop urban areas that correspond to the physical and non-physical transformations cities are now experiencing, in order to enhance people's lives and provide for their needs in the present without compromising the needs and lives of future generations. The application of sustainability has become an inescapable part of the development and projection of city planning, yet its definition has remained elusive due to the plurality and difference of its applications. As conceptualizations of technology arise and come to dominate all aspects of life today, from smart citizens and smart life rhythms to smart production, smart structures, and smart frameworks, they have influenced sustainability applications in the planning and urbanization of cities as well. The term "smart city" emerged from this influence as one of the possible key solutions to sustainability. The term has various perspectives, applications, and definitions in the literature and in urban practice. 
However, after observing smart city applications in current cities, this paper defines the smart city as an urban environment that is controlled by technologies yet lacks the physical architectural representation of this smartness, as current smart applications are mostly obscured from the public: they are applied on a diminutive scale and are highly integrated into the built environment. Regardless of the importance of these technologies in improving the quality of people's lives and in facing cities' challenges, it is important not to neglect how their architectural and urban presentation will affect the shaping and development of city neighborhoods. By investigating the concept of smart cities and exploring its potential applications on a neighbourhood scale, this paper aims to shed light on the challenges faced by cities and to explore innovative solutions, such as smart city applications in urban mobility, and how they affect different aspects of communities. The paper aims to shape better articulations of smart neighborhoods' morphologies on the social, architectural, functional, and material levels, in order to understand how to create more sustainable and liveable approaches to developing urban environments inside cities. The findings of this paper will contribute to ongoing discussions and efforts in achieving sustainable urban development.
Keywords: sustainability, urban development, smart city, resilience, sense of belonging
Procedia PDF Downloads 79
218 The Effect of Artificial Intelligence on Mobile Phones and Communication Systems
Authors: Ibram Khalafalla Roshdy Shokry
Abstract:
This paper presents a carrier sense multiple access (CSMA) communication model based on an SoC design methodology. Such a model can be used to guide the modelling of complex wireless communication systems; hence, the use of such a communication model is an important method in the construction of high-performance communication. SystemC has been selected as it offers a homogeneous design flow for complex designs (i.e., SoC and IP-based design). We use a swarm system to validate the designed CSMA model and to show the advantages of incorporating communication early in the design process. The wireless communication is created via the modelling of the CSMA protocol, which can be used to achieve communication among all the agents and to coordinate access to the shared medium (channel). The equipping of vehicles with wireless communication capabilities is expected to be the key to the evolution to next-generation intelligent transportation systems (ITS). The IEEE community has been continuously working on the development of a wireless vehicular communication protocol for the enhancement of Wireless Access in Vehicular Environments (WAVE). Vehicular communication systems, known as V2X, support vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. The efficiency of such communication systems depends on several factors, among which the surrounding environment and mobility are prominent. As a result, this study focuses on the evaluation of the actual performance of vehicular communication, with particular attention to the effects of the real environment and mobility on V2X communication. It begins by determining the actual maximum range that such communication can support and then evaluates V2I and V2V performance. The Arada LocoMate OBU transmission device was used to test and evaluate the effect of transmission range in V2X communication. 
The evaluation of V2I and V2V communication takes the real effects of low and high mobility on transmission into consideration. Multi-agent systems have received significant attention in numerous fields, including robotics, autonomous vehicles, and distributed computing, where multiple agents cooperate and communicate to achieve complex tasks. Efficient communication among agents is a critical aspect of these systems, as it directly influences their overall performance and scalability. This work explores essential communication factors and conducts a comparative assessment of the diverse protocols utilized in multi-agent systems. The emphasis lies in scrutinizing the strengths, weaknesses, and applicability of these protocols across diverse scenarios. The study also sheds light on emerging trends in communication protocols for multi-agent systems, including the incorporation of machine learning techniques and the adoption of blockchain-based solutions to ensure secure communication. These developments offer valuable insights into the evolving landscape of multi-agent systems and their communication protocols.
Keywords: communication, multi-agent systems, protocols, consensus, SystemC, modelling, simulation, CSMA
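The CSMA medium-access idea discussed above can be illustrated with a toy simulation. This is a hedged sketch in Python, not the paper's SystemC model: it abstracts time into slots and counts a slot as successful only when exactly one agent transmits (a p-persistent CSMA abstraction; all names and parameters are illustrative).

```python
import random

def csma_success_rate(n_agents, p, slots, seed=0):
    """Toy p-persistent CSMA over a shared channel: in every slot each
    agent transmits with probability p; a slot succeeds only if exactly
    one agent transmits (two or more -> collision, zero -> idle)."""
    rng = random.Random(seed)  # fixed seed for repeatability
    successes = 0
    for _ in range(slots):
        transmitters = sum(rng.random() < p for _ in range(n_agents))
        if transmitters == 1:
            successes += 1
    return successes / slots

# Aggressive senders always collide; a lone sender always succeeds.
print(csma_success_rate(3, 1.0, 100))  # 0.0 -- every slot collides
print(csma_success_rate(1, 1.0, 100))  # 1.0 -- no contention
```

Lowering p trades idle slots against collisions, which is the tension that real CSMA back-off schemes manage.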
Procedia PDF Downloads 25
217 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication
Authors: Farhan A. Alenizi
Abstract:
Digital watermarking has evolved in the past years as an important means for data authentication and ownership protection. Image and video watermarking is well known in the field of multimedia processing; however, 3D object watermarking techniques have emerged as an important means for the same purposes, as 3D mesh models are in increasing use in different areas of scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the space or the transform domain. Unlike images and video, where the frames have regular structures in both the space and temporal domains, 3D objects are represented in different ways as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations that may be hard to tackle. This makes the watermarking process more challenging. While transform domain watermarking is preferable in images and videos, it is still difficult to implement in 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial domain watermarking has attracted significant attention in the past years; such methods can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models, from both geometrical and topological aspects, has proved useful in hiding data. However, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A 3D mesh blind watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object. 
An optimal method is developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process and reducing the computational complexity due to iterations and other factors. The technique relies on displacing the vertices' locations by modifying the variances of the vertices' norms. Statistical analyses were performed to establish the distributions that best fit each mesh, and hence to establish the bin sizes. Several optimization approaches were introduced concerning mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative quality, and against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated. To validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they showed robustness in this respect as well. 3D watermarking is still a new field, but a promising one.
Keywords: watermarking, mesh objects, local roughness, Laplacian smoothing
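The abstract embeds data by statistically modifying vertex norms within bins. As an illustrative sketch of the same family of spatial-domain methods (a mean-modification variant, not the authors' variance-based algorithm; all names are hypothetical), the snippet below embeds one bit per bin by re-mapping normalized vertex norms with a power function so their mean lands above or below 0.5:

```python
def embed_bit(norms, bit, alpha=0.8):
    """Embed one bit in a bin of vertex norms: normalize the norms to
    [0, 1], then apply a power map that pushes the normalized mean above
    0.5 for bit=1 (exponent alpha < 1) or below 0.5 for bit=0 (1/alpha).
    The bin endpoints are preserved (0 -> 0, 1 -> 1)."""
    lo, hi = min(norms), max(norms)
    k = alpha if bit else 1.0 / alpha
    out = []
    for n in norms:
        u = (n - lo) / (hi - lo)             # normalize into [0, 1]
        out.append(lo + (hi - lo) * u ** k)  # power map shifts the mean
    return out

def extract_bit(norms):
    """Blind extraction: re-normalize the bin and threshold its mean."""
    lo, hi = min(norms), max(norms)
    mean = sum((n - lo) / (hi - lo) for n in norms) / len(norms)
    return int(mean > 0.5)

# Round trip: the embedded bit is recoverable from the modified norms alone
norms = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]
print(extract_bit(embed_bit(norms, 1)), extract_bit(embed_bit(norms, 0)))
```

Because extraction needs only the watermarked norms, the scheme is blind, matching the property claimed in the abstract; robustness to the attacks listed there would require the fuller statistical machinery the paper describes.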
Procedia PDF Downloads 160
216 Exploration of Barriers and Challenges to Innovation Process for SMEs: Possibilities to Promote Cooperation Between Scientific and Business Institutions to Address It
Authors: Indre Brazauskaite, Vilte Auruskeviciene
Abstract:
The significance of the study is outlined through the current strategic management challenges faced by SMEs. First, innovation is recognized as a competitive advantage in a market with ever-changing conditions; capturing and capitalizing on business opportunities, or mitigating foreseen risks, is of constant interest to both practitioners and academics. Secondly, it is recognized that an integrated system is needed for the proper implementation of the innovation process, especially during the period of business incubation, which is associated with relatively high risks of new product failure. Finally, the ability to successfully commercialize innovations leads to tangible business results that allow organizations to grow further. This is particularly relevant to SMEs due to their limited structures, resources, and capabilities. Cooperation between scientific and business institutions could be a tool of mutual interest to observe, address, and further develop innovations during the incubation period, which is the most demanding and challenging phase of the innovation process. The material aims to address the following problematics: i) indicate the major barriers and challenges in the innovation process that SMEs are facing, and ii) outline the possibilities for these barriers and challenges to be addressed through cooperation between scientific and business institutions. The basis for this research is a stage-by-stage integrated innovation management process, which presents existing challenges and the aid needed in operational decision making. The stage-by-stage exploration of the innovation management process highlights relevant research opportunities with high practical relevance in the field. It is expected to reveal the possibility of business incubation programs that could combine interest from both practitioners and academia. Methodology: a scientific meta-analysis of the literature to date that explores the innovation process. The research model is built on a combination of the stage-gate model and the lean six sigma approach. 
It outlines the following steps: i) pre-incubation (discovery and screening), ii) incubation (scoping, planning, development, and testing), and iii) post-incubation (launch and commercialization). Empirical quantitative research is conducted to address the barriers and challenges in the innovation process that prevent SMEs' innovations from successful launch and commercialization, and to identify potential areas for cooperation between scientific and business institutions. The research sample, high-level decision makers representing trading SMEs, is approached with a structured survey based on the research model to investigate the challenges associated with each innovation management step. Expected findings: first, the current business challenges in the innovation process are revealed, outlining the strengths and weaknesses of innovation management practices and systems across SMEs. Secondly, the study will provide material for relevant business case investigation to serve as future research directions for scholars, contributing to a better understanding of quality innovation management systems. Third, it will contribute to understanding the need for business incubation systems with mutual contributions from practitioners and academia, which can increase the relevance and adoption of business research.
Keywords: cooperation between scientific and business institutions, innovation barriers and challenges, innovation measure, innovation process, SMEs
Procedia PDF Downloads 150
215 LES Simulation of a Thermal Plasma Jet with Modeled Anode Arc Attachment Effects
Authors: N. Agon, T. Kavka, J. Vierendeels, M. Hrabovský, G. Van Oost
Abstract:
A plasma jet model was developed with a rigorous method for calculating the thermophysical properties of the gas mixture without mixing rules. A simplified model approach to account for the anode effects was incorporated in this model to allow the validation of the simulations against experimental results. The radial heat transfer was under-predicted by the model because of the limitations of the radiation model, but the calculated evolution of centerline temperature, velocity and gas composition downstream of the torch exit corresponded well with the measured values. The CFD modeling of thermal plasmas is focused either on the development of the plasma arc or on the flow of the plasma jet outside of the plasma torch. In the former case, the Maxwell equations are coupled with the Navier-Stokes equations to account for electromagnetic effects, which control the movements of the anode arc attachment. In plasma jet simulations, however, the computational domain starts from the exit nozzle of the plasma torch, and the influence of the arc attachment fluctuations on the plasma jet flow field is not included in the calculations. In that case, the thermal plasma flow is described by temperature, velocity and concentration profiles at the torch exit nozzle, and no electromagnetic effects are taken into account. This simplified approach is widely used in the literature and is generally acceptable for plasma torches with a circular anode inside the torch chamber. The unique DC hybrid water/gas-stabilized plasma torch developed at the Institute of Plasma Physics of the Czech Academy of Sciences, on the other hand, consists of a rotating anode disk located outside of the torch chamber. Neglecting the effects of the anode arc attachment downstream of the torch exit nozzle leads to erroneous predictions of the flow field. 
With the simplified approach introduced in this model, the Joule heating between the exit nozzle and the anode attachment position of the plasma arc is modeled by a volume heat source, and the jet deflection caused by the anode processes by a momentum source at the anode surface. Furthermore, radiation effects are included by the net emission coefficient (NEC) method, and diffusion is modeled with the combined diffusion coefficient method. The time-averaged simulation results are compared with numerous experimental measurements. The radial temperature profiles were obtained by spectroscopic measurements at different axial positions downstream of the exit nozzle. The velocity profiles were evaluated from the time-dependent evolution of flow structures, recorded by photodiode arrays. The shape of the plasma jet was compared with charge-coupled device (CCD) camera pictures. In the cooler regions, the temperature was measured by an enthalpy probe downstream of the exit nozzle and by thermocouples in the radial direction around the torch nozzle. The model results correspond well with the experimental measurements. The decrease in centerline temperature and velocity is predicted within an acceptable range, and the shape of the jet closely resembles the jet structure in the recorded images. The temperatures at the edge of the jet are underestimated due to the absence of radial radiative heat transfer in the model.
Keywords: anode arc attachment, CFD modeling, experimental comparison, thermal plasma jet
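The volume heat source standing in for Joule heating can be made concrete with a back-of-the-envelope sketch. This is an illustrative calculation under assumed numbers, not the authors' CFD source term; the function name, the uniform-distribution assumption, and the example values are all hypothetical.

```python
def joule_volume_source(current_a, voltage_drop_v, volume_m3):
    """Uniform volumetric heat source (W/m^3): total Joule power I*U
    deposited between the nozzle exit and the anode attachment, spread
    evenly over an assumed arc-column volume."""
    return current_a * voltage_drop_v / volume_m3

# e.g. a 500 A arc with a 20 V drop over a 1e-4 m^3 column
print(joule_volume_source(500.0, 20.0, 1e-4))  # of order 1e8 W/m^3
```

In the actual model the source region and magnitude would be tied to the measured anode attachment position rather than fixed by hand.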
Procedia PDF Downloads 367
214 A Comparative Study on the Influencing Factors of Urban Residential Land Prices Among Regions
Authors: Guo Bingkun
Abstract:
With the rapid development of China's social economy and the continuous improvement of urbanization levels, people's living standards have undergone tremendous changes, and more and more people are gathering in cities. The housing demand of urban residents has been greatly released over the past decade. The demand for housing, and for the construction land required for urban development, has put huge pressure on urban operations, and land prices have also risen rapidly in the short term. On the other hand, a comparison of China's regions shows great differences in urban socioeconomic conditions and land prices among the eastern, central and western regions. Judging from current overall market development, after more than ten years of housing market reform, the quality of housing and the land use efficiency of Chinese cities have been greatly improved. However, the contradiction between the land demand of urban socio-economic development and land supply, especially for urban residential land, has not been effectively alleviated. Since land is closely linked to all aspects of society, changes in land prices are affected by many complex factors. Therefore, this paper studies the factors that may affect urban residential land prices, compares them among eastern, central and western cities, and identifies the main factors that determine the level of urban residential land prices. The paper provides guidance for urban managers in formulating land policies and alleviating land supply and demand pressures, offers distinct ideas for improving urban planning, and promotes the improvement of urban management. The research focuses on residential land prices. Generally, the indicators for measuring land prices mainly include benchmark land prices, land price level values, and parcel land prices. 
However, considering the requirements of data continuity and representativeness, this paper uses residential land price level values to reflect the status of urban residential land prices. First, based on existing research at home and abroad, the paper considers both land supply and demand and, on the basis of theoretical analysis, determines factors that may affect urban residential land prices, such as urban expansion, taxation, land reserves, population, and land benefits, selecting representative indicators for each. Secondly, using conventional econometric analysis methods, we established a model of the factors affecting urban residential land prices, quantitatively analyzed the relationship between the influencing factors and residential land prices and its intensity, and compared the similarities and differences in these effects among the eastern, central and western regions. The results show that the main factors affecting China's urban residential land prices are urban expansion, land use efficiency, taxation, population size, and residents' consumption. The main reasons for the difference in residential land prices among the eastern, central and western regions are differences in urban expansion patterns, industrial structures, urban carrying capacity and real estate development investment.
Keywords: urban housing, urban planning, housing prices, comparative study
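The econometric model relating land price to its influencing factors can be sketched as an ordinary-least-squares fit. The snippet below is illustrative only: the synthetic numbers, variable names, and two-factor setup are assumptions standing in for the conventional econometric analysis the paper describes.

```python
import numpy as np

def fit_land_price_model(X, y):
    """Linear model: land price ~ intercept + influencing factors
    (e.g. urban expansion, population).  Returns [intercept, coefs]
    estimated by ordinary least squares."""
    A = np.column_stack([np.ones(len(y)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Synthetic data generated from price = 10 + 2*expansion + 0.5*population
X = np.array([[1.0, 100], [2.0, 150], [3.0, 120], [4.0, 200], [5.0, 180]])
y = 10 + 2 * X[:, 0] + 0.5 * X[:, 1]
print(fit_land_price_model(X, y))  # recovers approximately [10, 2, 0.5]
```

Comparing regions then amounts to fitting the same specification on eastern, central, and western city samples and contrasting the estimated coefficients.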
Procedia PDF Downloads 50
213 Seismic Response Control of Multi-Span Bridge Using Magnetorheological Dampers
Authors: B. Neethu, Diptesh Das
Abstract:
The present study investigates the performance of a semi-active controller using magnetorheological (MR) dampers for seismic response reduction of a multi-span bridge. The application of structural control to structures under earthquake excitation involves numerous challenges, such as the proper formulation and selection of the control strategy, mathematical modeling of the system, uncertainty in system parameters, and noisy measurements. These problems need to be tackled in order to design and develop controllers that perform efficiently in such complex systems. A control algorithm that can accommodate uncertainty and imprecision better than the other algorithms mentioned so far, owing to its inherent robustness and ability to cope with parameter uncertainties and imprecision, is the sliding mode algorithm. A sliding mode control algorithm is therefore adopted in the present study due to its inherent stability and distinguished robustness to system parameter variation and external disturbances. In general, a semi-active control scheme using an MR damper requires two nested controllers: (i) an overall system controller, which derives the control force required to be applied to the structure, and (ii) an MR damper voltage controller, which determines the voltage that must be supplied to the damper in order to generate the desired control force. In the present study, a sliding mode algorithm is used to determine the desired optimal force. The function of the voltage controller is to command the damper to produce the desired force. The clipped optimal algorithm is used to find the command voltage supplied to the MR damper, regulated by a semi-active control law based on the sliding mode algorithm. The main objective of the study is to propose a robust semi-active control scheme that can effectively control the responses of the bridge under real earthquake ground motions. 
A lumped mass model of the bridge is developed, and time history analysis is carried out by solving the governing equations of motion in state space form. The effectiveness of the MR dampers is studied through analytical simulations by subjecting the bridge to real earthquake records. In this regard, it may also be noted that the performance of controllers depends, to a great extent, on the characteristics of the input ground motions. Therefore, in order to study the robustness of the controller, its performance has been investigated for fourteen different earthquake ground motion records. The earthquakes are chosen in such a way that all possible characteristic variations are accommodated: seven are near-field and seven are far-field, and they span different frequency contents, viz., low-frequency, medium-frequency, and high-frequency earthquakes. The responses of the controlled bridge are compared with the responses of the corresponding uncontrolled bridge (i.e., the bridge without any control devices). The results of the numerical study show that the sliding mode based semi-active control strategy can substantially reduce the seismic responses of the bridge, showing stable and robust performance for all the earthquakes.
Keywords: bridge, semi-active control, sliding mode control, MR damper
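The clipped optimal voltage law mentioned above admits a compact statement. The sketch below gives the generic form of the clipped-optimal rule (not necessarily the exact law or parameter values used in this study): command the maximum voltage when the measured damper force is smaller in magnitude than, and acts in the same direction as, the desired force from the sliding mode controller; otherwise command zero voltage.

```python
def clipped_optimal_voltage(f_desired, f_measured, v_max):
    """Clipped-optimal command for an MR damper:
    - same sign AND |measured force| < |desired force|  -> full voltage,
      so the damper force grows toward the desired force;
    - otherwise -> zero voltage, letting the force decay."""
    same_direction = f_desired * f_measured > 0
    if same_direction and abs(f_measured) < abs(f_desired):
        return v_max
    return 0.0

# Damper force lags the desired force -> apply maximum voltage
print(clipped_optimal_voltage(100.0, 50.0, 9.0))   # 9.0
# Damper force overshoots or opposes it -> cut the voltage
print(clipped_optimal_voltage(100.0, 150.0, 9.0))  # 0.0
```

The bang-bang character of this law is what makes it practical: the damper's force-voltage relation is hard to invert, so the controller only ever needs to decide between zero and maximum voltage.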
Procedia PDF Downloads 124
212 An Interoperability Concept for Detect and Avoid and Collision Avoidance Systems: Results from a Human-In-The-Loop Simulation
Authors: Robert Rorie, Lisa Fern
Abstract:
The integration of Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS) poses a variety of technical challenges to UAS developers and aviation regulators. In response to growing demand for access to civil airspace in the United States, the Federal Aviation Administration (FAA) has produced a roadmap identifying key areas requiring further research and development. One such technical challenge is the development of a ‘detect and avoid’ system (DAA; previously referred to as ‘sense and avoid’) to replace the ‘see and avoid’ requirement in manned aviation. The purpose of the DAA system is to support the pilot, situated at a ground control station (GCS) rather than in the cockpit of the aircraft, in maintaining ‘well clear’ of nearby aircraft through the use of GCS displays and alerts. In addition to its primary function of aiding the pilot in maintaining well clear, the DAA system must also safely interoperate with existing NAS systems and operations, such as the airspace management procedures of air traffic controllers (ATC) and collision avoidance (CA) systems currently in use by manned aircraft, namely the Traffic alert and Collision Avoidance System (TCAS) II. It is anticipated that many UAS architectures will integrate both a DAA system and a TCAS II. It is therefore necessary to explicitly study the integration of DAA and TCAS II alerting structures and maneuver guidance formats to ensure that pilots understand the appropriate type and urgency of their response to the various alerts. This paper presents a concept of interoperability for the two systems. The concept was developed with the goal of avoiding any negative impact on the performance level of TCAS II (understanding that TCAS II must largely be left as-is) while retaining a DAA system that still effectively enables pilots to maintain well clear, and, as a result, successfully reduces the frequency of collision hazards. 
The interoperability concept described in the paper focuses primarily on facilitating the transition from a late-stage DAA encounter (where a loss of well clear is imminent) to a TCAS II corrective Resolution Advisory (RA), which requires pilot compliance with the directive RA guidance (e.g., climb, descend) within five seconds of its issuance. The interoperability concept was presented to 10 participants (6 active UAS pilots and 4 active commercial pilots) in a medium-fidelity, human-in-the-loop simulation designed to stress different aspects of the DAA and TCAS II systems. Pilot response times, compliance rates and subjective assessments were recorded. Results indicated that pilots exhibited comprehension of, and appropriate prioritization within, the DAA-TCAS II combined alert structure. Pilots demonstrated a high rate of compliance with TCAS II RAs and were also seen to respond to corrective RAs within the five-second requirement established for manned aircraft. The DAA system presented under test was also shown to be effective in supporting pilots' ability to maintain well clear in the overwhelming majority of cases in which pilots had sufficient time to respond. The paper ends with a discussion of next steps for research on integrating UAS into civil airspace.
Keywords: detect and avoid, interoperability, traffic alert and collision avoidance system (TCAS II), unmanned aircraft systems
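The compliance measure reported above (responses initiated within five seconds of RA issuance) reduces to a simple metric. A minimal sketch with an illustrative function name and made-up response times, not the study's analysis code:

```python
def ra_compliance_rate(response_times_s, limit_s=5.0):
    """Fraction of pilot responses initiated within the corrective RA
    time limit (5 s for manned aircraft, per the abstract).
    response_times_s: seconds from RA issuance to pilot response."""
    within = sum(t <= limit_s for t in response_times_s)
    return within / len(response_times_s)

# Two of three hypothetical responses fall inside the 5 s window
print(ra_compliance_rate([2.0, 4.5, 6.0]))
```

In the study this rate was computed per pilot and per scenario alongside raw response times and subjective assessments.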
Procedia PDF Downloads 272
211 Development and Evaluation of Economical Self-cleaning Cement
Authors: Anil Saini, Jatinder Kumar Ratan
Abstract:
Nowadays, a key issue for the scientific community is to devise innovative technologies for the sustainable control of urban pollution. In cities, a large surface area of masonry structures, buildings, and pavements is exposed to the open environment, which may be utilized for the control of air pollution if it is built from photocatalytically active cement-based constructional materials such as concrete, mortars, paints, and blocks. The photocatalytically active cement is formulated by incorporating a photocatalyst in the cement matrix, and such cement is generally known as self-cleaning cement. In the literature, self-cleaning cement has been synthesized by incorporating nanosized TiO₂ (n-TiO₂) as a photocatalyst in the formulation of the cement. However, the utilization of n-TiO₂ for the formulation of self-cleaning cement has the drawbacks of nano-toxicity, higher cost, and agglomeration as far as commercial production and applications are concerned. The use of microsized TiO₂ (m-TiO₂) in place of n-TiO₂ for the commercial manufacture of self-cleaning cement could avoid the above-mentioned problems. However, m-TiO₂ is less photocatalytically active than n-TiO₂ due to its smaller surface area, higher band gap, and increased recombination rate. As such, the use of m-TiO₂ in the formulation of self-cleaning cement may lead to a reduction in photocatalytic activity, thus reducing the self-cleaning, depolluting, and antimicrobial abilities of the resultant cement material. Improving the photoactivity of m-TiO₂-based self-cleaning cement is therefore the key issue for its practical application. The current work proposes the use of surface-fluorinated m-TiO₂ for the formulation of self-cleaning cement to enhance its photocatalytic activity.
The calcined dolomite, a constructional material, has also been utilized as a co-adsorbent along with the surface-fluorinated m-TiO₂ in the formulation of self-cleaning cement to enhance the photocatalytic performance. The surface-fluorinated m-TiO₂, calcined dolomite, and the formulated self-cleaning cement were characterized using diffuse reflectance spectroscopy (DRS), X-ray diffraction analysis (XRD), field emission-scanning electron microscopy (FE-SEM), energy dispersive X-ray spectroscopy (EDS), X-ray photoelectron spectroscopy (XPS), scanning electron microscopy (SEM), BET (Brunauer–Emmett–Teller) surface area, and energy dispersive X-ray fluorescence spectrometry (EDXRF). The self-cleaning property of the as-prepared self-cleaning cement was evaluated using the methylene blue (MB) test. The depolluting ability of the formulated self-cleaning cement was assessed through a continuous NOx removal test. The antimicrobial activity of the self-cleaning cement was appraised using the zone of inhibition method. The as-prepared self-cleaning cement, obtained by uniform mixing of 87% clinker, 10% calcined dolomite, and 3% surface-fluorinated m-TiO₂, showed a remarkable self-cleaning property by providing 53.9% degradation of the coated MB dye. The self-cleaning cement also exhibited a noteworthy depolluting ability by removing 5.5% of NOx from the air. The inactivation of B. subtilis bacteria in the presence of light confirmed the significant antimicrobial property of the formulated self-cleaning cement. The self-cleaning, depolluting, and antimicrobial results are attributed to the synergetic effect of surface-fluorinated m-TiO₂ and calcined dolomite in the cement matrix. The present study opens a route for further research on the facile and economical formulation of self-cleaning cement. Keywords: microsized-titanium dioxide (m-TiO₂), self-cleaning cement, photocatalysis, surface-fluorination
Procedia PDF Downloads 170
210 Conceptualizing a Biomimetic Fablab Based on the Makerspace Concept and Biomimetics Design Research
Authors: Petra Gruber, Ariana Rupp, Peter Niewiarowski
Abstract:
This paper presents a concept for a biomimetic fablab as a physical space for education, research and development of innovation inspired by nature. Biomimetics as a discipline finds increasing recognition in academia and has started to be institutionalized at universities in programs and centers. The Biomimicry Research and Innovation Center was founded in 2012 at the University of Akron as an interdisciplinary venture for the advancement of innovation inspired by nature and is part of a larger community fostering the approach of biomimicry in the Great Lakes region of the US. With 30 faculty members, the center has representatives from the Colleges of Arts and Sciences (e.g., biology, chemistry, geoscience, and philosophy), Engineering (e.g., mechanical, civil, and biomedical), Polymer Science, and the Myers School of Arts. A platform for training PhDs in Biomimicry (17 students currently enrolled) is co-funded by educational institutions and industry partners. Research at the center touches on many areas but currently leans towards materials and structures, with highlights being materials based on principles found in spider silk and gecko attachment mechanisms. As biomimetics is also a novel scientific discipline, there is little standardisation in programming and the equipment of research facilities. As a field targeting innovation, design and prototyping processes are fundamental parts of the developments. For experimental design and prototyping, MIT's maker space concept seems to fit the requirements well, but facilities need to be more specialised in terms of accessing biological systems and knowledge, and in specific research, production or conservation requirements. For the education and research facility BRIC, we develop the concept of a biomimicry fablab that ties into the existing maker space concept and creates the setting for the interdisciplinary research and development carried out in the program.
The concept takes the process of biomimetics as a guideline to define core activities that shall be enhanced by the allocation of specific spaces and tools. The limitations of such a facility and the intersections with further specialised labs housed in the classical departments are of special interest. As a preliminary proof of concept, two biomimetic design courses carried out in 2016 are investigated in terms of the tools and infrastructure needed. The spring course was a problem-based biomimetic design challenge in collaboration with an innovation company interested in product design for assisted living and medical devices. The fall course was a solution-based biomimetic design course focusing on order and hierarchy in nature, with the goal of finding meaningful translations into art and technology. The paper describes the background of the BRIC center, identifies and discusses the process of biomimetics, evaluates the classical maker space concept and explores how these elements can shape the proposed research facility of a biomimetic fablab by examining the two design courses held in 2016. Keywords: biomimetics, biomimicry, design, biomimetic fablab
Procedia PDF Downloads 295
209 Hydrocarbons and Diamondiferous Structures Formation in Different Depths of the Earth Crust
Authors: A. V. Harutyunyan
Abstract:
Investigations of rocks at high pressures and temperatures have revealed the intervals of change in seismic waves and density, as well as some of the processes taking place in the rocks. In serpentinized rocks, abrupt changes in seismic waves and density have been recorded as a consequence of dehydration. Hydrogen-bearing components are released and combine with carbon-bearing components; as a result, hydrocarbons are formed, and the investigated samples melt. Geofluids and hydrocarbons then migrate into the upper horizons of the Earth crust along deep faults, where they differentiate and accumulate in the jointed rocks of the faults and in layers with collecting properties. Under the majority of hydrocarbon deposits, magmatic centers and deep faults are recorded at a certain depth. The results on serpentinized rocks, together with numerous geological-geophysical data, indicate that hydrocarbons are mainly formed both in the offshore parts of oceans and at different depths of the continental crust. Experiments have also shown that the dehydration of serpentinized rocks is accompanied by an explosion, with an instantaneous increase in pressure and temperature and melting of the studied rocks. According to numerous publications, hydrocarbons and diamonds are formed in the upper part of the mantle, at depths of 200-400 km, and rise to the upper horizons of the Earth crust through narrow channels as a consequence of geodynamic processes. However, the genesis of metamorphogenic diamonds, and of diamonds found in lava streams formed within the Earth crust, remains unclear. Since super-high pressures and temperatures arise at dehydration, it is assumed that diamond crystals are formed from carbon-containing components present in the dehydration zone. It can also be assumed that, besides the explosion at dehydration, secondary explosions of the released hydrogen take place.
The process is naturally accompanied by seismic phenomena, causing earthquakes of different magnitudes on the surface. As for diamondiferous kimberlites, it is well known that the majority of them are located within ancient shields and platforms, not necessarily connected with deep faults. Kimberlites are formed where dehydrated masses lie at shallow depth in the Earth crust. Kimberlites are younger than the ancient rocks they contain, which include serpentinized basites and ultrabasites, relicts of the paleo-oceanic crust. Sometimes diamonds containing water and hydrocarbons are found, indicating their simultaneous genesis. Thus, according to the new concept put forward, geofluids, hydrocarbons, and diamonds are formed simultaneously from serpentinized rocks as a consequence of their dehydration at different depths of the Earth crust. Based on the proposed concept, we suggest discussing the following: the genesis of gigantic hydrocarbon deposits located in the offshore areas of oceans (North American, Gulf of Mexico, Cuanza-Cameroonian, East Brazilian, etc.) as well as in the continental parts of different continents (Canadian-Arctic, Caspian, East Siberian, etc.); and the genesis of metamorphogenic diamonds and of diamonds in lava streams (Guinea-Liberian, Kokchetav, Canadian, Kamchatka-Tolbachik, etc.). Keywords: dehydration, diamonds, hydrocarbons, serpentinites
Procedia PDF Downloads 340
208 Case Report: Ocular Helminth – In Unusual Site (Lens)
Authors: Chandra Shekhar Majumder, Shamsul Haque, Khondaker Anower Hossain, Rafiqul Islam
Abstract:
Introduction: Ocular helminths are parasites that infect the eye or its adnexa. They can be either motile worms or sessile worms that form cysts. These parasites require two hosts for their life cycle, a definitive host (usually a human) and an intermediate host (usually an insect). While there have been reports of ocular helminths infecting various structures of the eye, including the anterior chamber and subconjunctival space, there is no previous record of such a case involving the lens. Research Aim: The aim of this case report is to present a rare case of ocular helminth infection in the lens and to contribute to the understanding of this unusual site of infection. Methodology: This study is a case report presenting the details and findings of an 80-year-old retired policeman who presented with severe pain, redness, and vision loss in the left eye. The examination revealed the presence of a thread-like helminth in the lens. The data for this case report were collected through clinical examination and the medical records of the patient. The findings are presented in a descriptive manner; no statistical analysis was conducted. Case report: An 80-year-old retired policeman attended the OPD of Faridpur Medical College Hospital with complaints of severe pain, redness and gross dimness of vision of the left eye for 5 days. He had a history of diabetes mellitus and hypertension for 3 years. On examination, L/E visual acuity was PL only; moderate ciliary congestion, KP 2+, cells 2+ and posterior synechia from the 5 to 7 o'clock position were found. The lens was opaque. A thread-like helminth was found under the anterior surface of the lens. The worm was moving and changing its position during examination. On examination of R/E, visual acuity was 6/36 unaided, 6/18 with pinhole. There was lenticular opacity. Slit-lamp and fundus examinations were within normal limits. The patient was admitted to Faridpur Medical College Hospital. Diabetes mellitus was controlled with insulin.
ICCE with PI was done on the same day of admission under depomedrol coverage. The helminth was recovered from the lens. It was thread-like, about 5 to 6 mm in length, 1 mm in width and pinkish in colour. At follow-up after 7 days, VA was HM; mild ciliary congestion and a few KPs and cells were present. The media was hazy due to vitreous opacity. The worm was sent to the Department of Parasitology, NIPSOM, Dhaka for identification. Theoretical Importance: This case report contributes to the existing literature on ocular helminth infections by reporting a unique case involving the lens. It highlights the need for further research to understand the mechanism of entry of helminths into the lens. Conclusion: To the best of our knowledge, this is the first reported case of ocular helminth infection in the lens. The presence of the helminth in the lens raises interesting questions regarding its pathogenesis and entry mechanism. Further study and research are needed to explore these aspects. Ophthalmologists and parasitologists should be aware of the possibility of ocular helminth infections in unusual sites like the lens. Keywords: helminth, lens, ocular, unusual
Procedia PDF Downloads 45
207 A Comparative Assessment of Information Value, Fuzzy Expert System Models for Landslide Susceptibility Mapping of Dharamshala and Surrounding, Himachal Pradesh, India
Authors: Kumari Sweta, Ajanta Goswami, Abhilasha Dixit
Abstract:
Landslide is a geomorphic process that plays an essential role in hill-slope evolution and long-term landscape development. But its abrupt nature and the associated catastrophic forces can have undesirable socio-economic impacts, like substantial economic losses, fatalities, and ecosystem, geomorphologic and infrastructure disturbances. The estimated fatality rate is approximately 1 person/100 sq. km, and the average economic loss is more than 550 crores/year in the Himalayan belt due to landslides. This study presents a comparative performance of a statistical bivariate method and a machine learning technique for landslide susceptibility mapping in and around Dharamshala, Himachal Pradesh. The final produced landslide susceptibility maps (LSMs) with better accuracy could be used for land-use planning to prevent future losses. Dharamshala, a part of the North-western Himalaya, is one of the fastest-growing tourism hubs, with a total population of 30,764 according to the 2011 census, and is among the hundred Indian cities to be developed as smart cities under the PM’s Smart Cities Mission. A total of 209 landslide locations were identified using high-resolution linear imaging self-scanning (LISS IV) data. The thematic maps of parameters influencing landslide occurrence were generated using remote sensing and other ancillary data in the GIS environment. The landslide causative parameters used in the study are slope angle, slope aspect, elevation, curvature, topographic wetness index, relative relief, distance from lineaments, land use land cover, and geology. LSMs were prepared using the information value (Info Val) and Fuzzy Expert System (FES) models. Info Val is a statistical bivariate method, in which information values were calculated as the ratio of the landslide pixels per factor class (Si/Ni) to the total landslide pixels per parameter (S/N).
Using these information values, all parameters were reclassified and then summed in GIS to obtain the landslide susceptibility index (LSI) map. The FES method is a machine learning technique based on a ‘mean and neighbour’ strategy for the construction of the fuzzifier (input) and defuzzifier (output) membership function (MF) structures, with the frequency ratio (FR) method used for formulating if-then rules. Two types of membership structures were utilized: Bell-Gaussian (BG) and Trapezoidal-Triangular (TT). LSI maps for BG and TT were obtained by applying the membership functions and if-then rules in MATLAB. The final LSMs were spatially and statistically validated. The validation results showed that in terms of accuracy, Info Val (83.4%) is better than BG (83.0%) and TT (82.6%), whereas, in terms of spatial distribution, BG is best. Hence, considering both statistical and spatial accuracy, BG is the most accurate one. Keywords: bivariate statistical techniques, BG and TT membership structure, fuzzy expert system, information value method, machine learning technique
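The information value computation described in this abstract, the ratio (Si/Ni)/(S/N) per factor class followed by reclassification and summation into an LSI map, lends itself to a compact illustration. The following is a minimal Python sketch on synthetic rasters; the array names and toy values are ours, not the study's data, and some implementations additionally take the natural log of the ratio.

```python
import numpy as np

def information_values(factor, landslides):
    """Information value per factor class, as described in the abstract:
    IV(class) = (Si / Ni) / (S / N), where Si = landslide pixels in the
    class, Ni = pixels in the class, S = total landslide pixels, and
    N = total pixels. (Many published variants take ln() of this ratio.)"""
    S = landslides.sum()
    N = landslides.size
    iv = {}
    for cls in np.unique(factor):
        mask = factor == cls
        Si = landslides[mask].sum()
        Ni = mask.sum()
        iv[cls] = (Si / Ni) / (S / N) if Si else 0.0
    return iv

def lsi(factors, landslides):
    """Reclassify each factor raster by its information values and sum
    the reclassified layers to obtain the landslide susceptibility
    index (LSI) raster."""
    total = np.zeros(landslides.shape)
    for factor in factors:
        iv = information_values(factor, landslides)
        total += np.vectorize(iv.get)(factor)
    return total

# Tiny synthetic example: one 4x4 factor raster with classes 1 and 2,
# and a binary landslide inventory raster (1 = mapped landslide pixel).
slope_class = np.array([[1, 1, 2, 2]] * 4)
inventory = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 0],
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]])
print(information_values(slope_class, inventory))
```

In a real workflow each causative parameter (slope angle, aspect, curvature, etc.) would be one reclassified raster, and the summed LSI raster would then be binned into susceptibility classes for validation.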
Procedia PDF Downloads 127
206 Algorithmic Obligations: Proactive Liability for AI-Generated Content and Copyright Compliance
Authors: Aleksandra Czubek
Abstract:
As AI systems increasingly shape content creation, existing copyright frameworks face significant challenges in determining liability for AI-generated outputs. Current legal discussions largely focus on who bears responsibility for infringing works, be it developers, users, or entities benefiting from AI outputs. This paper introduces a novel concept of algorithmic obligations, proposing that AI developers be subject to proactive duties that ensure their models prevent copyright infringement before it occurs. Building on principles of obligations law traditionally applied to human actors, the paper suggests a shift from reactive enforcement to proactive legal requirements. AI developers would be legally mandated to incorporate copyright-aware mechanisms within their systems, turning optional safeguards into enforceable standards. These obligations could vary in implementation across international, EU, UK, and U.S. legal frameworks, creating a multi-jurisdictional approach to copyright compliance. This paper explores how the EU’s existing copyright framework, exemplified by the Copyright Directive (2019/790), could evolve to impose a duty of foresight on AI developers, compelling them to embed mechanisms that prevent infringing outputs. By drawing parallels to GDPR’s “data protection by design,” a similar principle could be applied to copyright law, where AI models are designed to minimize copyright risks. In the UK, post-Brexit text and data mining exemptions are seen as pro-innovation but pose risks to copyright protections. This paper proposes a balanced approach, introducing algorithmic obligations to complement these exemptions. AI systems benefiting from text and data mining provisions should integrate safeguards that flag potential copyright violations in real time, ensuring both innovation and protection. In the U.S., where copyright law focuses on human-centric works, this paper suggests an evolution toward algorithmic due diligence. 
AI developers would have a duty similar to product liability, ensuring that their systems do not produce infringing outputs, even if the outputs themselves cannot be copyrighted. This framework introduces a shift from post-infringement remedies to preventive legal structures, where developers actively mitigate risks. The paper also breaks new ground by addressing obligations surrounding the training data of large language models (LLMs). Currently, training data is often treated under exceptions such as the EU’s text and data mining provisions or U.S. fair use. However, this paper proposes a proactive framework where developers are obligated to verify and document the legal status of their training data, ensuring it is licensed or otherwise cleared for use. In conclusion, this paper advocates for an obligations-centered model that shifts AI-related copyright law from reactive litigation to proactive design. By holding AI developers to a heightened standard of care, this approach aims to prevent infringement at its source, addressing both the outputs of AI systems and the training processes that underlie them. Keywords: ip, technology, copyright, data, infringement, comparative analysis
Procedia PDF Downloads 18
205 Photoluminescence of Barium and Lithium Silicate Glasses and Glass Ceramics Doped with Rare Earth Ions
Authors: Augustas Vaitkevicius, Mikhail Korjik, Eugene Tretyak, Ekaterina Trusova, Gintautas Tamulaitis
Abstract:
Silicate materials are widely used as luminescent materials in both amorphous and crystalline phases. Lithium silicate glass is popular for making neutron-sensitive scintillation glasses. Cerium-doped single crystalline silicates of rare earth elements and yttrium have been demonstrated to be good scintillation materials. Due to their high thermal and photo-stability, silicate glass ceramics are supposed to be suitable materials for producing light converters for high power white light emitting diodes. In this report, the influence of glass composition and crystallization on photoluminescence (PL) of different silicate glasses was studied. Barium (BaO-2SiO₂) and lithium (Li₂O-2SiO₂) glasses were under study. Cerium, dysprosium, erbium and europium ions as well as their combinations were used for doping. The influence of crystallization was studied after transforming the doped glasses into glass ceramics by heat treatment in the temperature range of 550-850 degrees Celsius for 1 hour. The study was carried out by comparing the PL spectra, spatial distributions of PL parameters and quantum efficiency in the samples under study. The PL spectra and the spatial distributions of their parameters were obtained using confocal PL microscopy. A WITec Alpha300 S confocal microscope coupled with an air-cooled CCD camera was used. A CW laser diode emitting at 405 nm was used for excitation. The spatial resolution was sub-micrometer in plane and ~1 micrometer perpendicular to the sample surface. An integrating sphere with a xenon lamp coupled with a monochromator was used to measure the external quantum efficiency. All measurements were performed at room temperature. Chromatic properties of the light emission from the glasses and glass ceramics have been evaluated. We observed that the quantum efficiency of the glass ceramics is higher than that of the corresponding glass.
The investigation of spatial distributions of PL parameters revealed that heat treatment of the glasses leads to a decrease in sample homogeneity. In the case of BaO-2SiO₂:Eu, 10-micrometer-long needle-like objects are formed when the glass is transformed into glass ceramics. The comparison of PL spectra from within and outside the needle-like structures reveals that the ratio between the intensities of the PL bands associated with Eu²⁺ and Eu³⁺ ions is larger in the bright needle-like structures. This indicates a higher degree of crystallinity in the needle-like objects. We observed that the spectral positions of the PL bands are the same in the background and the needle-like areas, indicating that heat treatment imposes no significant change on the valence state of the europium ions. The evaluation of chromatic properties confirms the applicability of the glasses under study for the fabrication of white light sources with high thermal stability. The ability to combine barium and lithium glass matrixes and doping by Eu, Ce, Dy, and Tb enables optimization of chromatic properties. Keywords: glass ceramics, luminescence, phosphor, silicate
Procedia PDF Downloads 317
204 Segmentation along the Strike-slip Fault System of the Chotts Belt, Southern Tunisia
Authors: Abdelkader Soumaya, Aymen Arfaoui, Noureddine Ben Ayed, Ali Kadri
Abstract:
The Chotts belt represents the southernmost folded structure in the Tunisian Atlas domain. It is dominated by inherited deep extensional E-W trending fault zones, which were reactivated as strike-slip faults during the Cenozoic compression. By examining geological maps at different scales and based on fieldwork data, we propose new structural interpretations for the geometries and fault kinematics in the Chotts chain. A set of ENE-WSW right-lateral en echelon folds, with curved shapes and steeply inclined southern limbs, is visible in the map view of this belt. These asymmetric tight anticlines are affected by E-W trending fault segments linked by local bends and stepovers. The kinematic indicators revealed along one of these E-W striated faults (Tafferna segment), such as breccias and gently inclined slickenlines (N094, 80N, 15°W pitch angles), show direct evidence of dextral strike-slip movement. The stress tensors calculated from the corresponding fault-slip data reveal an overall strike-slip tectonic regime with a reverse component and a NW-trending sub-horizontal σ1 axis ranging between N130 and N150. From west to east, we distinguished several types of structures along the segmented dextral fault system of the Chotts Range. The NE-SW striking fold-thrust belt (~25 km long) between two continuously linked E-W fault segments (NW of Tozeur town) has been interpreted as a local restraining bend. The central part of the Chotts chain is occupied by the ENE-striking Ksar Asker anticlines (Taferna, Torrich, and Sif Laham), which are truncated by a set of E-W strike-slip fault segments. Further east, the fault segments of Hachichina and Sif Laham connect across the NW-verging asymmetric fold-thrust system of Bir Oum Ali, which can be interpreted as a left-stepping contractional bend (~20 km long).
The eastern part of the Chotts belt corresponds to an array of subparallel E-W oriented fault segments (i.e., Beidha, Bouloufa, El Haidoudi-Zemlet El Beidha) with similar lengths (around 10 km). Each of these individual separated segments is associated with curved ENE-trending en echelon right-stepping anticlines. These folds are affected by a set of conjugate R and R′ shear-type faults indicating a dextral strike-slip motion. In addition, the relay zones between these E-W overstepping fault segments define local releasing stepovers dominated by NW-SE subsidiary faults. Finally, the Chotts chain provides well-exposed examples of strike-slip tectonics along E-W distributed fault segments. Each fault zone shows a typical strike-slip architecture, including parallel fault segments connecting via local stepovers or bends. Our new structural interpretations for this region reveal a great influence of the E-W deep fault segments on regional tectonic deformation and the stress field during the Cenozoic shortening. Keywords: chotts belt, tunisian atlas, strike-slip fault, stepovers, fault segments
Procedia PDF Downloads 69
203 Post Harvest Fungi Diversity and Level of Aflatoxin Contamination in Stored Maize: Cases of Kitui, Nakuru and Trans-Nzoia Counties in Kenya
Authors: Gachara Grace, Kebira Anthony, Harvey Jagger, Wainaina James
Abstract:
Aflatoxin contamination of maize in Africa poses a major threat to food security and the health of many African people. In Kenya, aflatoxin contamination of maize is high due to environmental, agricultural and socio-economic factors. Many studies have been conducted to understand the scope of the problem, especially at the pre-harvest level. This research was carried out to gather scientific information on the fungal population, diversity and aflatoxin levels during the post-harvest period. The study was conducted in three geographical locations: Kitui, Kitale and Nakuru. Samples were collected from farmers' storage structures and transported to the Biosciences eastern and central Africa (BecA) hub laboratories at the International Livestock Research Institute (ILRI). Mycoflora was recovered using the direct plating method. A total of five fungal genera (Aspergillus, Penicillium, Fusarium, Rhizopus and Byssochlamys spp.) were isolated from the stored maize samples. The most common fungal species isolated from the three study sites was A. flavus at 82.03%, followed by A. niger and F. solani at 49% and 26%, respectively. The aflatoxin-producing fungus A. flavus was recovered in 82.03% of the samples. Aflatoxin levels were analysed both in the maize samples and in vitro. Most of the A. flavus isolates recorded a high level of aflatoxin when analysed for the presence of aflatoxin B1 using ELISA. In Kitui, all the samples (100%) had aflatoxin levels above 10 ppb, with a total aflatoxin mean of 219.2 ppb. In Kitale, only 3 samples (n=39) had aflatoxin levels below 10 ppb, while in Nakuru, the total aflatoxin mean level was 239.7 ppb. When individual samples were analysed using the Vicam fluorometer method, aflatoxin analysis revealed that most of the samples (58.4%) had been contaminated. The means were significantly different (p = 0.00 < 0.05) in all three locations. Genetic relationships of A.
flavus isolates were determined using 13 Simple Sequence Repeat (SSR) markers. The results were used to generate a phylogenetic tree using the DARwin5 software. A total of 5 distinct clusters were revealed among the genotypes. The isolates appeared to cluster separately according to geographical location. Principal Coordinates Analysis (PCoA) of the genetic distances among the 91 A. flavus isolates explained over 50.3% of the total variation when two coordinates were used to cluster the isolates. Analysis of Molecular Variance (AMOVA) showed a high variation of 87% within populations and 13% among populations. This research has shown that A. flavus is the main fungal species infecting maize grains in Kenya. The influence of aflatoxins on human populations in Kenya demonstrates a clear need for tools to manage contamination of locally produced maize. Food basket surveys for aflatoxin contamination should be conducted on a regular basis. This would assist in obtaining reliable data on aflatoxin incidence in different food crops and would go a long way in defining control strategies for this menace. Keywords: aflatoxin, Aspergillus flavus, genotyping, Kenya
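The PCoA step summarized above, reporting the fraction of variation explained by the first two coordinates, is classical scaling on a distance matrix. The sketch below is a minimal Python illustration on a synthetic 4-isolate distance matrix; the matrix, function name and values are ours, standing in for the study's SSR-derived distances among 91 isolates.

```python
import numpy as np

def pcoa(D, k=2):
    """Classical principal coordinates analysis: double-centre the
    squared distance matrix, eigendecompose, and return the first k
    coordinates plus the fraction of total (positive-eigenvalue)
    variation they explain."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1]             # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    pos = vals > 1e-10                         # keep positive eigenvalues only
    coords = vecs[:, :k] * np.sqrt(np.maximum(vals[:k], 0.0))
    explained = vals[:k].sum() / vals[pos].sum()
    return coords, explained

# Synthetic pairwise genetic distances among 4 isolates (symmetric,
# zero diagonal): isolates 1-2 and 3-4 form two loose clusters.
D = np.array([[0.0, 0.2, 0.7, 0.6],
              [0.2, 0.0, 0.6, 0.7],
              [0.7, 0.6, 0.0, 0.3],
              [0.6, 0.7, 0.3, 0.0]])
coords, explained = pcoa(D, k=2)
print(f"first two coordinates explain {explained:.1%} of variation")
```

On real SSR data, the distance matrix would typically be built from allele-sharing or similar genetic distances, and the two-cluster separation would appear along the first coordinate axis, as it does for this toy matrix.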
Procedia PDF Downloads 277
202 Floating Building Potential for Adaptation to Rising Sea Levels: Development of a Performance Based Building Design Framework
Authors: Livia Calcagni
Abstract:
Most of the largest cities in the world are located in areas that are vulnerable to coastal erosion and flooding, both linked to climate change and rising sea levels (RSL). Nevertheless, more and more people are moving to these vulnerable areas as cities keep growing. Architects, engineers and policy makers are called to rethink the way we live and to provide timely and adequate responses, not only by investigating measures to improve the urban fabric, but also by developing strategies capable of planning change and exploring unusual and resilient frontiers of living, such as floating architecture. Since the beginning of the 21st century, we have seen a dynamic growth of water-based architecture. At the same time, the shortage of land available for urban development has also led to reclaiming the seabed or building floating structures. In light of these considerations, the time is ripe to consider floating architecture not only as a full-fledged building typology but especially as a full-fledged adaptation solution for RSL. Currently, there is no global international legal framework for urban development on water, and there is no structured performance-based building design (PBBD) approach for floating architecture in most countries, let alone national regulatory systems. Thus, the research intends to identify the technological, morphological, functional, economic, and managerial requirements that must be considered in the development of the PBBD framework, conceived as a meta-design tool. As floating urban development is most likely to take place as an extension of coastal areas, the needs and design criteria are definitely more similar to those of the urban environment than of the offshore industry. Therefore, the identification and categorization of parameters takes urban-architectural guidelines and regulations as the starting point, taking the missing aspects, such as hydrodynamics, from the offshore and shipping regulatory frameworks.
This study is carried out through an evidence-based assessment of performance guidelines and regulatory systems that are effective in different countries around the world, addressing on-land and on-water architecture as well as the offshore and shipping industries. It involves evidence-based research and logical argumentation methods. Overall, this paper highlights how inhabiting water is not only a viable response to the problem of RSL, thus a resilient frontier for urban development, but also a response to energy insecurity, clean water and food shortages, environmental concerns and urbanization, in line with Blue Economy principles and the Agenda 2030. Moreover, the discipline of architecture is presented as a fertile field for investigating solutions to cope with climate change and its effects on life safety and quality. Future research involves the development of a decision support system as an information tool to guide the user through the decision-making process, emphasizing the logical interaction between the different potential choices, based on the PBBD. Keywords: adaptation measures, floating architecture, performance based building design, resilient architecture, rising sea levels
Procedia PDF Downloads 86201 Neighborhood-Scape as a Methodology for Enhancing Gulf Region Cities' Quality of Life: Case of Doha, Qatar
Authors: Eman AbdelSabour
Abstract:
Sustainability is increasingly being considered a critical aspect in shaping the urban environment, serving as a basis for innovation-driven development in global urban growth. Currently, different models and structures shape how the criteria that define a sustainable city are interpreted. There is a collective need to shift the growth path onto a more durable one by presenting different suggestions regarding multi-scale initiatives. The global rise in urbanization has led to increased demand and pressure for better urban planning choices and scenarios for a more sustainable urban alternative. The trend toward increasingly sustainable urban development (SUD) has prompted the need for an assessment tool at the urban scale. The neighborhood scale is being addressed by a growing research community, since it seems to be a pertinent scale through which economic, environmental, and social impacts can be addressed. Although neighborhood design is a comparatively old practice, it was only in the initial years of the 21st century that environmentalists and planners started developing sustainability assessment at the neighborhood level. Through this, urban reality can be considered at a larger scale, whereby themes beyond the size of a single building can be addressed, while the scale remains small enough that concrete measures can be analyzed. The neighborhood assessment tool has a crucial role in helping neighborhood sustainability performance approaches fulfill objectives through a set of themes and criteria. These tools are also known as neighborhood assessment tools, district assessment tools, and sustainable community rating tools. The primary focus of research has been on sustainability from the economic and environmental aspects, whereas social and cultural issues are rarely addressed. Therefore, this research is based on Doha, Qatar, and the current urban conditions of its neighborhoods are discussed in this study. 
The research problem focuses on spatial features in relation to socio-cultural aspects. This study is outlined in three parts: the first section comprises a review of the latest use of wellbeing assessment methods to enhance the decision process of retrofitting physical features of the neighborhood. The second section discusses urban settlement development, regulations and the decision-making process. An analysis of urban development policy with reference to neighborhood development is also discussed in this section. Moreover, it includes a historical review of the urban growth of the neighborhoods as atoms of the city system present in Doha. The last part involves developing quantified indicators of subjective wellbeing through a participatory approach. Additionally, GIS will be utilized as a visualization tool for the Quality of Life (QoL) aspects that need to be developed in the neighborhood area, as an assessment approach. Envisaging the present QoL situation in Doha's neighborhoods is a step toward improving current conditions; neighborhood function involves many day-to-day activities of the residents, due to which these areas are considered dynamic. Keywords: neighborhood, subjective wellbeing, decision support tools, Doha, retrofitting
Procedia PDF Downloads 138200 Exploring Bio-Inspired Catecholamine Chemistry to Design Durable Anti-Fungal Wound Dressings
Authors: Chetna Dhand, Venkatesh Mayandi, Silvia Marrero Diaz, Roger W. Beuerman, Seeram Ramakrishna, Rajamani Lakshminarayanan
Abstract:
Insect cuticle sclerotization, the remarkable substrate-independent bioadhesion of mussels, and the tanning of leather are some catechol(amine)-mediated natural processes. Chemical considerations point toward a mechanism initiated by the formation of quinone moieties from the respective catechol(amine)s via oxidation; nucleophilic addition of amino acids/proteins/peptides to these quinones then leads to highly strong, cross-linked and water-resistant proteinaceous structures. Inspired by this remarkable catechol(amine) chemistry towards amino acids/proteins/peptides, we attempted to design highly stable and water-resistant antifungal wound dressing mats with exceptional durability, using collagen (protein), dopamine (catecholamine) and antifungal drugs (Amphotericin B and Caspofungin) as the key materials. The electrospinning technique has been used to fabricate the desired nanofibrous mats, including Collagen (COLL), COLL/Dopamine (COLL/DP) and calcium-incorporated COLL/DP (COLL-DP-Ca2+). The prepared protein-based scaffolds have been studied by microscopic investigations (SEM, TEM, and AFM), structural analysis (FT-IR), mechanical properties, water wettability characteristics and aqueous stability. The biocompatibility of these scaffolds has been analyzed on dermal fibroblast cells using the MTS assay, Cell Tracker™ Green CMFDA and confocal imaging. As the best-performing sample, the COLL-DP-Ca2+ scaffold was selected for incorporating two antifungal drugs, namely Caspofungin (peptide-based) and Amphotericin B (non-peptide-based). The antifungal efficiency of the designed mats has been evaluated against eight diverse fungal strains employing different microbial assays, including disc diffusion, cell-viability assays and time-kill kinetics. To confirm the durability of these mats, in terms of their antifungal activity, drug leaching studies have been performed and monitored using the disc diffusion assay each day. 
An ex vivo fungal infection model has also been developed and utilized to validate the antifungal efficacy of the designed wound dressings. The results clearly reveal dopamine-mediated crosslinking within the COLL-antifungal scaffolds, which leads to highly stable, mechanically tough, biocompatible wound dressings with zones of inhibition of ≥ 2 cm for almost all the investigated fungal strains. Leaching studies and the ex vivo model have confirmed the durability of these wound dressings for more than 3 weeks and certified their suitability for commercialization. A model has also been proposed to elucidate the chemical mechanism involved in the development of these antifungal wound dressings with exceptional robustness. Keywords: catecholamine chemistry, electrospinning technique, antifungals, wound dressings, collagen
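As a hedged illustration of how time-kill kinetics of the kind mentioned above are typically summarized (the CFU values below are invented placeholders, not the study's data), viable counts at each time point can be converted to log₁₀ reductions relative to the starting inoculum:

```python
import math

# Illustrative time-kill kinetics summary: compute the log10 reduction in
# viable fungal counts (CFU/mL) relative to the inoculum at each time point.
# All numbers here are made up for demonstration purposes.

def log_reduction(cfu0, cfu_t):
    # guard against zero counts by clamping to a detection limit of 1 CFU/mL
    return math.log10(cfu0) - math.log10(max(cfu_t, 1))

inoculum = 1e6                              # CFU/mL at t = 0 (assumed)
counts = {2: 5e5, 4: 1e4, 8: 2e2, 24: 0}    # hours -> surviving CFU/mL (assumed)

for t, cfu in counts.items():
    print(f"{t:>2} h: {log_reduction(inoculum, cfu):.1f} log10 reduction")
```

A reduction of ≥ 3 log₁₀ (99.9% kill) is the threshold commonly taken as fungicidal activity in such assays.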
Procedia PDF Downloads 377199 The United States Film Industry and Its Impact on Latin American Identity Rationalizations
Authors: Alfonso J. García Osuna
Abstract:
Background and Significance: The objective of this paper is to analyze the inception and development of identity archetypes in early twentieth-century Latin America, to explore their roots in United States culture, to discuss the influences that came to bear upon Latin Americans as the United States began to export images of standard identity paradigms through its film industry, and to survey how these images evolved and impacted Latin Americans' ideas of national distinctiveness from the early 1900s to the present. Therefore, the general hypothesis of this work is that United States film in many ways influenced national identity patterning in its neighbors, especially in those nations closest to its borders, Cuba and Mexico. Very little research has been done on the social impact of the United States film industry on the country's southern neighbors. From a historical perspective, the US's influence has been examined as the projection of political and economic power; that is to say, American influence is seen as a catalyst to align the forces that the US wants to see wield the power of the State. But the subtle yet powerful cultural influence exercised by film, the eminent medium for exporting ideas and ideals in the twentieth century, has not been significantly explored. Basic Methodologies and Description: Gramscian Marxist theory underpins the study, where it is argued that film, as an exceptional vehicle for culture, is an important site of political and social struggle; in this context, the study aims to show how United States capitalist structures of power not only use brute force to generate and maintain control of overseas markets, but also promote their ideas through artistic products such as film in order to infiltrate the popular culture of subordinated peoples. In this same vein, the work of neo-Marxist theoreticians of popular culture is employed in order to contextualize the agency of subordinated peoples in the process of cultural assimilation. 
Indication of the Major Findings of the Study: The study has yielded much data of interest. The salient finding is that each particular nation receives United States film according to its own particular social and political context, regardless of the amount of pressure exerted upon it. An example of this is the unmistakable dissimilarity between Cuban and Mexican reception of US films. The positive reception given in Cuba to American film has to do with the seamless acceptance of identity paradigms that, for historical reasons discussed herein, were incorporated into the national identity grid quite unproblematically. Such is not the case with Mexico, whose express rejection of identity paradigms offered by the United States reflects not only past conflicts with the northern neighbor, but an enduring recognition of the country's indigenous roots, one that precluded such paradigms. Concluding Statement: This paper is an endeavor to elucidate the ways in which US film contributed to the outlining of Latin American identity blueprints, offering archetypes that would be accepted or rejected according to each nation's particular social requirements, constraints and ethnic makeup. Keywords: film studies, United States, Latin America, identity studies
Procedia PDF Downloads 298198 Relationship of Entrepreneurial Ecosystem Factors and Entrepreneurial Cognition: An Exploratory Study Applied to Regional and Metropolitan Ecosystems in New South Wales, Australia
Authors: Sumedha Weerasekara, Morgan Miles, Mark Morrison, Branka Krivokapic-Skoko
Abstract:
This paper is aimed at exploring the interrelationships among entrepreneurial ecosystem factors and entrepreneurial cognition in regional and metropolitan ecosystems. The entrepreneurial ecosystem factors examined include: culture, infrastructure, access to finance, informal networks, support services, access to universities, and the depth and breadth of the talent pool. Using a multivariate approach, we explore the impact of these ecosystem factors or elements on entrepreneurial cognition. In doing so, the existing bodies of knowledge from the literature on entrepreneurial ecosystems and cognition have been blended to explore the relationship between entrepreneurial ecosystem factors and cognition in a way not hitherto investigated. The concept of the entrepreneurial ecosystem has received increased attention as governments, universities and communities have started to recognize the potential of integrated policies, structures, programs and processes that foster entrepreneurship activities by supporting innovation, productivity and employment growth. The notion of entrepreneurial ecosystems has evolved and grown with the advancement of theoretical research and empirical studies. Incorporating external factors like culture, the political environment, and the economic environment within a single framework enhances the capacity to examine the whole system's functionality and to better understand the interaction of the entrepreneurial actors and factors. The literature on clusters underplays the role of entrepreneurs and entrepreneurial management in creating and co-creating organizations, markets, and supporting ecosystems. Entrepreneurs are only one actor, following a limited set of roles and dependent upon many other factors to thrive. 
As a consequence, entrepreneurs and relevant authorities should be aware of the other actors and factors with which they engage and on which they rely, and make strategic choices to achieve both individual and collective objectives. The study uses a stratified random sampling method to collect survey data from 12 different regions in regional and metropolitan areas of NSW, Australia. A questionnaire was administered online among 512 small and medium enterprise owners operating their businesses in the 12 selected regions. Data were analyzed using descriptive techniques and partial least squares structural equation modeling (PLS-SEM). The findings show that even though there are significant relationships among the entrepreneurial ecosystem factors themselves, there is a weak relationship between most entrepreneurial ecosystem factors and entrepreneurial cognition. In the metropolitan context, the availability of finance and informal networks have the largest impact on entrepreneurial cognition, with culture, infrastructure, and support services having the smallest impact and the talent pool and universities having a moderate impact. Interestingly, in the regional context, culture, availability of finance, and the talent pool have the highest impact on entrepreneurial cognition, while informal networks have the smallest impact and the remaining factors, infrastructure, universities, and support services, have a moderate impact. These findings suggest the need for a location-specific strategy for supporting the development of entrepreneurial cognition. Keywords: academic achievement, colour response card, feedback
Procedia PDF Downloads 143197 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer
Authors: Binder Hans
Abstract:
Cancer is no longer seen solely as a genetic disease in which genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning, which can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have been used successfully to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels such as genome, transcriptome and epigenome is indispensable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite to map the whole-genome mutational, transcriptome and epigenome landscapes of cancer specimens and to elucidate cancer genesis, progression and heterogeneity. Basic challenges and tasks arise 'beyond sequencing' because of the size of the data, their complexity, and the need to search for hidden structures in the data, for knowledge mining to discover biological function, and for systems biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOM) represent one interesting option to tackle these bioinformatics tasks. The SOM method enables recognizing complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options. 
Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the discovery of complex diseases, here gliomas, melanomas and colon cancer, on the molecular level. As an important new challenge, we address the combined portrayal of different omics data, such as genome-wide genomic, transcriptomic and methylomic data. The integrative-omics portrayal approach is based on the joint training of the data, and it provides separate personalized data portraits for each patient and data type, which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view of the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact. Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas
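The SOM portrayal idea described above can be sketched in a few lines of NumPy. This is a minimal, illustrative SOM only (the grid size, learning-rate schedule, neighborhood decay and the toy "expression" data are all assumptions, not the authors' pipeline); each sample's portrait is reduced here to its best-matching unit on a 2-D grid of metagene prototypes:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    """Train a basic self-organizing map on row-vector samples."""
    n_units = grid[0] * grid[1]
    # 2-D grid coordinates of the map units
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    weights = rng.normal(size=(n_units, data.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5   # decaying neighborhood radius
        for x in data:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # grid distances to BMU
            h = np.exp(-d2 / (2 * sigma ** 2))                 # Gaussian neighborhood
            weights += lr * h[:, None] * (x - weights)         # pull units toward x
    return weights, coords

# toy high-dimensional data: 50 samples x 200 "genes"
data = rng.normal(size=(50, 200))
weights, coords = train_som(data)
# each sample's "portrait" position: index of its best-matching unit
portraits = [int(np.argmin(((weights - x) ** 2).sum(axis=1))) for x in data]
print(len(portraits), weights.shape)
```

In the portrayal methods referenced in the abstract, the trained unit weights act as metagenes, and the per-sample activation of all units is rendered as an image rather than collapsed to a single BMU index as in this sketch.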
Procedia PDF Downloads 148196 Wood Dust and Nanoparticle Exposure among Workers during a New Building Construction
Authors: Atin Adhikari, Aniruddha Mitra, Abbas Rashidi, Imaobong Ekpo, Jefferson Doehling, Alexis Pawlak, Shane Lewis, Jacob Schwartz
Abstract:
Building construction in the US involves numerous wooden structures. Wood is routinely used in walls, floor framing, stair framing, and the making of landings in building construction. Cross-laminated timbers are currently being used as construction materials for tall buildings. Numerous workers are involved in these timber-based constructions, and wood dust is one of the most common occupational exposures for them. Wood dust is a complex substance composed of cellulose, polyoses and other substances. According to US OSHA, exposure to wood dust is associated with a variety of adverse health effects among workers, including dermatitis, allergic respiratory effects, mucosal and nonallergic respiratory effects, and cancers. The amount and size of particles released as wood dust differ according to the operations performed on the wood. For example, the shattering of wood during sanding operations produces finer particles than does chipping in the sawing and milling industries. To our knowledge, how the shattering, cutting and sanding of wood and wood slabs during new building construction release fine particles and nanoparticles is largely unknown. The general belief is that the dust generated during timber cutting and sanding tasks consists mostly of large particles. Consequently, little attention has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated the cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study with a newly developed nanoparticle monitor and conventional particle counters. This study was conducted at a large new building construction site in southern Georgia, primarily during the framing of wooden side walls, inner partition walls, and landings. 
Exposure levels of nanoparticles (n = 10) were measured by a newly developed nanoparticle counter (TSI NanoScan SMPS Model 3910) at four different distances (5, 10, 15, and 30 m) from the work location. Other airborne particles (number of particles/m³), including PM2.5 and PM10, were monitored using a 6-channel (0.3, 0.5, 1.0, 2.5, 5.0 and 10 µm) particle counter at 15 m, 30 m, and 75 m distances in both upwind and downwind directions. The mass concentrations of PM2.5 and PM10 (µg/m³) were measured using a DustTrak Aerosol Monitor. Temperature and relative humidity levels were recorded. Wind velocity was measured by a hot wire anemometer. Concentration ranges of nanoparticles of 13 particle sizes were: 11.5 nm: 221 – 816/cm³; 15.4 nm: 696 – 1735/cm³; 20.5 nm: 879 – 1957/cm³; 27.4 nm: 1164 – 2903/cm³; 36.5 nm: 1138 – 2640/cm³; 48.7 nm: 938 – 1650/cm³; 64.9 nm: 759 – 1284/cm³; 86.6 nm: 705 – 1019/cm³; 115.5 nm: 494 – 1031/cm³; 154 nm: 417 – 806/cm³; 205.4 nm: 240 – 471/cm³; 273.8 nm: 45 – 92/cm³; and 365.2 nm:
195 Numerical Study of Leisure Home Chassis under Various Loads by Using Finite Element Analysis
Authors: Asem Alhnity, Nicholas Pickett
Abstract:
The leisure home industry is experiencing an increase in sales due to the rise in popularity of staycations. However, customers are also demanding improvements in thermal and structural behaviour. Existing standards and codes of practice outline the requirements for leisure home design; however, there is a lack of expertise in applying Finite Element Analysis (FEA) to complex structures in this industry. As a result, manufacturers rely on standardized design approaches, which often lead to excessively engineered or inadequately designed products. This research aims to address this issue by comprehensively analysing the impact of the habitation structure on chassis performance in leisure homes. By employing FEA on the entire unit, including both the habitation structure and the chassis, this study seeks to develop a novel framework for designing and analysing leisure homes. The objectives include material reduction, enhancing structural stability, resolving existing design issues, and developing innovative modular and wooden chassis designs. The methodology used in this research is quantitative in nature. The study utilizes FEA to analyse the performance of leisure home chassis under various loads. The analysis procedure involves running FEA simulations on a numerical model of the leisure home chassis. Different load scenarios are applied to assess the stress and deflection performance of the chassis under various conditions. FEA is a numerical method that allows for accurate analysis of complex systems. The research utilizes flexible mesh sizing, with fine meshes to calculate small deflections around doors and windows and large meshes for macro deflections. This approach aims to minimize run-time while providing meaningful stresses and deflections. 
Moreover, it aims to investigate the limitations and drawbacks of the popular approach of applying FEA only to the chassis and replacing the habitation structure with a distributed load. The findings of this study indicate that the popular approach of applying FEA only to the chassis and replacing the habitation structure with a distributed load overlooks the strengthening generated from the habitation structure. By employing FEA on the entire unit, it is possible to optimize stress and deflection performance while achieving material reduction and enhanced structural stability. The study also introduces innovative modular and wooden chassis designs, which show promising weight reduction compared to the existing heavily fabricated lattice chassis. In conclusion, this research provides valuable insights into the impact of the habitation structure on chassis performance in leisure homes. By employing FEA on the entire unit, the study demonstrates the importance of considering the strengthening generated from the habitation structure in chassis design. The research findings contribute to advancements in material reduction, structural stability, and overall performance optimization. The novel framework developed in this study promotes sustainability, cost-efficiency, and innovation in leisure home design. Keywords: static homes, caravans, motor homes, holiday homes, finite element analysis (FEA)
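As a hedged, self-contained illustration of the FEA approach discussed above (not the authors' chassis model), the sketch below solves a simply supported beam under a uniform load with 1-D Euler-Bernoulli beam finite elements and checks the midspan deflection against the classical closed-form result 5qL⁴/(384EI). The span, stiffness and load values are arbitrary placeholders chosen for demonstration:

```python
import numpy as np

def beam_deflection(n_el=8, L=10.0, E=11e9, I=5e-4, q=-2000.0):
    """Midspan deflection of a simply supported beam under uniform load q
    (N/m), using Euler-Bernoulli beam elements (2 DOFs per node: w, theta)."""
    le = L / n_el
    ndof = 2 * (n_el + 1)
    K = np.zeros((ndof, ndof))
    F = np.zeros(ndof)
    # element stiffness matrix from Hermite cubic shape functions
    ke = (E * I / le**3) * np.array([
        [ 12,    6*le,    -12,    6*le   ],
        [ 6*le,  4*le**2, -6*le,  2*le**2],
        [-12,   -6*le,     12,   -6*le   ],
        [ 6*le,  2*le**2, -6*le,  4*le**2]])
    # consistent nodal load vector for a uniform distributed load
    fe = q * le / 12 * np.array([6, le, 6, -le])
    for e in range(n_el):
        dofs = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]
        K[np.ix_(dofs, dofs)] += ke
        F[dofs] += fe
    fixed = [0, ndof - 2]                  # w = 0 at both supports
    free = [d for d in range(ndof) if d not in fixed]
    u = np.zeros(ndof)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
    return u[2 * (n_el // 2)]              # midspan deflection (even n_el)

w_fem = beam_deflection()
w_exact = 5 * -2000.0 * 10.0**4 / (384 * 11e9 * 5e-4)
print(w_fem, w_exact)
```

The point made in the abstract about mesh sizing applies here too: for this prismatic beam the nodal values are already exact with consistent loads, but in a 3-D chassis shell model local refinement around openings is what recovers the small deflections between coarse-mesh nodes.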
Procedia PDF Downloads 101194 Alternate Optical Coherence Tomography Technologies in Use for Corneal Diseases Diagnosis in Dogs and Cats
Authors: U. E. Mochalova, A. V. Demeneva, Shilkin A. G., J. Yu. Artiushina
Abstract:
Objective. In medical ophthalmology, OCT has been actively used in the last decade. It is a modern non-invasive method of high-precision hardware examination that gives a detailed cross-sectional image of eye tissue structure at a high level of resolution, providing in vivo morphological information at the microscopic level about corneal tissue, the structures of the anterior segment, the retina and the optic nerve. The purpose of this study was to explore the possibility of using OCT technology in complex ophthalmological examination of dogs and cats, and to characterize the revealed pathological structural changes in corneal tissue in cats and dogs with some of the most common corneal diseases. Procedures. Optical coherence tomography of the cornea was performed in 112 animals: 68 dogs and 44 cats. In total, 224 eyes were examined. Pathologies of the organ of vision included: dystrophy and degeneration of the cornea, endothelial corneal dystrophy, dry eye syndrome, chronic superficial vascular keratitis, pigmented keratitis, corneal erosion, ulcerative stromal keratitis, corneal sequestration, chronic glaucoma, and the postoperative period after keratoplasty. When performing OCT, we used certified medical devices: "Huvitz HOCT-1/1F", "Optovue iVue 80" and "SOCT Copernicus Revo (60)". Results. The article presents the results of a clinical study on the use of optical coherence tomography (OCT) of the cornea in cats and dogs, performed by the authors in the complex diagnosis of keratopathies of various origins: endothelial corneal dystrophy, pigmented keratitis, chronic keratoconjunctivitis, chronic herpetic keratitis, ulcerative keratitis, traumatic corneal damage, feline corneal sequestration, and chronic keratitis complicating the course of glaucoma. The characteristics of OCT scans of the corneas of cats and dogs without corneal pathologies are given. 
OCT scans of various corneal pathologies in dogs and cats are presented with a description of the revealed pathological changes. Of great clinical interest are the data obtained during OCT of the corneas of animals undergoing keratoplasty operations using various forms of grafts. Conclusions. OCT makes it possible to assess the thickness and pathological structural changes of the corneal surface epithelium, the corneal stroma and Descemet's membrane. We can measure them, determine their exact localization, and record pathological changes. Clinical observation of the dynamics of a pathological process in the cornea using OCT makes it possible to evaluate the effectiveness of drug treatment. In case of negative dynamics of corneal disease, it is necessary to determine the indications for surgical treatment (to assess the thickness of the cornea, localize its thinning zones, and characterize the depth and area of pathological changes). Based on corneal OCT, it is possible to choose the optimal surgical treatment for the patient and the technique and depth of optically reconstructive surgery (penetrating or anterior lamellar keratoplasty), and to determine the depth and diameter of the planned microsurgical trepanation of corneal tissue, which will ensure good adaptation of the edges of the donor material. Keywords: optical coherence tomography, corneal sequestration, optical coherence tomography of the cornea, corneal transplantation, cat, dog
Procedia PDF Downloads 68193 Pushover Analysis of a Typical Bridge Built in Central Zone of Mexico
Authors: Arturo Galvan, Jatziri Y. Moreno-Martinez, Daniel Arroyo-Montoya, Jose M. Gutierrez-Villalobos
Abstract:
Bridges are among the most seismically vulnerable structures in highway transportation systems. The general process for assessing the seismic vulnerability of a bridge involves the evaluation of its overall capacity and demand. One of the most common procedures to obtain this capacity is a pushover analysis of the structure. Typically, the bridge capacity is assessed using non-linear static methods or non-linear dynamic analyses. The non-linear dynamic approaches use step-by-step numerical solutions for assessing the capacity, with the inconvenience of high computing time. In this study, a non-linear static analysis ('pushover analysis') was performed to predict the collapse mechanism of a typical bridge built in the central zone of Mexico (Celaya, Guanajuato). The bridge superstructure consists of three simply supported spans with a total length of 76 m: 22 m for each of the end spans and 32 m for the central span. The deck width is 14 m and the concrete slab depth is 18 cm. The substructure consists of frames of five piers with hollow box-shaped sections; these piers are 7.05 m in height and 1.20 m in diameter. The numerical model was created using commercial software considering linear and non-linear elements. In all cases, the piers were represented by frame-type elements with geometrical properties obtained from the structural project and construction drawings of the bridge. The deck was modeled with a mesh of rectangular thin shell (plate bending and stretching) finite elements. A moment-curvature analysis was performed for the pier sections, considering in each pier the effect of confined concrete and its reinforcing steel. In this way, plastic hinges were defined at the base of the piers to carry out the pushover analysis. In addition, time history analyses were performed using 19 accelerograms of real earthquakes registered in Guanajuato. 
In this way, the displacements produced in the bridge were determined. Finally, pushover analysis was applied through displacement control at the piers to obtain the overall capacity of the bridge before failure occurs. It was concluded that the lateral deformations of the piers due to a critical earthquake in this zone are almost imperceptible, owing to the geometry and reinforcement demanded by current design standards; compared to the displacement capacity of the piers, these demands were very small, indicating an excessive design. According to the analysis, the frames built with five piers increase the rigidity in the transverse direction of the bridge. Hence it is proposed to reduce these frames from five piers to three, maintaining the same geometrical characteristics and the same reinforcement in each pier. The mechanical properties of the materials (concrete and reinforcing steel) were also maintained. Once a pushover analysis was performed considering this configuration, it was concluded that the bridge would continue to exhibit adequate seismic behavior, at least for the 19 accelerograms considered in this study. In this way, costs in material, construction, time and labor would be reduced in this case study. Keywords: collapse mechanism, moment-curvature analysis, overall capacity, pushover analysis
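The capacity side of the pushover procedure described above, a plastic hinge at the base of a cantilever pier, can be sketched with back-of-the-envelope mechanics. Only the 7.05 m pier height comes from the abstract; the section values (yield moment, stiffness, curvature ductility, plastic hinge length) are assumed placeholders, not the Celaya bridge's actual properties:

```python
# Bilinear pushover capacity of a single cantilever pier with a base hinge.
# Elastic branch up to (d_y, V_y), then plastic rotation concentrated in a
# hinge of length Lp up to the ultimate displacement d_u.

H = 7.05            # pier height, m (from the abstract)
EI = 2.0e7          # flexural stiffness, kN*m^2 (assumed)
My = 5.0e3          # yield moment of the section, kN*m (assumed)
phi_y = My / EI     # yield curvature, 1/m
phi_u = 10 * phi_y  # ultimate curvature, assumed curvature ductility of 10
Lp = 0.5            # plastic hinge length, m (assumed)

V_y = My / H                          # base shear at yield
d_y = phi_y * H**2 / 3                # tip displacement at yield (elastic cantilever)
theta_p = (phi_u - phi_y) * Lp        # plastic hinge rotation
d_u = d_y + theta_p * (H - 0.5 * Lp)  # ultimate tip displacement

print(f"V_y = {V_y:.1f} kN, d_y = {d_y*1000:.1f} mm, d_u = {d_u*1000:.1f} mm")
print(f"displacement ductility = {d_u / d_y:.1f}")
```

Comparing d_u from such a curve against the peak displacement demand from the time-history runs is what supports the abstract's conclusion that the pier capacity far exceeds the demand.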
Procedia PDF Downloads 152192 Flood Risk Assessment, Mapping Finding the Vulnerability to Flood Level of the Study Area and Prioritizing the Study Area of Khinch District Using and Multi-Criteria Decision-Making Model
Authors: Muhammad Karim Ahmadzai
Abstract:
Floods are natural phenomena and an integral part of the water cycle. Most of them result from climatic conditions, but they are also affected by the geology and geomorphology of the area, its topography and hydrology, the water permeability of the soil, the vegetation cover, and all kinds of human activities and structures. However, from the moment that human lives are at risk and significant economic impact is recorded, this natural phenomenon becomes a natural disaster. Flood management is now a key issue at regional and local levels around the world, affecting human lives and activities. Most floods cannot be fully predicted, but it is feasible to reduce their risks through appropriate management plans and constructions. The aim of this case study is to identify and map areas of flood risk in the Khinch District of Panjshir Province, Afghanistan, specifically in the area of Peshghore, where floods have caused extensive damage. The main purpose of this study is to evaluate the contribution of remote sensing technology and Geographic Information Systems (GIS) in assessing the susceptibility of this region to flood events. Panjshir faces seasonal floods, and human interventions on streams have aggravated them: stream beds have been encroached upon to build houses and hotels or converted into roads, causing flooding after every heavy rainfall. The streams crossing settlements and areas with high touristic development have been intensively modified by humans, as the pressure for real estate development land grows. In particular, several areas in Khinch face a high risk of extensive flood occurrence. This study concentrates on the construction of a flood susceptibility map of the study area by combining vulnerability elements using the Analytic Hierarchy Process (AHP). The AHP is a powerful yet simple method for making decisions.
It is commonly used for project prioritization and selection: AHP captures strategic goals as a set of weighted criteria that are then used to score alternatives. Here, the method is used to assign a weight to each criterion that contributes to the flood event. After processing a digital elevation model (DEM), important secondary data were extracted, such as the slope map, flow direction, and flow accumulation. Together with additional thematic information (land use and land cover, topographic wetness index, precipitation, normalized difference vegetation index, elevation, river density, distance from river, distance to road, and slope), these led to the final flood risk map. Finally, based on this map, priority protection areas and villages were identified, and structural and non-structural measures were proposed to minimize the impacts of floods on residential and agricultural areas.
Keywords: flood hazard, flood risk map, flood mitigation measures, AHP analysis
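As a concrete sketch of how AHP turns pairwise judgments into criterion weights, the snippet below applies the common geometric-mean approximation and Saaty's consistency check to a small comparison matrix. The three criteria and the judgment values are hypothetical illustrations, not the weights derived in this study.

```python
import math

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for three of
# the flood criteria named above: slope, distance from river, land use.
A = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

def ahp_weights(matrix):
    """Priority weights via the geometric-mean approximation."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

weights = ahp_weights(A)

# Consistency check: lambda_max, consistency index (CI), and consistency
# ratio (CR). RI = 0.58 is Saaty's random index for n = 3 criteria; a
# matrix is usually accepted when CR < 0.1.
n = len(A)
Aw = [sum(a * w for a, w in zip(row, weights)) for row in A]
lambda_max = sum(aw / w for aw, w in zip(Aw, weights)) / n
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.58
```

In a GIS workflow, each thematic raster is then multiplied by its weight and the layers are summed to produce the flood susceptibility map.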
Procedia PDF Downloads 118
191 Controlled Nano Texturing in Silicon Wafer for Excellent Optical and Photovoltaic Properties
Authors: Deb Kumar Shah, M. Shaheer Akhtar, Ha Ryeon Lee, O-Bong Yang, Chong Yeal Kim
Abstract:
Crystalline silicon (Si) solar cells are the most established commercial photovoltaic technology, and most solar panels installed globally use crystalline Si modules. At present, c-Si solar cells hold the major share of the photovoltaic (PV) market, but the cost of c-Si panels is still very high compared with other PV technologies. To reduce the cost of Si solar panels, several steps must be considered, such as low-cost Si manufacturing, cheap antireflection coating materials, and inexpensive solar panel manufacturing. The antireflection (AR) layer in a c-Si solar cell is an important component that reduces Fresnel reflection and thereby improves the overall conversion efficiency. A bare Si wafer typically exhibits about 30% reflection, owing to two major intrinsic losses: spectral mismatch loss and high Fresnel reflection loss caused by the large refractive-index contrast between air and silicon. In recent years, considerable research has been devoted to finding effective, low-cost AR materials. Silicon nitride (SiNx) is a well-known AR material in commercial c-Si solar cells because it deposits well on and interacts favorably with passivated Si surfaces. However, SiNx AR layers are usually deposited by the expensive plasma-enhanced chemical vapor deposition (PECVD) process, which has several demerits, such as difficult handling and plasma damage to the Si substrate when secondary electrons collide with the wafer surface during AR coating. It is therefore important to explore new, low-cost, and effective AR deposition processes to cut the manufacturing cost of c-Si solar cells. Alternatively, nano-texturing processes, such as the growth of nanowires, nanorods, nanopyramids, or nanopillars on the Si wafer, can provide low reflection at the surface of Si-wafer-based solar cells.
Such nanostructures can enhance the antireflection property by providing a larger surface area and effective light trapping. In this work, we report on the development of crystalline Si solar cells without an AR layer. The silicon wafer was modified by growing nanowire-like Si nanostructures using a controlled wet-etching method and was used directly for the fabrication of Si solar cells without AR. The nanostructures on the Si wafer were optimized in terms of size, length, and density by changing the etching conditions; well-defined, aligned wire-like structures were achieved when the etching time was 20 to 30 min. The prepared Si nanostructures displayed a minimum reflectance of ~1.64% at 850 nm and an average reflectance of ~2.25% over the 400-1000 nm wavelength range. The nanostructured Si-wafer-based solar cells achieved power conversion efficiency comparable to that of c-Si solar cells with a SiNx AR layer. This study confirms that controlled wet etching is an easy, facile method for preparing wire-like nanostructures on Si wafers with low reflectance across the visible region, with good prospects for developing low-cost c-Si solar cells without an AR layer.
Keywords: chemical etching, conversion efficiency, silicon nanostructures, silicon solar cells, surface modification
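The averaging of reflectance over the 400-1000 nm band can be sketched as a trapezoidal mean of a sampled spectrum. The wavelength and reflectance values below are illustrative placeholders shaped like the reported spectrum (a minimum near 850 nm), not the measured data from this work.

```python
# Minimal sketch: band-averaged reflectance from a sampled spectrum.
# Sample values are ILLUSTRATIVE, not the measured data of this study.

def average_reflectance(wavelengths_nm, reflectance_pct, lo=400.0, hi=1000.0):
    """Trapezoidal mean of reflectance (%) over [lo, hi] nm."""
    pts = [(w, r) for w, r in zip(wavelengths_nm, reflectance_pct) if lo <= w <= hi]
    area = 0.0
    for (w1, r1), (w2, r2) in zip(pts, pts[1:]):
        area += 0.5 * (r1 + r2) * (w2 - w1)  # trapezoid between adjacent samples
    return area / (pts[-1][0] - pts[0][0])

# Illustrative spectrum with a reflectance minimum near 850 nm.
wl = [400, 550, 700, 850, 1000]
refl = [3.2, 2.6, 2.1, 1.64, 1.9]
avg = average_reflectance(wl, refl)
```

A weighted variant of the same integral (weighting by the solar spectrum) is what is usually quoted when comparing AR performance of textured wafers.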
Procedia PDF Downloads 125