Search results for: dynamic vision sensor
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6237

1017 Using Large Databases and Interviews to Explore the Temporal Phases of Technology-Based Entrepreneurial Ecosystems

Authors: Elsie L. Echeverri-Carroll

Abstract:

Entrepreneurial ecosystems have become an important concept for explaining the birth and sustainability of technology-based entrepreneurship within regions. However, as a theoretical concept, the temporal evolution of entrepreneurial ecosystems remains underdeveloped, making it difficult to understand their dynamic contributions to entrepreneurs. This paper argues that successful technology-based ecosystems pass through three cumulative spawning stages: corporate spawning, entrepreneurial spawning, and community spawning. The importance of corporate incubation in vibrant entrepreneurial ecosystems is well documented in the entrepreneurial literature. Similarly, entrepreneurial spawning processes for venture capital-backed startups are well documented in the financial literature. In contrast, there is little understanding of either the third stage of entrepreneurial spawning (when a community of entrepreneurs becomes a source of firm spawning) or the temporal sequence in which spawning effects occur in a region. We test this three-stage model of entrepreneurial spawning using data from two large databases on firm births, the Secretary of State (160,000 observations) and the National Establishment Time Series (NETS, 150,000 observations), together with information collected from 60 1½-hour interviews with startup founders and representatives of key entrepreneurial organizations. The temporal model is illustrated with a case study of Austin, Texas, ranked by the Kauffman Foundation as the number one entrepreneurial city in the United States in 2015 and 2016. The 1½-year study, funded by the Kauffman Foundation, demonstrates the importance of taking into account the temporal contributions of both large and entrepreneurial firms in understanding the factors that contribute to the birth and growth of technology-based entrepreneurial regions. More importantly, these findings offer a road map for regions that seek to advance their entrepreneurial ecosystems.

Keywords: entrepreneurial ecosystems, entrepreneurial industrial clusters, high-technology, temporal changes

Procedia PDF Downloads 272
1016 Design and Analysis of Semi-Active Isolation System in Low Frequency Excitation Region for Vehicle Seat to Reduce Discomfort

Authors: Andrea Tonoli, Nicola Amati, Maria Cavatorta, Reza Mirsanei, Behzad Mozaffari, Hamed Ahani, Akbar Karamihafshejani, Mohammad Ghazivakili, Mohammad Abuabiah

Abstract:

The vibrations transmitted to drivers and passengers through the vehicle seat seriously affect their attention, fatigue, and physical health, and reduce the comfort and efficiency of the occupants. Recently, some researchers have focused on vibrations at low excitation frequencies (0.5-5 Hz), which are considered the main risk factor for the lumbar spine, but the proposed solutions were not applicable to A- and B-segment cars because of their size and weight. A semi-active system with two symmetric negative stiffness structures (NSS) in parallel with a positive stiffness structure and actuators has been proposed to attenuate low-frequency excitation and to make the system adaptable to different passenger weights, which makes it applicable to A- and B-segment cars. Here, a three-degree-of-freedom system is considered; the dynamic equations are presented and then simulated in MATLAB to analyze the performance of the system. The design procedure is derived so that the resonance peak of the frequency-response curve shifts to the left, the isolation range is increased, and, especially, the peak of the frequency-response curve is minimized. Different classes of road profile according to the ISO standard are applied as inputs to evaluate the performance of the system. To evaluate comfort, we extract the RMS value of the vertical acceleration acting on the passenger's body and then apply a band-pass weighting filter that accounts for human sensitivity to acceleration. Since this weighted acceleration is below the ISO threshold of 0.315 m/s², the ride is considered comfortable.
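The comfort criterion described above reduces to a simple check: compute the frequency-weighted RMS of the vertical seat acceleration and compare it to the ISO limit of 0.315 m/s². A minimal Python sketch of that check, omitting the actual ISO weighting filter and using an illustrative 2 Hz sinusoidal acceleration rather than the paper's simulated response:

```python
import math

def rms(samples):
    """Root-mean-square of a sequence of acceleration samples (m/s^2)."""
    return math.sqrt(sum(a * a for a in samples) / len(samples))

def is_comfortable(weighted_rms, limit=0.315):
    """ISO comfort check used in the abstract: weighted RMS below 0.315 m/s^2."""
    return weighted_rms < limit

# Illustrative 2 Hz vertical seat acceleration, 0.2 m/s^2 amplitude, 1 kHz sampling
fs, f, amp = 1000, 2.0, 0.2
accel = [amp * math.sin(2 * math.pi * f * n / fs) for n in range(fs)]  # 1 s of data
a_rms = rms(accel)  # ~amp / sqrt(2) ≈ 0.141 m/s^2, below the 0.315 limit
```

In the paper the signal fed into this check is the weighted seat response to the ISO road-profile excitation; here only the final RMS-and-threshold step is shown.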

Keywords: low frequency excitation, negative stiffness, seat vehicle, vibration isolation

Procedia PDF Downloads 437
1015 Low-Voltage and Low-Power Bulk-Driven Continuous-Time Current-Mode Differentiator Filters

Authors: Ravi Kiran Jaladi, Ezz I. El-Masry

Abstract:

Emerging technologies such as ultra-wideband wireless access that operate at ultra-low power present several challenges, because their inherent design limits the use of voltage-mode filters. Continuous-time current-mode (CTCM) filters have therefore become very popular in recent times, as they offer a wider dynamic range, improved linearity, and extended bandwidth compared to their voltage-mode counterparts. The goal of this research is to develop analog filters suitable for current CMOS scaling technologies. The bulk-driven MOSFET is one of the most popular low-power design techniques for these challenges, while other techniques have obvious shortcomings. In this work, a CTCM gate-driven (GD) differentiator with a frequency range from dc to 100 MHz is presented, operating at a very low supply voltage of 0.7 V. A novel CTCM bulk-driven (BD) differentiator has been designed for the first time, which reduces power consumption to a fraction of that of the GD differentiator. Both the GD and BD differentiators have been simulated in CADENCE using TSMC 65 nm technology for the bilinear and biquadratic band-pass frequency responses. These basic building blocks can be used to implement higher-order filters: a 6th-order cascade CTCM Chebyshev band-pass filter has been designed using both the GD and BD techniques. In conclusion, low-power GD and BD 6th-order Chebyshev stagger-tuned band-pass filters were simulated; the parameters obtained from the resulting realizations are analyzed and compared. Monte Carlo analysis is performed for both 6th-order filters, and the results of the sensitivity analysis are presented.
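A biquadratic band-pass section of the kind used as a building block here has the transfer function H(s) = (ω₀/Q)s / (s² + (ω₀/Q)s + ω₀²), and cascading three stagger-tuned sections yields a 6th-order band-pass response. A small Python sketch evaluating the magnitude response; the center frequency, Q, and stagger ratios below are illustrative choices, not the paper's design values:

```python
import cmath, math

def biquad_bandpass(w, w0, q):
    """|H(jw)| of a biquad band-pass H(s) = (w0/Q)s / (s^2 + (w0/Q)s + w0^2)."""
    s = 1j * w
    return abs((w0 / q) * s / (s * s + (w0 / q) * s + w0 * w0))

def cascade(w, sections):
    """Magnitude of cascaded biquad sections, e.g. a 6th-order filter from three."""
    mag = 1.0
    for w0, q in sections:
        mag *= biquad_bandpass(w, w0, q)
    return mag

# Three stagger-tuned sections around a 50 MHz center (hypothetical values)
f0 = 50e6
sections = [(2 * math.pi * f0 * k, 5.0) for k in (0.95, 1.0, 1.05)]
peak = cascade(2 * math.pi * f0, sections)       # strong response in-band
stop = cascade(2 * math.pi * f0 * 10, sections)  # heavily attenuated out-of-band
```

Stagger-tuning the three center frequencies, as in the paper's 6th-order Chebyshev realization, flattens the passband compared with three identically tuned sections.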

Keywords: bulk-driven (BD), continuous-time current-mode filters (CTCM), gate-driven (GD)

Procedia PDF Downloads 260
1014 Influence of Counter-Face Roughness on the Friction of Bionic Microstructures

Authors: Haytam Kasem

Abstract:

The problem of quick and easily reversible attachment has become of great importance in different fields of technology. For this reason, a new field of adhesion science has emerged during the last decade, essentially inspired by animals and insects that, over the course of their natural evolution, have developed remarkable biological attachment systems allowing them to adhere to and run on walls and ceilings with uneven surfaces. Potential applications of engineering bio-inspired solutions include climbing robots, handling systems for wafers in nanofabrication facilities, and mobile sensor platforms, to name a few. However, despite the effort invested in applying bio-inspired patterned adhesive surfaces to the biomedical field, they are still at an early stage compared with their conventional uses in the other industries mentioned above. In fact, some critical issues still need to be addressed before bio-inspired patterned surfaces can be widely used as advanced biomedical platforms. For example, the durability and long-term stability of surfaces with high adhesive capacity should be improved, as should the friction and adhesion capacities of these bio-inspired microstructures when contacting rough surfaces. One well-known prototype of a bio-inspired attachment system is the biomimetic wall-shaped hierarchical microstructure for gecko-like attachment. Although the physical background of these attachment systems is widely understood, the influence of counter-face roughness and its relationship with the friction force generated when sliding against a wall-shaped hierarchical microstructure has yet to be fully analyzed and understood. To elucidate the effect of counter-face roughness on the friction of the biomimetic wall-shaped hierarchical microstructure, we replicated the isotropic topography of 12 different surfaces using replicas made of the same epoxy material.
The counter-faces were fully characterized with a 3D optical profilometer to measure their roughness parameters. The friction forces generated by the spatula-shaped microstructure in contact with the tested counter-faces were measured on a home-built tribometer and compared with the friction forces generated by the spatulae in contact with a smooth reference. It was found that classical roughness parameters, such as the average roughness Ra, could not explain the topography-related variation in friction force. This led us to develop an integrated roughness parameter obtained by combining several parameters: the mean asperity radius of curvature (R), the asperity density (η), the standard deviation of asperity heights (σ), and the mean asperity slope (SDQ). This new integrated parameter is capable of explaining the measured variation in friction. Based on the experimental results, we developed and validated an analytical model to predict the variation of the friction force as a function of the counter-face roughness parameters and the applied normal load.
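The abstract does not give the exact form of the integrated roughness parameter, so the sketch below only illustrates the idea of collapsing R, η, σ, and SDQ into a single number. The Greenwood-Williamson-style product ηRσ scaled by a slope term is a hypothetical stand-in for the authors' actual combination:

```python
def integrated_roughness(r, eta, sigma, sdq, w_slope=1.0):
    """Hypothetical composite roughness parameter.

    The paper combines mean asperity radius R, asperity density eta,
    asperity-height deviation sigma, and mean slope SDQ, but does not give
    the formula; the Greenwood-Williamson product eta*R*sigma scaled by a
    slope term is used here purely as an illustration.
    """
    return eta * r * sigma * (1.0 + w_slope * sdq)

# Two counter-faces with identical height deviation (hence similar Ra)
# but different asperity geometry (all values hypothetical, SI units)
a = integrated_roughness(r=20e-6, eta=4e9, sigma=0.8e-6, sdq=0.10)
b = integrated_roughness(r=5e-6, eta=4e9, sigma=0.8e-6, sdq=0.35)
# a != b even though sigma is the same: a composite parameter can separate
# surfaces that an amplitude parameter like Ra alone cannot
```

This is exactly the failure mode reported above: Ra was blind to differences in asperity shape and density that the friction measurements clearly resolved.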

Keywords: friction, bio-mimetic micro-structure, counter-face roughness, analytical model

Procedia PDF Downloads 239
1013 Contact Phenomena in Medieval Business Texts

Authors: Carmela Perta

Abstract:

Among the studies that have flourished in the field of historical sociolinguistics, mainly in the strand devoted to the history of English during its medieval and early modern phases, multilingual texts have been analysed using theories and models from contact linguistics, thus applying synchronic models and approaches to the past. This is true also of contact phenomena that transcend the writing level, involving the language systems implicated in the contact process to the point that a new variety is perceived. This is the case for medieval administrative-commercial texts, in which, according to some scholars, the degree of fusion of Anglo-Norman, Latin, and Middle English is so high that a mixed code emerges, with recurrent patterns of mixed forms. Of particular interest is a collection of multilingual business writings by John Balmayn, an Englishman overseeing a large shipment in Tuscany: the Cantelowe accounts. These documents display various analogies with multilingual texts written in England in the same period; in fact, the writer seems to make use of the above-mentioned patterns, with Middle English, Latin, Anglo-Norman, and the newly added Italian. Applying an atomistic yet dynamic approach to the study of contact phenomena, we investigate these documents, exploring the nature of the switching forms they contain from an intra-writer variation perspective. After analysing the accounts and the type of multilingualism in them, we take stock of their assumed mixed-code nature, comparing the characteristics found in this genre with modern assumptions. The aim is to evaluate whether the switching forms can be considered core elements of a mixed code used as a professional variety among merchant communities, or whether such texts should be analysed from a code-switching perspective.

Keywords: historical sociolinguistics, historical code switching, letters, medieval England

Procedia PDF Downloads 75
1012 Energy Consumption Estimation for Hybrid Marine Power Systems: Comparing Modeling Methodologies

Authors: Kamyar Maleki Bagherabadi, Torstein Aarseth Bø, Truls Flatberg, Olve Mo

Abstract:

Hydrogen fuel cells and batteries are among the promising solutions aligned with the carbon emission reduction goals of the marine sector. However, the higher installation and operation costs of hydrogen-based systems compared to conventional diesel gensets raise questions about the appropriate hydrogen tank size and about energy and fuel consumption estimates. Ship designers need methodologies and tools to calculate energy and fuel consumption for different component sizes, to facilitate decision-making on feasibility and performance in retrofit and design cases. The aim of this work is to compare three alternative modeling approaches for estimating energy and fuel consumption with various hydrogen tank sizes, battery capacities, and load-sharing strategies. A fishery vessel is selected as an example, using load demand data logged over a year of operations. The modeled power system consists of a PEM fuel cell, a diesel genset, and a battery. The three methodologies are: first, an energy-based model; second, a time-domain model of load variations with a rule-based Power Management System (PMS); and third, a time-domain model with a dynamic PMS strategy based on optimization with perfect foresight. The errors and potentials of the methods are discussed, and design sensitivity studies are conducted for this case. The results show that the energy-based method can estimate fuel and energy consumption with acceptable accuracy. However, models that consider the time variation of the load provide more realistic estimates of energy and fuel consumption with respect to hydrogen tank and battery size, while still requiring little computation time.
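The second methodology above can be illustrated with a toy time-domain simulation driven by a rule-based PMS. The sketch below is a heavily simplified stand-in for the paper's models: the dispatch rules, component ratings, and load profile are all assumptions, and efficiencies and fuel curves are omitted:

```python
def simulate_pms(load_kw, dt_h=1.0, fc_max=300.0, batt_kwh=500.0,
                 batt_p=200.0, soc0_kwh=250.0):
    """Toy rule-based PMS: the fuel cell covers load up to its rating,
    the battery discharges to cover the excess within its power/SOC limits,
    the genset covers the remainder, and spare fuel-cell capacity recharges
    the battery. Returns fuel-cell energy, genset energy, and the final
    battery state of charge (all in kWh)."""
    soc, e_fc, e_gen = soc0_kwh, 0.0, 0.0
    for p in load_kw:
        p_fc = min(p, fc_max)
        residual = p - p_fc
        if residual > 0:  # load above the fuel-cell rating
            p_batt = min(residual, batt_p, soc / dt_h)
            soc -= p_batt * dt_h
            p_gen = residual - p_batt
        else:             # spare capacity recharges the battery
            charge = min(fc_max - p, batt_p, (batt_kwh - soc) / dt_h)
            p_fc = p + charge
            soc += charge * dt_h
            p_gen = 0.0
        e_fc += p_fc * dt_h
        e_gen += p_gen * dt_h
    return e_fc, e_gen, soc

# Hypothetical hourly load profile (kW), repeated over three days
profile = [200, 250, 400, 500, 450, 300, 250, 200] * 3
fc_energy, gen_energy, soc_end = simulate_pms(profile)
```

An energy-based model, by contrast, would size components from the load-duration statistics alone; a loop like this one exposes the peaks and SOC limits that force genset operation.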

Keywords: fuel cell, battery, hydrogen, hybrid power system, power management system

Procedia PDF Downloads 38
1011 Machine Learning and Internet of Thing for Smart-Hydrology of the Mantaro River Basin

Authors: Julio Jesus Salazar, Julio Jesus De Lama

Abstract:

The fundamental objective of hydrological studies applied to engineering is to determine the statistically consistent water volumes or flows that, in each case, allow us to size or design the elements or structures needed to effectively manage and develop a river basin. To determine these values, there are several ways of working within the framework of traditional hydrology: (1) study each of the factors that influence the hydrological cycle, (2) study the historical behavior of the hydrology of the area, (3) study the historical behavior of hydrologically similar zones, and (4) other studies (rain simulators or experimental basins). This range of studies in a given basin is varied and complex, and presents the difficulty of collecting data in real time. This complexity can only be overcome by collecting and transmitting data to decision centers through the Internet of Things and artificial intelligence. Thus, this research work implemented a learning project in the Shullcas river sub-basin of the Andean Mantaro river basin in Peru. The sensor firmware to collect and communicate hydrological parameter data was programmed and tested in similar basins of the European Union. The machine learning applications were programmed to choose the algorithms that lead to the best solution for the rainfall-runoff relationship captured in the different polygons of the sub-basin. Tests were carried out in the mountains of Europe and in the sub-basins of the Shullcas river (Huancayo) and the Yauli river (Jauja), at altitudes close to 5000 m a.s.l., leading to the following conclusions. To guarantee correct communication, the distance between devices should not exceed 15 km.
To minimize the energy consumption of the devices and avoid collisions between packets, distances should be kept between 5 and 10 km; in this way, the transmission power can be reduced and a higher bitrate can be used. If the communication elements of the devices in the network (Internet of Things) installed in the basin do not have good visibility between them, the distance should be reduced to the range of 1-3 km. The energy efficiency of the Atmel microcontrollers present in Arduino boards is not adequate to meet the autonomy requirements of the system. To increase the autonomy of the system, it is recommended to use low-consumption systems, such as ultra-low-power ARM microcontrollers (e.g., the Cortex-M series), together with high-efficiency DC-DC converters. The machine learning system has begun learning the Shullcas system to generate the best hydrology of the sub-basin. This will improve continuously as the models are retrained on the data entering the big-data store, providing each application of the complex system with the best estimates of the determined flows.

Keywords: hydrology, internet of things, machine learning, river basin

Procedia PDF Downloads 160
1010 Multistep Thermal Degradation Kinetics: Pyrolysis of CaSO₄-Complex Obtained by Antiscaling Effect of Maleic-Anhydride Polymer

Authors: Yousef M. Al-Roomi, Kaneez Fatema Hussain

Abstract:

This work evaluates the thermal degradation kinetic parameters of the CaSO₄ complex isolated after the inhibition effect of a maleic-anhydride-based polymer (YMR polymers). Pyrolysis experiments were carried out at four heating rates (5, 10, 15, and 20 °C/min). Several model-free analytical methods were used to determine the kinetic parameters, including the Friedman, Coats-Redfern, Kissinger, Flynn-Wall-Ozawa, and Kissinger-Akahira-Sunose methods. The Criado model-fitting method, based on the actual mechanism followed during thermal degradation, has been applied to explain the degradation mechanism of the CaSO₄ complex. In addition, a simple dynamic model was proposed over two temperature ranges for the successive decomposition of the CaSO₄ complex, which combines an organic and an inorganic part (adsorbed polymer + CaSO₄·2H₂O scale). The model enabled the assessment of the pre-exponential factor (A) and the apparent activation energy (Eₐ) for both stages independently, using a mathematical expression developed from an integral solution. The unique reaction-mechanism approach applied in this study showed that Eₐ₁ = 160.5 kJ/mol for the organic decomposition (adsorbed polymer, stage I) is lower than Eₐ₂ = 388 kJ/mol for the CaSO₄ decomposition (inorganic, stage II). Furthermore, the adsorbed YMR antiscalant not only reduced the decomposition temperature of the CaSO₄ complex compared to the CaSO₄ blank (CaSO₄·2H₂O scales in the absence of YMR polymer) but also distorted the crystal lattice of the organic CaSO₄ complex precipitates, destroying their compact and regular crystal structures, as observed in the XRD and SEM studies.
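Among the model-free methods listed, the Kissinger method is the simplest to sketch: plotting ln(β/Tp²) against 1/Tp over the set of heating rates β gives a line of slope -Eₐ/R. The Python sketch below recovers a known Eₐ from synthetic peak temperatures; the intercept constant and Tp values are made up, and only the 388 kJ/mol figure comes from the abstract:

```python
import math

R_GAS = 8.314  # J/(mol*K)

def kissinger_ea(betas, tps):
    """Apparent activation energy (J/mol) by the Kissinger method:
    ln(beta/Tp^2) = const - Ea/(R*Tp), so a linear fit of ln(beta/Tp^2)
    against 1/Tp has slope -Ea/R."""
    xs = [1.0 / tp for tp in tps]
    ys = [math.log(b / tp ** 2) for b, tp in zip(betas, tps)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * R_GAS

# Synthetic peak temperatures generated from Ea = 388 kJ/mol (the paper's
# stage-II value) and an arbitrary intercept, so the fit should recover Ea
ea_true, c = 388e3, 20.0
tps = [600.0, 610.0, 620.0, 630.0]
betas = [tp ** 2 * math.exp(c - ea_true / (R_GAS * tp)) for tp in tps]
ea_est = kissinger_ea(betas, tps)  # ≈ 388 kJ/mol
```

On real data the four heating rates (5-20 °C/min) each supply one (β, Tp) pair; the scatter about the fitted line is what the isoconversional methods (FWO, KAS, Friedman) refine.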

Keywords: CaSO₄-complex, maleic-anhydride polymers, thermal degradation kinetics and mechanism, XRD and SEM studies

Procedia PDF Downloads 119
1009 The Impact of Reducing Road Traffic Speed in London on Noise Levels: A Comparative Study of Field Measurement and Theoretical Calculation

Authors: Jessica Cecchinelli, Amer Ali

Abstract:

The continuing growth in road traffic and its impact on pollution levels and safety, especially in urban areas, have led local and national authorities to reduce traffic speed and flow in major towns and cities. Various boroughs of London have recently reduced the in-city speed limit from 30 mph to 20 mph, mainly to calm traffic, improve safety, and reduce noise and vibration. This paper reports detailed field measurements, using a noise sensor and analyser, together with the corresponding theoretical calculations and analysis of noise levels on a number of roads in the central London Borough of Camden, where the speed limit was reduced from 30 mph to 20 mph on all roads except the major Transport for London (TfL) routes. The measurements, which included the key noise levels and scales on residential streets and main roads, were conducted during normal and rush hours on weekdays and weekends. The theoretical calculations were done according to the UK procedure 'Calculation of Road Traffic Noise 1988', with conversion to the European L-day, L-evening, L-night, L-den, and other important levels. The current study also includes comparable data and analysis from noise previously measured in the Borough of Camden and other central London boroughs. Classified traffic flow and speed on the roads concerned were observed and used in the calculation part of the study. Relevant data and a description of the weather conditions are reported. The paper also reports a field survey, in the form of face-to-face interview questionnaires, carried out in parallel with the noise measurements to ascertain the opinions and views of local residents and workers in the reduced-speed 20 mph zones. The main findings are that the speed reduction lowered noise pollution in the studied zones and that the measured and calculated noise levels for each speed zone closely match.
The field survey also found that local residents and workers in the 20 mph zones supported the scheme and felt that it had improved the quality of life in their areas, giving a sense of calmness and safety, particularly for families with children and for the elderly, and encouraging pedestrians and cyclists. The key conclusions are that lowering the speed limit in built-up areas would not only reduce the number of serious accidents but also reduce noise pollution and promote clean modes of transport, particularly walking and cycling. The details of the site observations and the corresponding calculations, together with a critical comparative analysis and the relevant conclusions, will be reported in the full version of the paper.
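The CRTN 1988 prediction underlying the calculations can be sketched in a few lines. The simplified version below keeps only the basic 18-hour L10 level and the combined speed/heavy-vehicle correction, omitting the gradient, surface, distance, and facade corrections; the flow and heavy-vehicle figures are hypothetical:

```python
import math

def crtn_basic_l10(q, v, p):
    """Simplified CRTN (1988) 18-hour L10 at the 10 m reference distance:
    basic level 29.1 + 10*log10(Q) plus the speed / heavy-vehicle
    correction 33*log10(V + 40 + 500/V) + 10*log10(1 + 5P/V) - 68.8,
    with Q the 18-hour flow, V the mean speed (km/h), and P the
    percentage of heavy vehicles. All other CRTN corrections omitted."""
    basic = 29.1 + 10 * math.log10(q)
    correction = (33 * math.log10(v + 40 + 500 / v)
                  + 10 * math.log10(1 + 5 * p / v) - 68.8)
    return basic + correction

# Same 18-hour flow and heavy share; 30 mph ≈ 48 km/h, 20 mph ≈ 32 km/h
q, p = 12000, 5.0
drop = crtn_basic_l10(q, 48.0, p) - crtn_basic_l10(q, 32.0, p)  # dB reduction
```

With these inputs the formula predicts a reduction of roughly 1 dB for the speed change alone; note that CRTN's speed correction is least reliable at low, interrupted urban speeds, which is one reason the paper pairs the calculation with field measurements.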

Keywords: noise calculation, noise field measurement, road traffic noise, speed limit in London, survey of people's satisfaction

Procedia PDF Downloads 424
1008 Improved Signal-To-Noise Ratio by the 3D-Functionalization of Fully Zwitterionic Surface Coatings

Authors: Esther Van Andel, Stefanie C. Lange, Maarten M. J. Smulders, Han Zuilhof

Abstract:

False outcomes of diagnostic tests are a major concern in medical health care. To improve the reliability of surface-based diagnostic tests, it is of crucial importance to diminish the background signals that arise from the non-specific binding of biomolecules, a process called fouling. The aim is to create surfaces that repel all biomolecules except the molecule of interest. This can be achieved by incorporating antifouling, protein-repellent coatings between the sensor surface and its recognition elements (e.g., antibodies, sugars, aptamers). Zwitterionic polymer brushes are considered excellent antifouling materials; however, to be able to bind the molecule of interest, the polymer brushes have to be functionalized, and so far this was only achieved at the expense of either antifouling or binding capacity. To overcome this limitation, we combined both features in a single monomer: a zwitterionic sulfobetaine, ensuring antifouling capability, equipped with a clickable azide moiety that allows further functionalization. By copolymerizing this monomer with a standard sulfobetaine, the number of azides (and with that the number of recognition elements) can be tuned to the application. First, the clickable azido monomer was synthesized and characterized, followed by copolymerization to yield functionalizable antifouling brushes. The brushes were fully characterized using surface characterization techniques such as XPS, contact angle measurements, G-ATR-FTIR, and XRR. As a proof of principle, the brushes were subsequently functionalized with biotin via strain-promoted alkyne-azide click reactions, which yielded a fully zwitterionic, biotin-containing, 3D-functionalized coating. The sensing capacity was evaluated by reflectometry using avidin- and fibrinogen-containing protein solutions.
The surfaces showed excellent antifouling properties, as illustrated by the complete absence of non-specific fibrinogen binding, while clear responses were seen for the specific binding of avidin. A large increase in signal-to-noise ratio was observed, even when the amount of functional groups was lowered to 1%, compared to the traditional modification of sulfobetaine brushes that relies on a 2D approach in which only the top layer can be functionalized. This study was performed on stoichiometric silicon nitride surfaces for future microring-resonator-based assays; however, the methodology can be transferred to other biosensor platforms, which is currently being investigated. The approach presented herein enables a highly efficient strategy for selective binding with retained antifouling properties, yielding improved signal-to-noise ratios in binding assays. The number of recognition units can be adjusted to a specific need, e.g., depending on the size of the analyte to be bound, widening the scope of these functionalizable surface coatings.

Keywords: antifouling, signal-to-noise ratio, surface functionalization, zwitterionic polymer brushes

Procedia PDF Downloads 306
1007 Identification of Groundwater Potential Zones Using Geographic Information System and Multi-Criteria Decision Analysis: A Case Study in Bagmati River Basin

Authors: Hritik Bhattarai, Vivek Dumre, Ananya Neupane, Poonam Koirala, Anjali Singh

Abstract:

The availability of clean and reliable groundwater is essential for sustaining human and environmental health. Groundwater is a crucial resource that contributes significantly to the total annual water supply. However, over-exploitation has depleted groundwater availability considerably and led to some land subsidence. Determining groundwater potential zones is therefore vital for protecting water quality and managing groundwater systems. In this study, groundwater potential zones were delineated with the aid of Geographic Information System (GIS) techniques, and a standard methodology integrating GIS with the Analytic Hierarchy Process (AHP) was proposed. To delineate the prospective groundwater zones, data layers were generated for parameters such as geology, slope, soil, temperature, rainfall, drainage density, and lineament density. Identifying and mapping potential groundwater zones nevertheless remains challenging due to the complex and dynamic nature of aquifer systems. A weighted overlay was then performed in ArcGIS, with appropriate ranks assigned to the classes of each parameter, and multi-criteria decision analysis (MCDA) was applied to weigh and prioritize the parameters based on their relative impact on groundwater potential. Three groundwater potential zones were identified: low, moderate, and high. Our analysis showed that the central and lower parts of the Bagmati River Basin have the highest potential, i.e., 7.20% of the total area, whereas the northern and eastern parts have lower potential. The identified potential zones can be used to guide future groundwater exploration and management strategies in the region.
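The AHP step described above can be sketched as follows: a reciprocal pairwise comparison matrix on Saaty's 1-9 scale is reduced to priority weights (here by the normalized-column-average approximation to the principal eigenvector), which then weight the ranked thematic layers in the overlay. The three-criterion matrix below is purely illustrative, not the study's actual comparisons:

```python
def ahp_weights(pairwise):
    """Approximate AHP priority weights by the normalized-column-average
    method; `pairwise` is a reciprocal comparison matrix (Saaty scale)."""
    n = len(pairwise)
    col_sums = [sum(row[j] for row in pairwise) for j in range(n)]
    norm = [[pairwise[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(norm[i]) / n for i in range(n)]

# Hypothetical 3-criterion comparison: geology vs slope vs drainage density
m = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
w = ahp_weights(m)  # weights sum to 1; geology dominates

def overlay_score(layer_ranks):
    """Weighted-overlay score for one raster cell, given per-layer ranks."""
    return sum(wi * ri for wi, ri in zip(w, layer_ranks))

cell = overlay_score([3, 2, 1])  # ranks 1-3, as in a three-class overlay
```

In the study the same weighting is applied cell by cell across seven raster layers in ArcGIS, and the resulting scores are binned into the low/moderate/high potential classes.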

Keywords: groundwater, geographic information system, analytic hierarchy processes, multi-criteria decision analysis, Bagmati

Procedia PDF Downloads 105
1006 Comparison Study of Capital Protection Risk Management Strategies: Constant Proportion Portfolio Insurance versus Volatility Target Based Investment Strategy with a Guarantee

Authors: Olga Biedova, Victoria Steblovskaya, Kai Wallbaum

Abstract:

In the current capital market environment, investors constantly face the challenge of finding a successful and stable investment mechanism. Highly volatile equity markets and extremely low bond returns create demand for sophisticated yet reliable risk management strategies, and investors are looking for solutions that efficiently protect their investments. This study compares a classic Constant Proportion Portfolio Insurance (CPPI) strategy to Volatility Target portfolio insurance (VTPI). VTPI is an extension of the well-known Option-Based Portfolio Insurance (OBPI) to the case where the embedded option is linked not to a pure risky asset, such as the S&P 500, but to a Volatility Target (VolTarget) portfolio. The VolTarget strategy is a recently emerged, rule-based dynamic asset allocation mechanism in which the portfolio's volatility is kept under control. As a result, a typical VTPI strategy allows higher participation rates in the market due to reduced embedded option prices. In addition, controlled volatility levels eliminate the volatility spread in option pricing, one of the frequently cited reasons why OBPI strategies fall behind CPPI. The strategies are compared within the framework of stochastic dominance theory, based on numerical simulations rather than on the restrictive assumption of Black-Scholes-type dynamics of the underlying asset. An extended comparative quantitative analysis of the performance of these investment strategies in various market scenarios, and over a range of input parameter values, is presented.
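The classic CPPI rule referenced above allocates multiplier × (portfolio value − floor) to the risky asset at each rebalancing date, keeping the rest in the safe asset. A minimal sketch; the multiplier, floor, and drawdown scenario are illustrative, not the paper's simulation settings:

```python
def cppi(returns, floor_frac=0.8, multiplier=4.0, rf=0.0):
    """Classic CPPI: each period, invest multiplier * cushion in the risky
    asset (capped at the portfolio value, floored at zero) and the rest in
    the safe asset. `returns` are per-period risky-asset returns; the floor
    accrues at the safe rate `rf`. Starts from a portfolio value of 1."""
    v, floor = 1.0, floor_frac
    for r in returns:
        cushion = max(v - floor, 0.0)
        risky = min(multiplier * cushion, v)
        safe = v - risky
        v = risky * (1 + r) + safe * (1 + rf)
        floor *= 1 + rf
    return v

crash = [-0.05] * 30           # prolonged drawdown scenario
v_cppi = cppi(crash)           # decays toward, but stays above, the 0.8 floor
v_buyhold = 0.95 ** 30         # unprotected buy-and-hold for comparison
```

In continuous time this rule never breaches the floor; with discrete rebalancing a large single-period loss (worse than −1/multiplier) can pierce it, which is one source of the gap risk the stochastic-dominance comparison has to account for.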

Keywords: CPPI, portfolio insurance, stochastic dominance, volatility target

Procedia PDF Downloads 167
1005 Various Shaped ZnO and ZnO/Graphene Oxide Nanocomposites and Their Use in Water Splitting Reaction

Authors: Sundaram Chandrasekaran, Seung Hyun Hur

Abstract:

Exploring strategies for oxygen vacancy engineering under mild conditions and understanding the relationship between dislocations and photoelectrochemical (PEC) cell performance are challenging issues in designing high-performance PEC devices. It is therefore very important to understand how oxygen vacancies (VO) or other defect states affect the performance of the photocatalyst in photoelectric transfer. So far, it has been found that defects in nano- or microcrystals can affect PEC performance in two ways. First, an electron-hole pair produced at the interface of the photoelectrode and the electrolyte can recombine at defect centers under illumination, reducing PEC performance. On the other hand, defects can lead to higher light absorption in the longer-wavelength region and may act as energy centers for the water splitting reaction, improving PEC performance. Although the dislocation-driven growth of ZnO has been verified by full density functional theory (DFT) and local density approximation (LDA) calculations, further studies are required to correlate ZnO structures with PEC performance. Exploring hybrid structures composed of graphene oxide (GO) and ZnO nanostructures offers not only a vision of how complex structures form from simple starting materials but also tools to improve PEC performance by understanding the underlying mechanisms of mutual interaction. As there are few studies of ZnO growth with other materials, and the growth mechanism in those cases has not yet been clearly explored, it is very important to understand the fundamental growth process of nanomaterials with the specific materials involved, so that rational and controllable syntheses of efficient ZnO-based hybrid materials can be designed to prepare nanostructures with significant PEC performance.
Herein, we fabricated various ZnO nanostructures, such as hollow spheres, bucky bowls, nanorods, and triangles, investigated their pH-dependent growth mechanism, and correlated their PEC performance with their structure. In particular, the origin of the well-controlled dislocation-driven growth and the transformation mechanism of ZnO nanorods into triangles on the GO surface are discussed in detail. Surprisingly, the addition of GO during the synthesis process not only tunes the morphology of the ZnO nanocrystals but also creates more oxygen vacancies (oxygen defects) in the ZnO lattice, which suggests that the oxygen vacancies are created by a redox reaction between GO and ZnO in which surface oxygen is extracted from the ZnO surface by the functional groups of GO. On the basis of our experimental and theoretical analysis, the detailed mechanism of the formation of the specific structural shapes and of oxygen vacancies via dislocations, and its impact on PEC performance, is explored. In the water splitting measurements, the maximum photocurrent density of the GO-ZnO triangles was 1.517 mA/cm² vs. RHE (under ~360 nm UV light), with a high incident photon-to-current conversion efficiency (IPCE) of 10.41%, the highest among all samples fabricated in this study and one of the highest IPCE values reported so far for a GO-ZnO triangular-shaped photocatalyst.
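The reported IPCE follows from the standard conversion IPCE(%) = 1240·J/(λ·P)·100, with J in mA/cm², λ in nm, and P in mW/cm². The sketch below uses the paper's photocurrent and wavelength; the ~50 mW/cm² incident power density is an assumption chosen only so that the numbers reproduce the reported 10.41%:

```python
def ipce_percent(j_ma_cm2, wavelength_nm, p_mw_cm2):
    """Incident photon-to-current conversion efficiency:
    IPCE (%) = 1240 * J / (lambda * P) * 100, with J in mA/cm^2,
    lambda in nm, and incident power density P in mW/cm^2.
    (1240 eV*nm approximates hc/e.)"""
    return 1240.0 * j_ma_cm2 / (wavelength_nm * p_mw_cm2) * 100.0

# Paper's GO-ZnO triangle values (J = 1.517 mA/cm^2 at ~360 nm); the
# illumination power density is a back-calculated assumption, not reported
ipce = ipce_percent(1.517, 360.0, 50.2)  # ≈ 10.4 %
```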

Keywords: dislocation driven growth, zinc oxide, graphene oxide, water splitting

Procedia PDF Downloads 294
1004 A Protein-Wave Alignment Tool for Frequency Related Homologies Identification in Polypeptide Sequences

Authors: Victor Prevost, Solene Landerneau, Michel Duhamel, Joel Sternheimer, Olivier Gallet, Pedro Ferrandiz, Marwa Mokni

Abstract:

The search for homologous proteins is one of the ongoing challenges in biology and bioinformatics. Traditionally, a pair of proteins is thought to be homologous when they originate from the same ancestral protein. In such a case, their sequences share similarities, and considerable scientific research effort is spent investigating this question. On this basis, we propose the Protein-Wave Alignment Tool ("P-WAT"), developed within the framework of the France Relance 2030 plan. Our work takes into consideration the mass-related wave aspect of protein biosynthesis by associating a specific frequency with each amino acid according to its mass. Amino acids are then regrouped within their mass category. This way, our algorithm produces specific alignments in addition to those obtained with a common amino acid coding system. For this purpose, we developed the original "P-WAT" algorithm, able to address large protein databases with different attributes such as species, protein names, etc., which allow us to align user requests with a set of specific protein sequences. The primary intent of this algorithm is to achieve efficient alignments, in this specific conceptual frame, by minimizing execution costs and information loss. Our algorithm identifies sequence similarities by searching for matches of sub-sequences of different sizes, referred to as primers. It relies on Boolean operations upon a dot plot matrix to identify primer amino acids common to both proteins which are likely to be part of a significant alignment of peptides. From those primers, dynamic programming-like traceback operations generate alignments and alignment scores based on an adjusted PAM250 matrix.
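As a toy illustration of the dot-plot stage described above, the sketch below groups residues into hypothetical mass classes (the actual P-WAT frequency/mass grouping is not given in the abstract) and extracts diagonal runs of matches as candidate primers:

```python
# Sketch: find shared "primer" sub-sequences of two peptides via a Boolean
# dot-plot matrix. The mass-class grouping below is an illustrative
# assumption, not the P-WAT tool's actual classes.
MASS_CLASS = {
    "G": 0, "A": 0, "S": 0,                          # lighter residues (assumed)
    "P": 1, "V": 1, "T": 1, "C": 1, "L": 1, "I": 1, "N": 1, "D": 1,
    "Q": 2, "K": 2, "E": 2, "M": 2, "H": 2, "F": 2,
    "R": 3, "Y": 3, "W": 3,                          # heavier residues (assumed)
}

def dot_plot(seq_a, seq_b):
    """Boolean matrix: True where residues fall in the same mass class."""
    return [[MASS_CLASS[a] == MASS_CLASS[b] for b in seq_b] for a in seq_a]

def primers(seq_a, seq_b, k=3):
    """Diagonal runs of >= k matches are candidate alignment primers."""
    m = dot_plot(seq_a, seq_b)
    hits = []
    for i in range(len(seq_a) - k + 1):
        for j in range(len(seq_b) - k + 1):
            if all(m[i + d][j + d] for d in range(k)):
                hits.append((i, j, seq_a[i:i + k]))
    return hits

print(primers("GASVTC", "TASVTW", k=3))  # two short primers found
```

A real implementation would then run the PAM250-scored traceback from each primer; the sketch stops at primer detection.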

Keywords: protein, alignment, homologous, Genodic

Procedia PDF Downloads 113
1003 Blade-Coating Deposition of Semiconducting Polymer Thin Films: Light-To-Heat Converters

Authors: M. Lehtihet, S. Rosado, C. Pradère, J. Leng

Abstract:

Poly(3,4-ethylene dioxythiophene) polystyrene sulfonate (PEDOT:PSS) is a polymer mixture well known for its semiconducting properties and widely used in the coating industry, for its visible transparency and high electronic conductivity (up to 4600 S/cm), as a transparent non-metallic electrode and in organic light-emitting diodes (OLED). It also possesses strong absorption in the near-infrared (NIR) range (λ between 900 nm and 2.5 µm). In the present work, we take advantage of this absorption to explore its potential use as a transparent light-to-heat converter. PEDOT:PSS aqueous dispersions are deposited onto a glass substrate using a blade-coating technique in order to produce uniform coatings with controlled thicknesses ranging from ≈ 400 nm to 2 µm. The blade-coating technique gives good control of deposit thickness and uniformity by tuning several experimental conditions (blade velocity, evaporation rate, temperature, etc.). This liquid coating technique is a well-known, inexpensive way to produce thin-film coatings on various substrates. For coatings on glass substrates intended for solar insulation applications, the ideal coating would be made of a material able to transmit the whole visible range while reflecting the NIR range perfectly, but materials approaching these properties (for example, titanium dioxide nanoparticles) still show unsatisfactory opacity in the visible. NIR-absorbing thin films are a more realistic alternative for such an application. Under solar illumination, PEDOT:PSS thin films heat up due to absorption of NIR light and thus act as planar heaters while maintaining good transparency in the visible range. Whereas they screen some NIR radiation, they also generate heat, which is conducted into the substrate and re-emitted by thermal emission in every direction.
In order to quantify the heating power of these coatings, a sample (coating on glass) is placed in a black enclosure and illuminated with a solar simulator, a lamp emitting calibrated radiation very similar to the solar spectrum. The temperature of the rear face of the substrate is measured in real time using thermocouples, and a black-painted Peltier sensor measures the total entering flux (sum of transmitted and re-emitted fluxes). The heating power density of the thin films is estimated from a model of the thin film/glass substrate system, and we estimate the Solar Heat Gain Coefficient (SHGC) to quantify the light-to-heat conversion efficiency of such systems. Eventually, the effect of additives such as dimethyl sulfoxide (DMSO) or optical scatterers (particles) on performance is also studied, as the former can drastically alter the IR absorption properties of PEDOT:PSS and the latter can increase the apparent optical path of light within the thin-film material.
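As a rough illustration of the balance involved, a minimal SHGC estimate can be computed from measured fluxes. The formula below is the standard definition (directly transmitted flux plus the inward-flowing share of the absorbed flux, over the incident flux); the numerical values and the inward fraction are illustrative assumptions, not measurements from the study:

```python
# Minimal SHGC sketch from the fluxes the abstract describes.
# All fluxes in W/m^2; inward_fraction is the share of absorbed energy
# re-emitted toward the interior (an assumed, illustrative value).
def shgc(transmitted, absorbed, incident, inward_fraction=0.5):
    """SHGC = (transmitted + inward-flowing absorbed fraction) / incident."""
    return (transmitted + inward_fraction * absorbed) / incident

# e.g. 1000 W/m^2 incident, 600 transmitted, 250 absorbed in film + glass
print(round(shgc(600.0, 250.0, 1000.0), 3))  # -> 0.725
```

In the actual experiment the transmitted and re-emitted fluxes are lumped into the Peltier-sensor reading, so the split above is a modelling choice rather than a direct measurement.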

Keywords: PEDOT: PSS, blade-coating, heat, thin-film, Solar spectrum

Procedia PDF Downloads 162
1002 Use of the Occupational Repetitive Action Method in Different Productive Sectors: A Literature Review 2007-2018

Authors: Aanh Eduardo Dimate-Garcia, Diana Carolina Rodriguez-Romero, Edna Yuliana Gonzalez Rincon, Diana Marcela Pardo Lopez, Yessica Garibello Cubillos

Abstract:

Musculoskeletal disorders (MD) are the new epidemic of chronic diseases; they are multifactorial and affect the different productive sectors. Although there are multiple instruments to evaluate static and dynamic load, the Occupational Repetitive Action (OCRA) method seems to be an attractive option. Objective: to analyze the use of the OCRA method and the prevalence of MD in workers of various productive sectors according to the literature (2007-2018). Materials and Methods: a literature review (following the PRISMA statement) of studies aimed at assessing the level of biomechanical risk (OCRA) and the prevalence of MD was conducted in the Scielo, Science Direct, Scopus, ProQuest, Gale, PubMed, Lilacs and Ebsco databases; 7 studies met the selection criteria, the majority quantitative (cross-sectional). Results: in gardening and flower growing, this review found that 79% of task-related conditions impose physical demands and involve repetitive movements. In footwear production, a high occurrence of MD was found in the upper and lower back and in the upper and lower extremities, produced by the frequency of the activities carried out. Likewise, there was evidence of 'very high risk' of developing MD (salmon industry) and a medium OCRA index for repetitive movements that require special care (U-assembly line). Conclusions: the review showed the limited use of the OCRA method for the detection of MD in workers from different sectors; this method can be used for the detection of biomechanical risk and the appearance of MD.

Keywords: checklist, cumulative trauma disorders, musculoskeletal diseases, repetitive movements

Procedia PDF Downloads 181
1001 Long-Term Durability of Roller-Compacted Concrete Pavement

Authors: Jun Hee Lee, Young Kyu Kim, Seong Jae Hong, Chamroeun Chhorn, Seung Woo Lee

Abstract:

Roller-compacted concrete pavement (RCCP), an environmentally friendly pavement whose load-carrying capacity benefits from both hydration and the aggregate interlock produced by roller compaction, demonstrates superb structural performance for a relatively small water and cement content. Even though excellent structural performance can be secured, it is still necessary to investigate roller-compacted concrete (RCC) under environmental loading and its long-term durability under critical conditions. In order to secure long-term durability, an appropriate internal air-void structure is required for this concrete. In this study, a method for improving the long-term durability of RCCP is suggested by analyzing the internal air-void structure and the corresponding durability of RCC. The method involves measurements of air content, air voids, and air-void spacing factors in RCC as the type and dosage of air-entraining agent are varied. The tests were conducted according to the criteria in ASTM C 457, 672, and KS F 2456. It was found that the freezing-thawing and scaling resistances of RCC without any chemical admixture were quite low. Interestingly, an improvement of freezing-thawing and scaling resistances was observed for RCC with an appropriate air-entraining (AE) agent content; the relative dynamic elastic modulus was found to be more than 80% for those mixtures. In the RCC mixtures with AE agent, a large amount of air was distributed within a range of 2% to 3%, and an air-void spacing factor between 200 and 300 μm (close to the 250 μm recommended by PCA) was secured. The long-term durability of RCC has a direct relationship with the air-void spacing factor, and thus it can only be secured by ensuring the air-void spacing factor through the inclusion of the AE agent in the mixture.
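The "more than 80%" acceptance criterion quoted above refers to the standard resonance-frequency definition of the relative dynamic elastic modulus used in freeze-thaw testing; a minimal sketch (the frequencies below are illustrative, not data from the study):

```python
# Relative dynamic elastic modulus from transverse resonance frequencies,
# as used in standard freeze-thaw durability tests:
#   P = (n_c / n_0)^2 * 100   [percent]
# where n_0 is the initial fundamental frequency and n_c the frequency
# after c freeze-thaw cycles. Example frequencies are illustrative.
def relative_dynamic_modulus(freq_after_hz, freq_initial_hz):
    """Return the relative dynamic elastic modulus in percent."""
    return (freq_after_hz / freq_initial_hz) ** 2 * 100.0

p = relative_dynamic_modulus(1900.0, 2000.0)
print(round(p, 2), p >= 80.0)  # 90.25 True -> this mixture would pass
```

A mixture whose frequency drops below about 89% of its initial value (P < 80%) would fail the durability criterion the abstract applies.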

Keywords: durability, RCCP, air spacing factor, surface scaling resistance test, freezing and thawing resistance test

Procedia PDF Downloads 253
1000 Establishment of Precision System for Underground Facilities Based on 3D Absolute Positioning Technology

Authors: Yonggu Jang, Jisong Ryu, Woosik Lee

Abstract:

The study aims to address the limitations of existing underground facility exploration equipment in terms of exploration depth range, relative depth measurement, data processing time, and human-centered interpretation of ground penetrating radar (GPR) images. It proposes the use of 3D absolute positioning technology to build a precision underground facility exploration system that can accurately survey up to a depth of 5 m and measure the 3D absolute location of underground facilities. Both software and hardware technologies were developed for this system. The software technologies include absolute positioning, ground surface location synchronization for the GPR exploration equipment, AI interpretation of GPR exploration images, and composite data processing based on integrated underground space maps. The hardware systems include a vehicle-type exploration system and a cart-type exploration system. Data were collected using the developed system, the GPR exploration images were analyzed using AI technology, and the three-dimensional location information of the explored underground facilities was compared to the integrated underground space map. The study successfully developed a precision underground facility exploration system based on 3D absolute positioning technology that meets these goals.
The system comprises software technologies that build a precise 3D DEM, synchronize the GPR sensor's 3D ground surface location coordinates, automatically detect underground facility information in GPR exploration images, and improve accuracy through comparative analysis of the three-dimensional location information, together with hardware systems including a vehicle-type exploration system and a cart-type exploration system. The study's findings and technological advancements are essential for underground safety management in Korea. The proposed precision exploration system contributes significantly to establishing precise location information for underground facilities, which is crucial for underground safety management, and improves the accuracy and efficiency of exploration. In summary, the study addressed the limitations of existing equipment for exploring underground facilities, proposed a precision exploration system based on 3D absolute positioning technology, developed the corresponding software and hardware, and contributed to underground safety management by providing accurate and efficient exploration of underground facilities up to a depth of 5 m.

Keywords: 3D absolute positioning, AI interpretation of GPR exploration images, complex data processing, integrated underground space maps, precision exploration system for underground facilities

Procedia PDF Downloads 62
999 Experimental Field for the Study of Soil-Atmosphere Interaction in Soft Soils

Authors: Andres Mejia-Ortiz, Catalina Lozada, German R. Santos, Rafael Angulo-Jaramillo, Bernardo Caicedo

Abstract:

The interaction between atmospheric variables and soil properties is a determining factor when evaluating the flow of water through the soil. This interaction directly determines the behavior of the soil and greatly influences the changes that occur in it. Atmospheric variations such as changes in relative humidity, air temperature, wind velocity and precipitation are the external variables with the greatest incidence on the changes generated in the subsoil through descending and ascending water flow. These environmental variations are of major importance in the study of the soil because the humidity and temperature conditions at the soil surface depend on them. In addition, these variations control the thickness of the unsaturated zone and the position of the water table with respect to the surface. However, understanding the relationship between the atmosphere and the soil is complex, mainly because of the difficulty involved in estimating the changes that occur in the soil from climate changes, since this is a coupled process involving both mass and heat transfer. In this research, an experimental field was implemented to study in situ the interaction between the atmosphere and the soft soils of the city of Bogota, Colombia. The soil under study consists of a 60 cm surface layer composed of two silts of similar characteristics and a deep soft clay deposit located under the silty material. It should be noted that the vegetal layer and organic matter were removed to avoid the evapotranspiration phenomenon. Instrumentation was carried out in situ through a field arrangement of many measuring devices, such as soil moisture sensors, thermocouples, relative humidity sensors and a wind velocity sensor, among others, which register the variations of both the atmospheric variables and the properties of the soil.
With the information collected through field monitoring, water balances were made using the Hydrus-1D software to determine the flow conditions that developed in the soil during the study. The moisture profile for different periods and time intervals was also determined from the balance supplied by Hydrus-1D; this profile was validated by experimental measurements. As a boundary condition, the actual evaporation rate was included using semi-empirical equations proposed by different authors. In this study, a descending flow governed by the infiltration capacity of the soil was obtained for the rainy periods. During dry periods, on the other hand, an increase in the actual evaporation of the soil induces an upward flow of water, increasing suction due to the decrease in moisture content. In addition, cracks developed, accelerating the evaporation process. This work concerns the study of soil-atmosphere interaction through an experimental field, a very useful tool since it considers all the factors and parameters of the soil in its natural state together with real values of the different environmental conditions.

Keywords: field monitoring, soil-atmosphere, soft soils, soil-water balance

Procedia PDF Downloads 137
998 A Multi-Objective Decision Making Model for Biodiversity Conservation and Planning: Exploring the Concept of Interdependency

Authors: M. Mohan, J. P. Roise, G. P. Catts

Abstract:

Despite living in an era where conservation zones are de facto the central element in any sustainable wildlife management strategy, we still find ourselves grappling with several Pareto-optimal situations regarding resource allocation and area distribution. In this paper, a multi-objective decision making (MODM) model is presented to answer the question of whether or not we can establish mutual relationships between these contradicting objectives. For our study, we considered a Red-cockaded woodpecker (Picoides borealis) habitat conservation scenario in the coastal plain of North Carolina, USA. The Red-cockaded woodpecker (RCW) is a non-migratory, territorial bird that excavates cavities in living pine trees for roosting and nesting. RCW groups nest in an aggregation of cavity trees called a 'cluster', and in our model we use the number of clusters to be established as a measure of the size of conservation zone required. The case study is formulated as a linear programming problem, and the objective function optimizes the Red-cockaded woodpecker clusters, carbon retention rate, biofuel, public safety and Net Present Value (NPV) of the forest. We studied the variation of the individual objectives with respect to the amount of area available and plotted a two-dimensional dynamic graph after establishing interrelations between the objectives. We further explore the concept of interdependency by integrating the MODM model with GIS, deriving a raster file representing carbon distribution from the existing forest dataset. Model results demonstrate the applicability of interdependency from both linear and spatial perspectives, and suggest that this approach holds immense potential for enhancing environmental investment decision making in the future.
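To make the Pareto-optimality issue above concrete, the sketch below screens a handful of hypothetical area-allocation options for non-dominated (Pareto-optimal) solutions; the objective values are illustrative, not taken from the RCW case study:

```python
# Toy Pareto screening over candidate allocations, maximising each objective.
def dominates(a, b):
    """a dominates b if a >= b in every objective and > b in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(options):
    """Keep the options not dominated by any other option."""
    return [o for o in options
            if not any(dominates(p, o) for p in options if p is not o)]

# Hypothetical (RCW clusters, carbon retention, NPV) per allocation
plans = [(10, 5.0, 2.0), (8, 6.0, 3.0), (10, 4.0, 1.0), (6, 6.0, 2.5)]
print(pareto_front(plans))  # the first two plans remain; the rest are dominated
```

The MODM model in the paper goes further by solving a linear program over the objectives; the screening step here only shows why no single allocation can be "best" in all objectives at once.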

Keywords: conservation, interdependency, multi-objective decision making, red-cockaded woodpecker

Procedia PDF Downloads 337
997 An EBSD Investigation of Ti-6Al-4Nb Alloy Processed by Plan Strain Compression Test

Authors: Anna Jastrzebska, K. S. Suresh, T. Kitashima, Y. Yamabe-Mitarai, Z. Pakiela

Abstract:

Near-α titanium alloys are important materials for aerospace applications, especially high-temperature applications such as jet engines. The mechanical properties of Ti alloys strongly depend on their processing route, so it is very important to understand microstructure changes under different processing. In our previous study, Nb was found to improve the oxidation resistance of Ti alloys. In this study, the microstructure evolution of a Ti-6Al-4Nb (wt%) alloy was investigated after plain strain compression tests at hot working temperatures in the α and β phase regions. High-resolution EBSD was successfully used for precise phase and texture characterization of this alloy. A 1.1 kg Ti-6Al-4Nb ingot was prepared using cold crucible levitation melting. The ingot was subsequently homogenized at 1050 °C for 1 h, followed by air cooling. Plate-like specimens measuring 10×20×50 mm³ were cut from the ingot by electrical discharge machining (EDM). The plain strain compression test, using a 10 × 35 mm anvil, was performed at three strain rates (0.1 s⁻¹, 1 s⁻¹ and 10 s⁻¹) at 700 °C and 1050 °C to obtain 75% deformation. The microstructure was investigated by scanning electron microscopy (SEM) equipped with an electron backscatter diffraction (EBSD) detector. The α/β phase ratio and phase morphology, as well as the crystallographic texture, subgrain size, misorientation angles and misorientation gradients corresponding to each phase, were determined over the middle and edge areas of the samples. The deformation mechanism at each working temperature is discussed, and the evolution of texture with strain rate is investigated. The microstructure obtained by the plain strain compression test was heterogeneous, with a wide range of grain sizes, because deformation and dynamic recrystallization occurred during deformation at temperatures in the α and β phase regions; it was strongly influenced by strain rate.

Keywords: EBSD, plain strain compression test, Ti alloys

Procedia PDF Downloads 382
996 Collapse Analysis of Planar Composite Frame under Impact Loads

Authors: Lian Song, Shao-Bo Kang, Bo Yang

Abstract:

Concrete-filled steel tubular (CFST) structures have been widely used in construction practice due to their superior performance under various loading conditions. However, limited studies are available on this type of structure subjected to impact or explosive loads, and current methods in the relevant design codes are not specific to preventing progressive collapse of CFST structures. Therefore, it is necessary to carry out numerical simulations of CFST structures under impact loads. In this study, finite element analyses are conducted on the mechanical behaviour of composite frames, composed of CFST columns and steel beams, subjected to impact loading. CFST columns are simulated using the finite element software ABAQUS. The model is verified against test results of solid and hollow CFST columns under lateral impacts, and reasonably good agreement is obtained. Thereafter, a multi-scale finite element modelling technique is developed to evaluate the behaviour of a five-storey, three-span planar composite frame. The alternate path method and the direct simulation method are adopted to obtain the dynamic response of the frame when a supporting column is removed suddenly. In the former method, the reason for column removal is not considered and only the remaining frame is simulated, whereas in the latter, a specific impact load is applied to the frame to account for column failure induced by vehicle impact. Comparisons are made between these two methods in terms of displacement history and internal force redistribution, and design recommendations are provided for CFST structures under impact loads.

Keywords: planar composite frame, collapse analysis, impact loading, direct simulation method, alternate path method

Procedia PDF Downloads 519
995 Land Use Change Detection Using Satellite Images for Najran City, Kingdom of Saudi Arabia (KSA)

Authors: Ismail Elkhrachy

Abstract:

Determination of land use change is an important component of regional planning, for applications ranging from urban fringe change detection to monitoring land use change; these data are very useful for natural resources management. The technologies and methods of change detection have also evolved dramatically during the past 20 years, and it is well recognized that change detection has become the best method for studying dynamic land use change from multi-temporal remotely sensed data. The objective of this paper is to assess, evaluate and monitor land use change surrounding the area of Najran city, Kingdom of Saudi Arabia (KSA), using Landsat (June 23, 2009) and ETM+ (June 21, 2014) images. The post-classification change detection technique was applied: two subset images of Najran city are compared on a pixel-by-pixel basis using the post-classification comparison method, and the from-to change matrix is produced, from which the land use change information is obtained. Three classes (urban, bare land and agricultural land) were obtained by an unsupervised classification method using Erdas Imagine and ArcGIS software. An accuracy assessment of the classification was performed before calculating change detection for the study area; the obtained accuracy is between 61% and 87% for all the classes. Change detection analysis shows that between 2009 and 2014 the urban area grew rapidly, increasing by 73.2%, while the agricultural area decreased by 10.5% and the barren area was reduced by 7%. Quantitatively, the urban class had 58.2 km² unchanged, gained 70.3 km² and lost 16 km²; the bare land class had 586.4 km² unchanged, gained 53.2 km² and lost 101.5 km²; and the agricultural class had 20.2 km² unchanged, gained 31.2 km² and lost 37.2 km².
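The pixel-by-pixel from-to comparison described above can be sketched as a simple transition count over co-registered class maps; the toy rasters below are illustrative, not the Najran data:

```python
# Sketch of a post-classification "from-to" change matrix: each pixel's
# class at time 1 is paired with its class at time 2 and the pairs are
# counted. Class maps are flattened toy rasters, not real imagery.
from collections import Counter

def change_matrix(map_t1, map_t2):
    """Count (from_class, to_class) transitions over co-registered pixels."""
    return Counter(zip(map_t1, map_t2))

t1 = ["urban", "bare", "bare", "agric", "bare", "urban"]
t2 = ["urban", "urban", "bare", "bare", "bare", "urban"]
m = change_matrix(t1, t2)

unchanged_urban = m[("urban", "urban")]
gained_urban = sum(v for (f, t), v in m.items() if t == "urban" and f != "urban")
print(unchanged_urban, gained_urban)  # -> 2 1
```

Multiplying each count by the pixel area gives the unchanged/gained/lost areas in km² that the abstract reports per class.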

Keywords: land use, remote sensing, change detection, satellite images, image classification

Procedia PDF Downloads 525
994 Clinical Parameters Response to Low Level Laser Versus Monochromatic Near Infrared Photo Energy in Diabetic Patient with Peripheral Neuropathy

Authors: Abeer Ahmed Abdehameed

Abstract:

Background: Diabetic sensorimotor polyneuropathy (DSP) is one of the most common microvascular complications of type 2 diabetes. Loss of sensation is thought to contribute to a lack of static and dynamic stability and an increased risk of falling. Purpose: The purpose of this study was to compare the effects of low-level laser (LLL) and monochromatic near-infrared photo energy (MIRE) on pain, cutaneous sensation, static stability and an index of lower limb blood flow in patients with diabetic peripheral neuropathy. Methods: Forty subjects with diabetic peripheral neuropathy were recruited for the study and divided into two groups: the MIRE group (20 patients) and the LLL group (20 patients). All patients underwent various physical assessment procedures, including pain, cutaneous sensation, Doppler flowmeter and static stability assessments. The baseline measurements were followed by treatment sessions conducted twice a week for 6 successive weeks. Results: The statistical analysis of the data revealed significant improvement of pain in both groups, with significant improvement in cutaneous sensation and static balance in the MIRE group compared to the LLL group; on the other hand, the results showed no significant differences in lower limb blood flow in either group. Conclusion: Low-level laser and monochromatic near-infrared therapy can improve painful symptoms in patients with diabetic neuropathy. In addition, MIRE is useful for improving cutaneous sensation and static stability in these patients.

Keywords: diabetic neuropathy, doppler flow meter, low level laser, monochromatic near infrared photo energy

Procedia PDF Downloads 314
993 Comparison of Various Landfill Ground Improvement Techniques for Redevelopment of Closed Landfills to Cater Transport Infrastructure

Authors: Michael D. Vinod, Hadi Khabbaz

Abstract:

Construction of infrastructure above or adjacent to landfills is becoming more common to capitalize on the limited space available within urban areas. However, development above landfills is a challenging task due to large voids, the presence of organic matter, the heterogeneous nature of waste and ambiguity surrounding landfill settlement prediction. Prior to construction of infrastructure above landfills, ground improvement techniques are employed to improve the geotechnical properties of landfill material. Although ground improvement techniques have little impact on long-term biodegradation and creep-related landfill settlement, they have shown notable short-term success with a variety of techniques, including methods for verifying the level of effectiveness of ground improvement. This paper provides geotechnical and landfill engineers a guideline for the selection of landfill ground improvement techniques and their suitability to project-specific sites. Ground improvement methods assessed and compared in this paper include concrete injected columns (CIC), dynamic compaction, rapid impact compaction (RIC), preloading, high energy impact compaction (HEIC), vibro compaction, vibro replacement, chemical stabilization and the inclusion of geosynthetics such as geocells. For each ground improvement technique, a summary of the existing theory, benefits, limitations, suitable modern monitoring methods, the applicability of the technique to landfills and supporting case studies are provided. The authors highlight the importance of implementing cost-effective monitoring techniques to allow observation and necessary remediation of the subsidence effects associated with long-term landfill settlement. These ground improvement techniques are primarily intended for construction above closed landfills to cater for transport infrastructure loading.

Keywords: closed landfills, ground improvement, monitoring, settlement, transport infrastructure

Procedia PDF Downloads 224
992 Design and Development of a Lead-Free BiFeO₃-BaTiO₃ Quenched Ceramics for High Piezoelectric Strain Performance

Authors: Muhammad Habib, Lin Tang, Guoliang Xue, Attaur Rahman, Myong-Ho Kim, Soonil Lee, Xuefan Zhou, Yan Zhang, Dou Zhang

Abstract:

Designing high-performance, lead-free ceramics has become a cutting-edge research topic due to growing concerns about the toxic nature of lead-based materials. In this work, a convenient strategy of compositional design and domain engineering is applied to lead-free BiFeO₃-BaTiO₃ ceramics, which provides a flexible polarization free-energy profile for domain switching. Here, an enhanced dynamic piezoelectric constant (d33* = 772 pm/V) and good thermal stability (a d33* variation of 26% over the temperature range 20-180 °C) are achieved simultaneously, with a high Curie temperature (TC) of 432 °C. This high piezoelectric strain performance is collectively attributed to multiple effects, such as thermal quenching, suppression of defect charges by donor doping, chemically induced local structure heterogeneity, and electric field-induced phase transition. Furthermore, the addition of BT content decreased octahedral tilting, reduced the anisotropy for domain switching and increased the tetragonality (cₜ/aₜ), providing a wider polar length for B-site cation displacement and leading to high piezoelectric strain performance. Atomic-resolution transmission electron microscopy and piezoelectric force microscopy, combined with X-ray diffraction results, strongly support this origin of the high piezoelectricity. The high and temperature-stable piezoelectric strain response of this work is superior to those of other lead-free ceramics. The synergistic approach of compositional design and the concept presented here for the origin of the high strain response provide a paradigm for the development of materials for high-temperature piezoelectric actuator applications.

Keywords: piezoelectric, BiFeO3-BaTiO3, quenching, temperature-insensitive

Procedia PDF Downloads 83
991 Statistical Analysis to Compare between Smart City and Traditional Housing

Authors: Taha Anjamrooz, Sareh Rajabi, Ayman Alzaatreh

Abstract:

Smart cities are playing important roles in real life. Integration and automation between different features of modern cities and information technologies improve smart city efficiency, energy management, human and equipment resource management, quality of life and utilization of resources for the customers. One of the difficulties on this path is the use of, and the interfaces and links between, software, hardware and other IT technologies to develop and optimize processes in various business fields such as construction, supply chain management and transportation, in parallel with cost-effectiveness and resource-reduction impacts. Smart cities are also intended to play a vital role in offering a sustainable and efficient model for smart houses while mitigating environmental and ecological matters. Energy management is one of the most important matters for smart houses in smart cities and communities, because of the sensitivity of energy systems, the need to reduce energy wastage and the need to maximize utilization of the required energy. In particular, the consumption of energy in smart houses is important and considerable for the economic balance and energy management of a smart city, as it can yield significant increases in energy saving and reductions in energy wastage. This research paper develops the features and concept of the smart city in terms of overall efficiency through various effective variables. The selected variables and observations are analyzed through data analysis processes to demonstrate the efficiency of the smart city and compare the effectiveness of each variable. Ten variables are chosen in this study to improve the overall efficiency of the smart city by increasing the effectiveness of smart houses using an automated solar photovoltaic system, an RFID system, a smart meter and other major elements, interfacing between software and hardware devices as well as IT technologies.
Secondly to enhance aspect of energy management by energy-saving within smart house through efficient variables. The main objective of smart city and smart houses is to reproduce energy and increase its efficiency through selected variables with a comfortable and harmless atmosphere for the customers within a smart city in combination of control over the energy consumption in smart house using developed IT technologies. Initially the comparison between traditional housing and smart city samples is conducted to indicate more efficient system. Moreover, the main variables involved in measuring overall efficiency of system are analyzed through various processes to identify and prioritize the variables in accordance to their influence over the model. The result analysis of this model can be used as comparison and benchmarking with traditional life style to demonstrate the privileges of smart cities. Furthermore, due to expensive and expected shortage of natural resources in near future, insufficient and developed research study in the region, and available potential due to climate and governmental vision, the result and analysis of this study can be used as key indicator to select most effective variables or devices during construction phase and design
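The two-sample comparison and variable prioritization described above can be sketched as follows. This is a minimal illustration under assumed inputs: the energy-use samples and variable names are hypothetical, and Welch's t statistic stands in for whichever test the authors actually applied.

```python
import numpy as np

def welch_t(a: np.ndarray, b: np.ndarray) -> float:
    """Welch's t statistic for two independent samples with unequal variances."""
    var_a = a.var(ddof=1) / len(a)
    var_b = b.var(ddof=1) / len(b)
    return float((a.mean() - b.mean()) / np.sqrt(var_a + var_b))

def rank_variables(data: dict, efficiency: np.ndarray) -> list:
    """Order candidate variables by |Pearson correlation| with overall efficiency."""
    strength = {name: abs(float(np.corrcoef(x, efficiency)[0, 1]))
                for name, x in data.items()}
    return sorted(strength, key=strength.get, reverse=True)

# Hypothetical monthly energy use (kWh) for smart vs. traditional houses.
smart = np.array([310.0, 295.0, 328.0, 301.0, 315.0])
traditional = np.array([420.0, 398.0, 441.0, 415.0, 407.0])
t = welch_t(smart, traditional)  # negative t: smart houses use less energy
```

A ranking such as `rank_variables({"pv_output": pv, "rfid_usage": rfid, ...}, efficiency)` would then prioritize the ten variables by their influence, as the abstract describes.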

Keywords: smart city, traditional housing, RFID, photovoltaic system, energy efficiency, energy saving

Procedia PDF Downloads 113
990 Diversity and Inclusion in Focus: Cultivating a Sense of Belonging in Higher Education

Authors: Naziema Jappie

Abstract:

South Africa is a diverse nation, but one with many challenges. The fundamental changes in the political, economic, and educational domains in South Africa in the late 1990s affected the South African community profoundly. In higher education, experiences of discrimination and bias are detrimental to the sense of belonging of staff and students. It is therefore important to cultivate an appreciation of diversity and inclusion. To bridge common understandings with the reality of racial inequality, we must understand the ways in which senior and executive leadership at universities think about social justice issues relating to diversity and inclusion, and contextualize these within the current post-democracy landscape. Progress on social justice issues and initiatives in South African higher education has been slow. The focus is to highlight how, and to what extent, initiatives or practices around campus diversity and inclusion have been considered and made part of mainstream intellectual and academic conversations in South Africa. This involves an examination of the social and epistemological conditions of possibility for meaningful research and curriculum practices, staff and student recruitment, and student access and success in addressing the challenges posed by social diversity on campuses. Methodology: In this study, university senior and executive leadership were interviewed about their perceptions and advancement of social justice, and the study examines the buffering effects of diverse and inclusive peer interactions and institutional commitment on the relationship between discrimination and bias and the sense of belonging for staff and students at the institutions. The paper further explores diversity and inclusion initiatives at the three institutions using a Critical Race Theory approach in conjunction with a literature review on social justice, with a special focus on diversity and inclusion. Findings: This paper draws on research findings that demonstrate the need to address the social justice issues of diversity and inclusion in the South African higher education context, so that university leaders can live out their experiences and values as they work to develop students into accountable and responsible citizens. Documents were selected for review with the intent of illustrating how diversity and inclusion work being done across an institution can shape the experiences of previously disadvantaged persons at these institutions. The research has highlighted the need for institutional leaders to embody their own mission and vision as they frame social justice issues for the campus community. Finally, the paper provides recommendations to institutions for strengthening high-level diversity and inclusion programs and initiatives among staff, students, and administrators. The conclusion stresses the importance of addressing the historical and current policies and practices that either facilitate or negate the goals of social justice, encouraging these privileged institutions to create internal committees or task forces that focus on racial and ethnic disparities in the institution.

Keywords: diversity, higher education, inclusion, social justice

Procedia PDF Downloads 121
989 Symbolic Status of Architectural Identity: Example of Famagusta Walled City

Authors: Rafooneh Mokhtarshahi Sani

Abstract:

This study explores how the residents of a conserved urban area have used goods and ideas as resources to maintain an enviable architectural identity. Whereas conserved urban quarters are seen as role models for maintaining architectural identity, the article describes how their residents try to give a contemporary modern image to their homes. It is argued that despite the efforts of authorities and decision-makers to keep and preserve the traditional architectural identity in conserved urban areas, people have already moved on and have adjusted their homes to their preferred architectural taste. This conflict of interests has put the future of architectural identity in such places at risk. The thesis is that, on the one hand, a struggle over desirable symbolic status in identity formation is taking place, and, on the other, this struggle is continuously widening the gap between the real and the ideal identity in the built environment. The study then analytically connects the concept of symbolic status to current identity debates. As empirical research, this study uses systematic social and physical observation methods to describe and categorize the characteristics of settlements in the Walled City of Famagusta that symbolically represent modern houses. The Walled City is a cultural heritage site, most of whose urban context has been conserved. Traditional houses in this area demonstrate the identity of North Cyprus architecture. The conserved residential buildings, however, have either been abandoned or been altered by their users to present the ideal image of contemporary life. In the concluding section, the article discusses the differences between the symbolic status of people and that of authorities in defining a culturally valuable contemporary home, and raises the question of whether we can talk at all about architectural identity in terms of conserving the traditional style, and how we may do so given the dynamic nature of identity and the necessity of its acceptance by users.

Keywords: symbolic status, architectural identity, conservation, facades, Famagusta walled city

Procedia PDF Downloads 356
988 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between both the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more distinctive features, which challenge the autoencoder's compressive capacities, are inherently more memorable. The reduction in reconstruction error after fine-tuning, relative to the autoencoder pre-trained on ImageNet, also correlates negatively with memorability, suggesting that highly memorable images are harder to reconstruct, probably because they contain features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and marks a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
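The distinctiveness and correlation computations described above can be sketched in a few lines of NumPy. This is a minimal illustration under assumed inputs (latent vectors, reconstructions, and memorability scores as plain arrays), not the authors' implementation; per-image mean squared error stands in for the structural and perceptual losses the abstract mentions.

```python
import numpy as np

def distinctiveness(latents: np.ndarray) -> np.ndarray:
    """Euclidean distance from each latent vector to its nearest neighbor."""
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = np.sum(latents ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * latents @ latents.T
    np.fill_diagonal(d2, np.inf)  # exclude each image's distance to itself
    return np.sqrt(np.maximum(d2.min(axis=1), 0.0))

def reconstruction_error(images: np.ndarray, recons: np.ndarray) -> np.ndarray:
    """Per-image mean squared error between originals and reconstructions."""
    return np.mean((images - recons) ** 2, axis=tuple(range(1, images.ndim)))

def pearson(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation between two 1-D score vectors."""
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical usage: correlate each measure with memorability scores.
rng = np.random.default_rng(0)
latents = rng.normal(size=(100, 512))        # assumed latent dimensionality
memorability = rng.uniform(size=100)         # placeholder scores
r = pearson(distinctiveness(latents), memorability)
```

In the study's setting, `latents` would come from the autoencoder's bottleneck and `memorability` from the MemCat annotations; the abstract's finding corresponds to `r` being strongly positive for both error and distinctiveness.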

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 91