Search results for: scanning time

13280 Promotional Effects of Zn in Cu-Zn/Core-Shell Al-MCM-41 for Selective Catalytic Reduction of NO with NH3: Acidic Properties, NOx Adsorption Properties, and Nature of Copper

Authors: Thidarat Imyen, Paisan Kongkachuichay

Abstract:

Cu-Zn/core-shell Al-MCM-41 catalyst with various copper species, prepared by a combination of three methods (substitution, ion-exchange, and impregnation), was studied for the selective catalytic reduction (SCR) of NO with NH3 at 300 °C for 150 min. In order to investigate the effects of Zn introduction on the nature of the catalyst, Cu/core-shell Al-MCM-41 and Zn/core-shell Al-MCM-41 catalysts were also studied. The roles of the Zn promoter in the acidity and the NOx adsorption properties of the catalysts were investigated by in situ Fourier transform infrared spectroscopy (FTIR) of NH3 and NOx adsorption, and by temperature-programmed desorption (TPD) of NH3 and NOx. The results demonstrated that the acidity of the catalyst was enhanced by the Zn introduction, as exchanged Zn(II) cations loosely bonded to the Al-O-Si framework could create Brønsted acid sites by interacting with OH groups. Moreover, Zn species also provided additional sites for NO adsorption in the form of nitrite (NO2–) and nitrate (NO3–) species, which are the key intermediates for the SCR reaction. In addition, the effect of Zn on the nature of copper was studied by in situ FTIR of CO adsorption and in situ X-ray absorption near edge structure (XANES) spectroscopy. It was found that Zn species hindered the reduction of Cu(II) to Cu(0), resulting in more Cu(I) species in the Zn-promoted catalyst. The Cu-Zn/core-shell Al-MCM-41 exhibited higher catalytic activity than the Cu/core-shell Al-MCM-41 over the whole reaction time, as it possesses the highest amount of Cu(I) sites, which are responsible for the SCR catalytic activity. The Cu-Zn/core-shell Al-MCM-41 catalyst also reached a maximum NO conversion of 100%, with an average NO conversion of 76%. The catalytic performance was further improved by using the Zn promoter in the form of ZnO instead of reduced Zn species. The Cu-ZnO/core-shell Al-MCM-41 catalyst showed better catalytic performance with a longer working reaction time and achieved an average NO conversion of 81%.

Keywords: Al-MCM-41, copper, nitrogen oxide, selective catalytic reduction, zinc

Procedia PDF Downloads 277
13279 Study of Ageing in the Marine Environment of Bonded Composite Structures by Ultrasonic Guided Waves. Comparison of the Case of a Conventional Carbon-epoxy Composite and a Recyclable Resin-Based Composite

Authors: Hamza Hafidi Alaoui, Damien Leduc, Mounsif Ech Cherif El Kettani

Abstract:

This study is dedicated to the evaluation of the ageing of turbine blades in sea conditions, based on ultrasonic Non-Destructive Testing (NDT) methods. This study is being developed within the framework of the European Interreg TIGER project. The Tidal Stream Industry Energiser Project, known as TIGER, is the biggest ever Interreg project driving collaboration and cost reduction through tidal turbine installations in the UK and France. The TIGER project will drive the growth of tidal stream energy to become a greater part of the energy mix, with significant benefits for coastal communities. In the bay of Paimpol-Bréhat (Brittany), different samples of composite material and bonded composite/composite structures have been immersed at the same time near a turbine. The studied samples are either conventional carbon-epoxy composite samples or composite samples based on a recyclable resin (called recyclamine). One of the objectives of the study is to compare the ageing of the two types of structure. A sample of each structure is picked up every 3 to 6 months, analyzed using ultrasonic guided waves and bulk waves, and compared to reference samples. In order to classify the damage level as a function of time spent under the sea, the measurements have been compared to a rheological model based on the Finite Element Method (FEM). Ageing of the composite material, as well as that of the adhesive, is identified. The aim is to improve the quality of the turbine blade structure in terms of longevity and reduced maintenance needs.

Keywords: non-destructive testing, ultrasound, composites, guided waves

Procedia PDF Downloads 204
13278 Evaluation of Short-Term Load Forecasting Techniques Applied for Smart Micro-Grids

Authors: Xiaolei Hu, Enrico Ferrera, Riccardo Tomasi, Claudio Pastrone

Abstract:

Load Forecasting plays a key role in making today's and future's Smart Energy Grids sustainable and reliable. Accurate power consumption prediction allows utilities to organize their resources in advance or to execute Demand Response strategies more effectively, which enables several features such as higher sustainability, better quality of service, and affordable electricity tariffs. While Load Forecasting is comparatively easy yet effective at larger geographic scales, it becomes more demanding in Smart Micro-Grids, wherein the lower available grid flexibility makes accurate prediction more critical for Demand Response applications. This paper analyses the application of short-term load forecasting in a concrete scenario, proposed within the EU-funded GreenCom project, which collects load data from single loads and households belonging to a Smart Micro-Grid. Three short-term load forecasting techniques, i.e. linear regression, artificial neural networks, and radial basis function networks, are considered, compared, and evaluated through absolute forecast errors and training time. The influence of weather conditions on Load Forecasting is also evaluated. A new definition of Gain is introduced in this paper, which serves as an indicator of short-term prediction capability in terms of time-span consistency. Two models, for 24-hour-ahead and 1-hour-ahead forecasting, are built to comprehensively compare the three techniques.
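
As a rough illustration of the kind of model comparison described above (not the GreenCom implementation; the synthetic data, the 24-hour feature window, and the use of RBF-kernel ridge regression as a stand-in for a radial basis function network are all assumptions), a minimal sketch might look like this:

```python
# Illustrative only: compares three generic short-term load forecasters on
# synthetic hourly data and reports their mean absolute forecast error.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                                   # 60 days of hourly samples
load = 2.0 + np.sin(2 * np.pi * hours / 24) + 0.1 * rng.standard_normal(hours.size)

# Features: previous 24 hourly loads -> target: load one hour ahead
X = np.stack([load[i:i + 24] for i in range(load.size - 24)])
y = load[24:]
split = int(0.8 * len(y))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

models = {
    "linear regression": LinearRegression(),
    "neural network (MLP)": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "RBF (kernel ridge stand-in)": KernelRidge(kernel="rbf", alpha=1.0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```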

Keywords: short-term load forecasting, smart micro grid, linear regression, artificial neural networks, radial basis function network, gain

Procedia PDF Downloads 444
13277 Numerical Assessment of Fire Characteristics with Bodies Engulfed in Hydrocarbon Pool Fire

Authors: Siva Kumar Bathina, Sudheer Siddapureddy

Abstract:

Fire accidents become even worse when hazardous equipment such as reactors or radioactive waste packages is engulfed in the fire. In this work, large-eddy numerical fire simulations are performed using a fire dynamics simulator to predict the thermal behavior of such bodies engulfed in hydrocarbon pool fires. A radiatively dominated 0.3 m circular burner with n-heptane as the fuel is considered in this work. The numerical simulation results for the fire without any body inside it are validated against the reported experimental data. The comparison shows good agreement for different flame properties: predicted mass burning rate, flame height, time-averaged centerline temperature, time-averaged centerline velocity, puffing frequency, the irradiance at the surroundings, and the radiative heat feedback to the pool surface. Casks of different sizes made of SS304L are simulated. The results are independent of the cask material, as the adiabatic surface temperature concept is employed in this study. It is observed that the mass burning rate increases with the blockage ratio (3% ≤ B ≤ 32%); however, the rate of this increase is reduced at higher blockage ratios (B > 14%). This is because the radiative heat feedback to the fuel surface comes not only from the flame but also from the cask volume. As B increases, the volume of the cask increases and thereby increases the radiative contribution to the fuel surface. The radiative heat feedback in the case of a cask engulfed in the fire is increased by 2.5% to 31% compared with the fire without a cask.

Keywords: adiabatic surface temperature, fire accidents, fire dynamic simulator, radiative heat feedback

Procedia PDF Downloads 113
13276 Charge Trapping on a Single-wall Carbon Nanotube Thin-film Transistor with Several Electrode Metals for Memory Function Mimicking

Authors: Ameni Mahmoudi, Manel Troudi, Paolo Bondavalli, Nabil Sghaier

Abstract:

In this study, charge storage on thin-film SWCNT transistors was investigated, and C-V hysteresis tests showed that interface charge trapping effects dominate the memory window. Two electrode materials were utilized to demonstrate that selecting the appropriate metal electrode clearly improves the conductivity and, consequently, the memory effect of the SWCNT thin film. Because their work function is similar to that of thin-film carbon nanotubes, Ti contacts produce higher charge confinement and show greater charge storage than Pd contacts. For Pd-contact CNTFETs and CNTFETs with Ti electrodes, a sizable clockwise hysteresis window was seen in the dual-sweep cycle, with threshold voltage shifts of 11.52 V and 9.7 V, respectively. The SWCNT thin-film based transistor is expected to have significant trapping and detrapping of charges because of the large C-V hysteresis. We found that the estimated stored charge density for CNTFETs with Ti contacts is approximately 4.01×10⁻² C·m⁻², which is nearly twice the charge density of the device with Pd contacts. We have shown that the amount of trapped charge can be changed by varying the sweep range or the Vgs sweep rate. We also examined the variation of the flat-band voltage (V_FB) with time in order to determine the carrier retention period in CNTFETs with Ti and Pd electrodes. The outcome shows that the retention time of the trapped charges is about 300 seconds, which is a crucial finding for memory function mimicking.
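
As a rough, hedged consistency check (the abstract does not state the extraction procedure; the standard MOS-style relation between the hysteresis window and the trapped areal charge is assumed here, and the ~10 V window is only an order-of-magnitude value taken from the shifts quoted above):

```latex
% assumed relation: trapped areal charge = gate capacitance per area x threshold shift
\sigma_{\text{trapped}} \approx C_{g}\,\Delta V_{\text{th}}
\;\Rightarrow\;
C_{g} \approx \frac{4.01\times10^{-2}\,\text{C\,m}^{-2}}{\sim 10\,\text{V}} \approx 4\times10^{-3}\,\text{F\,m}^{-2}
```

i.e., the reported stored charge density and hysteresis window are mutually consistent for a gate capacitance of order 4×10⁻³ F·m⁻².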

Keywords: charge storage, thin-film SWCNT based transistors, C-V hysteresis, memory effect, trapping and detrapping of charges, stored charge density, carrier retention time

Procedia PDF Downloads 61
13275 Mechanical Activation of a Waste Material Used as Cement Replacement in Soft Soil Stabilisation

Authors: Hassnen M. Jafer, W. Atherton, F. Ruddock, E. Loffil

Abstract:

Waste materials, sometimes called by-product materials, have increasingly been used as construction materials to reduce the usage of cement in different construction projects. In the field of soil stabilisation, waste materials such as pulverised fuel ash (PFA), biomass fly ash (BFA), and sewage sludge ash (SSA) have been used since the 1960s. In this study, a particular type of waste material (WM) was used in soft soil stabilisation as a cement replacement, and the effect of mechanical activation by grinding on the performance of this WM was also investigated. The WM used in this study is a by-product resulting from incineration processes between 1000 and 1200 °C in a domestic power generation plant using a fluidised bed combustion system. The stabilised soil was an intermediate-plasticity silty clayey soil with medium organic matter content. The experimental work was conducted first to find the optimum content of WM by carrying out Atterberg limits and unconfined compressive strength (UCS) tests on soil samples containing 0, 3, 6, 9, 12, and 15% of WM by the dry weight of soil. The UCS test was carried out on specimens subjected to different curing periods (zero, 7, 14, and 28 days). Moreover, the optimum percentage of WM was subjected to different periods of grinding (10, 20, 30, and 40 minutes) using a mortar and pestle grinder to find the effect of grinding and its optimum duration, assessed by the UCS test. The results indicated that the WM used in this study improved the physical properties of the soft soil, where the index of plasticity (IP) decreased significantly from 21 to 13.10 with 15% of WM. Meanwhile, the results of the UCS test indicated that 12% of WM was the optimum content; this percentage increased the UCS value from 202 kPa to 700 kPa for the 28-day cured samples. Regarding the grinding time, the results revealed that 10 minutes of grinding was the best for mechanical activation of the WM used in this study.

Keywords: soft soil stabilisation, waste materials, grinding, unconfined compressive strength

Procedia PDF Downloads 261
13274 Investigation of Detectability of Orbital Objects/Debris in Geostationary Earth Orbit by Microwave Kinetic Inductance Detectors

Authors: Saeed Vahedikamal, Ian Hepburn

Abstract:

Microwave Kinetic Inductance Detectors (MKIDs) are considered as one of the most promising photon detectors of the future in many Astronomical applications such as exoplanet detections. The MKID advantages stem from their single photon sensitivity (ranging from UV to optical and near infrared), photon energy resolution and high temporal capability (~microseconds). There has been substantial progress in the development of these detectors and MKIDs with Megapixel arrays is now possible. The unique capability of recording an incident photon and its energy (or wavelength) while also registering its time of arrival to within a microsecond enables an array of MKIDs to produce a four-dimensional data block of x, y, z and t comprising x, y spatial, z axis per pixel spectral and t axis per pixel which is temporal. This offers the possibility that the spectrum and brightness variation for any detected piece of space debris as a function of time might offer a unique identifier or fingerprint. Such a fingerprint signal from any object identified in multiple detections by different observers has the potential to determine the orbital features of the object and be used for their tracking. Modelling performed so far shows that with a 20 cm telescope located at an Astronomical observatory (e.g. La Palma, Canary Islands) we could detect sub cm objects at GEO. By considering a Lambertian sphere with a 10 % reflectivity (albedo of the Moon) we anticipate the following for a GEO object: 10 cm object imaged in a 1 second image capture; 1.2 cm object for a 70 second image integration or 0.65 cm object for a 4 minute image integration. We present details of our modelling and the potential instrument for a dedicated GEO surveillance system.
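
One hedged way to read the quoted integration times is a photon-count-limited model (an assumption, but plausible for an essentially noiseless photon-counting detector such as an MKID): if detection requires a fixed number of reflected photons N, and the collected flux scales with the object's cross-section, then

```latex
N \propto d^{2}\,t = \text{const} \;\Rightarrow\;
d(t) \approx 10\,\text{cm}\cdot\sqrt{\frac{1\,\text{s}}{t}},
\qquad d(70\,\text{s}) \approx 1.2\,\text{cm}, \quad d(240\,\text{s}) \approx 0.65\,\text{cm}
```

which reproduces the three detection limits quoted above.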

Keywords: space debris, orbital debris, detection system, observation, microwave kinetic inductance detectors, MKID

Procedia PDF Downloads 75
13273 Exploring the Role of Building Information Modeling for Delivering Successful Construction Projects

Authors: Muhammad Abu Bakar Tariq

Abstract:

The construction industry plays a crucial role in the progress of societies and economies. Furthermore, construction projects have social as well as economic implications; thus, their success or failure has wider impacts. However, the industry is lagging behind in terms of efficiency and productivity. Building Information Modeling (BIM) is recognized as a revolutionary development in the Architecture, Engineering and Construction (AEC) industry. There are numerous interest groups around the world providing definitions of BIM, proponents describing its advantages, and opponents identifying challenges and barriers regarding its adoption. This research aims to determine what BIM actually is, along with its potential role in delivering successful construction projects. The methodology is a critical analysis of secondary data sources, i.e. information in the public domain, including peer-reviewed journal articles, industry and government reports, conference papers, books, case studies, etc. It is found that clash detection and visualization are two major advantages of BIM. Clash detection identifies clashes among structural, architectural, and MEP designs before construction actually commences, which subsequently saves time as well as cost and ensures quality during the execution phase of a project. Visualization is a powerful tool that facilitates rapid decision-making in addition to communication and coordination among stakeholders throughout a project's life cycle. By eliminating inconsistencies that consume time and cost during actual construction and by improving collaboration among stakeholders throughout the project's life cycle, BIM can play a positive role in achieving the efficiency and productivity that consequently deliver successful construction projects.

Keywords: building information modeling, clash detection, construction project success, visualization

Procedia PDF Downloads 239
13272 Regret-Regression for Multi-Armed Bandit Problem

Authors: Deyadeen Ali Alshibani

Abstract:

In the literature, the multi-armed bandit problem is described as a statistical decision model of an agent trying to optimize his decisions while improving his information at the same time. There are several different algorithmic models and applications of this problem. In this paper, we evaluate regret-regression by comparing it with the Q-learning method. A simulation on the determination of an optimal treatment regime is presented in detail.
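
For readers unfamiliar with the regret notion used above, a minimal sketch of cumulative (pseudo-)regret on a two-armed Bernoulli bandit follows; the epsilon-greedy value-update agent is a single-state analogue of Q-learning and is not the regret-regression method evaluated in the paper.

```python
# Illustrative sketch: cumulative pseudo-regret of an epsilon-greedy agent
# on a two-armed Bernoulli bandit.
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.4, 0.6])        # unknown to the agent
best = true_means.max()
q = np.zeros(2)                          # action-value estimates
counts = np.zeros(2)
epsilon, T = 0.1, 5000
regret = 0.0

for t in range(T):
    a = rng.integers(2) if rng.random() < epsilon else int(np.argmax(q))
    reward = float(rng.random() < true_means[a])
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]  # incremental mean update
    regret += best - true_means[a]       # expected loss vs. always playing the best arm

print(f"cumulative pseudo-regret after {T} pulls: {regret:.1f}")
```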

Keywords: optimal, bandit problem, optimization, dynamic programming

Procedia PDF Downloads 435
13271 Potency Interaction between Simvastatin and Herbal Cholesterol-Lowering Agents: Prevention of Unwanted Effects in Combination Hyperlipidemia Therapy

Authors: Agung A. Ginanjar, Lilitasari, Indra Prasetya, Rizal R. Hanif, Yusrina Rismandini, Atina Hussaana, Nurita P. Sari

Abstract:

Hyperlipidemia is an increase of lipids and cholesterol in the blood that causes the formation of atherosclerosis. The current standard pharmacological therapy is statins. Many Indonesian people use medicinal plants, and several are commonly used to treat hyperlipidemia, such as sabrang onion bulbs, areca nuts, and fenugreek seeds. Most people often use a combination of conventional medicine and herbs to achieve the desired therapeutic effect. Such combination therapy might cause pharmacodynamic interactions between the medicines, influencing the pharmacological effect of one of them. The aim of this study is to determine the interaction of simvastatin and a cholesterol-lowering herb, as seen in the pharmacodynamic phase of simvastatin in rats. This research used a post-test only controlled group design. Data normality and homogeneity were tested by the Kolmogorov-Smirnov test; the ANOVA test is used when the data are homogeneous, and the Kruskal-Wallis test is used when they are not. The group results were: normal (63.196 mg/dl), negative control (70.604 mg/dl), positive control (62.512 mg/dl), areca nut (56.564 mg/dl), fenugreek seed (47.538 mg/dl), and sabrang onion (62.312 mg/dl). The results show that the combinations of herbs and simvastatin did not produce a significant difference (P>0.05). The conclusion of this study is that the combination of simvastatin and a cholesterol-lowering herb can cause pharmacodynamic interactions such as synergistic, antagonistic, and additive effects, so that combination therapy is not more effective than single simvastatin therapy. The combination therapy should therefore not be given at the same time; it would be better to separate the two therapies by a period of time.

Keywords: onion bulb sabrang, areca nuts, seed of fenugreek, interaction medicine, hyperlipidemia

Procedia PDF Downloads 508
13270 Numerical Investigation of Phase Change Materials (PCM) Solidification in a Finned Rectangular Heat Exchanger

Authors: Mounir Baccar, Imen Jmal

Abstract:

Because of the rise in energy costs, thermal storage systems designed for the heating and cooling of buildings are becoming increasingly important. Energy storage can not only reduce the time or rate mismatch between energy supply and demand but also plays an important role in energy conservation. One of the most preferable storage techniques is Latent Heat Thermal Energy Storage (LHTES) using Phase Change Materials (PCM), due to its high energy storage density and isothermal storage process. This paper presents a numerical study of the solidification of a PCM (paraffin RT27) in a rectangular thermal storage exchanger for air conditioning systems, taking into account the presence of natural convection. The continuity, momentum, and thermal energy equations are solved by the finite volume method. The main objective of this numerical approach is to study the effect of natural convection on the PCM solidification time and the impact of the number of fins on heat transfer enhancement. It also aims at investigating the temporal evolution of PCM solidification, as well as the longitudinal profiles of the HTF circulating in the duct. The present research considers two cases: the first treats the solidification of PCM in a PCM-air heat exchanger without fins, while the second focuses on the solidification of PCM in a heat exchanger of the same type with the addition of fins (3, 5, and 9 fins). Without fins, stratification of the PCM from colder to hotter during the heat transfer process was noted. This behavior prevents the formation of thermo-convective cells in the PCM area and thus makes the heat transfer almost purely conductive. In the presence of fins, energy extraction from the PCM to the airflow occurs at a faster rate, which contributes to the reduction of the discharging time and the increase of the outlet air (HTF) temperature. However, for a large number of fins (9 fins), the enhancement of the solidification process is not significant because of the confinement of the liquid PCM spaces, which limits the development of thermo-convective flow. Hence, it can be concluded that the effect of natural convection is not very significant for a high number of fins. In the optimum case, using 3 fins, the temperature increase of the HTF exceeds approximately 10°C during the first 30 minutes. When solidification progresses from the surfaces of the PCM container and propagates to the central liquid phase, an insulating layer is created in the vicinity of the container surfaces and the fins, causing a low heat exchange rate between the PCM and the air. As the solid PCM layer gets thicker, a progressive regression of the velocity field is induced in the liquid phase, thus leading to the inhibition of the heat extraction process. After about 2 hours, 68% of the PCM became solid, and heat transfer was almost dominated by the conduction mechanism.

Keywords: heat transfer enhancement, front solidification, PCM, natural convection

Procedia PDF Downloads 171
13269 Real Time Classification of Political Tendency of Twitter Spanish Users based on Sentiment Analysis

Authors: Marc Solé, Francesc Giné, Magda Valls, Nina Bijedic

Abstract:

What people say on social media has become a rich source of information for understanding social behavior. Specifically, the growing use of the Twitter social media platform for political communication has created significant opportunities to know the opinion of large numbers of politically active individuals in real time and to predict the global political tendencies of a specific country. This has led to an increasing body of research on the topic. The majority of these studies have focused on polarized political contexts characterized by only two alternatives. Unlike them, this paper tackles the challenge of forecasting Spanish political trends, characterized by multiple political parties, by analyzing Twitter users' political tendency. To this end, a new strategy, named the Tweets Analysis Strategy (TAS), is proposed. It is based on analyzing users' tweets by discovering their sentiment (positive, negative, or neutral) and classifying them according to the political party they support. From this individual political tendency, the global political prediction for each political party is calculated. In order to do this, two different strategies for the sentiment analysis are proposed: one is based on Positive and Negative word Matching (PNM) and the second on a Neural Networks Strategy (NNS). The complete TAS strategy has been implemented in a Big Data environment. The experimental results presented in this paper reveal that the NNS strategy performs much better than the PNM strategy for analyzing tweet sentiment. In addition, this research analyzes the viability of the TAS strategy to obtain the global trend in a political context made up of multiple parties, with an error lower than 23%.
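
A toy sketch of the word-matching (PNM-style) idea is given below; the word lists, party keywords, and example tweets are placeholders rather than the Spanish-language resources used in the study.

```python
# Illustrative only: score a tweet's sentiment by counting positive vs. negative
# words and attribute it to a party by keyword match.
POSITIVE = {"good", "great", "support", "win"}
NEGATIVE = {"bad", "corrupt", "lose", "against"}
PARTY_KEYWORDS = {"PartyA": {"partya"}, "PartyB": {"partyb"}}   # hypothetical parties

def sentiment(tweet: str) -> int:
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)          # +1 positive, -1 negative, 0 neutral

def tally(tweets):
    totals = {p: 0 for p in PARTY_KEYWORDS}
    for t in tweets:
        s = sentiment(t)
        for party, keys in PARTY_KEYWORDS.items():
            if keys & set(t.lower().split()): # tweet mentions this party
                totals[party] += s            # positive mentions add, negative subtract
    return totals

print(tally(["great win for PartyA", "PartyB is corrupt", "PartyA against reform"]))
```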

Keywords: political tendency, prediction, sentiment analysis, Twitter

Procedia PDF Downloads 216
13268 Heat Transfer and Diffusion Modelling

Authors: R. Whalley

Abstract:

The heat transfer modelling for a diffusion process will be considered. Difficulties in computing the time-distance dynamics of the representation will be addressed. Incomplete and irrational Laplace functions will be identified as the computational issue. Alternative approaches to the response evaluation process will be provided. An illustrative application problem will be presented. Graphical results confirming the theoretical procedures employed will be provided.

Keywords: heat, transfer, diffusion, modelling, computation

Procedia PDF Downloads 536
13267 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations

Authors: Deepak Singh, Rail Kuliev

Abstract:

This abstract highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With the industry's complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting the proposal of several strategies. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce, "reskilling and upskilling" employees, and establishing robust data management training programs play an essential and integral role in this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, the Internet of Things (IoT), artificial intelligence (AI), and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. By embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in the ever-evolving industry.

Keywords: master data management, IoT, AI&ML, cloud Computing, data optimization

Procedia PDF Downloads 52
13266 The Fefe Indices: The Direction of Donald Trump's Tweets' Effect on the Stock Market

Authors: Sergio Andres Rojas, Julian Benavides Franco, Juan Tomas Sayago

Abstract:

An increasing amount of research demonstrates how market mood affects financial markets, but much of it has focused on showing how Trump's tweets impacted US interest rate volatility. Following that lead, this work evaluates the effect that Trump's tweets had during his presidency on local and international stock markets, considering not just volatility but also the direction of the movement. Three indexes for Trump's tweets were created, relating his activity to movements in the S&P 500 using natural language analysis and machine learning algorithms. The indexes consider Trump's tweet activity and the positive or negative market sentiment it might inspire. The first explores the relationship between tweets and negative movements in the S&P 500; the second explores positive movements, while the third explores the difference between up and down movements. A pseudo-investment strategy using the indexes produced statistically significant above-average abnormal returns. The findings also showed that the pseudo strategy generated a higher return in the local market when applied to intraday data; however, only negative market sentiment caused this effect on daily data. These results suggest that the market reacted primarily to negative ideas reflected in the negative index. In the international market, it is not possible to identify a pervasive effect. A rolling-window regression model was also performed. The result shows that the impact on the local and international markets is heterogeneous, time-varying, and differentiated by market sentiment. However, the negative sentiment was more prone to have a significant correlation most of the time.
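
A hedged sketch of the rolling-window regression step follows, with synthetic data and placeholder column names rather than the actual Fefe indices or the study's data set.

```python
# Illustrative only: estimate a time-varying slope between a daily
# tweet-sentiment index and daily returns with a rolling OLS window.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({"tweet_index": rng.standard_normal(n)})
df["ret"] = 0.3 * df["tweet_index"] + rng.standard_normal(n)   # synthetic link

window = 60
betas = []
for start in range(n - window):
    w = df.iloc[start:start + window]
    x = np.vstack([np.ones(window), w["tweet_index"].to_numpy()]).T
    beta = np.linalg.lstsq(x, w["ret"].to_numpy(), rcond=None)[0][1]
    betas.append(beta)                                          # slope in this window

print("mean rolling slope:", np.mean(betas))
```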

Keywords: market sentiment, Twitter market sentiment, machine learning, natural language analysis

Procedia PDF Downloads 49
13265 Eco-Parcel As a Semi-Qualitative Approach to Support Environmental Impacts Assessments in Nature-Based Tourism Destinations

Authors: Halima Kilungu, Pantaleo, K. T. Munishi

Abstract:

Climate and land-cover change affect nature-based tourism (NBT) because its attractions are closely connected to natural environments and climate. Thus, knowledge of how each attraction reacts to changing environments, and simple yet science-based approaches to respond to these changes from a tourism perspective in space and time, are timely. Nevertheless, no specific approaches exist to address this knowledge gap. The eco-parcel approach is devised to address the gap and is operationalized in the Serengeti and Kilimanjaro National Parks, the most climate-sensitive NBT destinations in Africa. The approach is partly descriptive and has three simple steps: (1) to identify and define tourist attractions (i.e. biotic and abiotic attractions), which creates an important database of the most poorly kept information on attraction types in NBT destinations; (2) to create a spatial and temporal link for each attraction and describe its characteristic environments (e.g. vegetation, soil, water, and rock outcrops), which is the most limited attraction information yet important as a proxy of changes in attractions; and (3) to assess the importance of individual attractions for tourism based on tourists' preferences, which enables an accurate assessment of the value of individual attractions for tourism. The importance of the eco-parcel approach is that it describes how each attraction emerges from and is connected to specific environments, which define its attractiveness in space and time. This information allows accurate assessment of the likely losses or gains of individual attractions when the climate or environment changes in specific destinations and equips tourism stakeholders with informed responses.

Keywords: climate change, environmental change, nature-based tourism, Serengeti National Park, Kilimanjaro National Park

Procedia PDF Downloads 105
13264 A Case Study Using Sounds Write and The Writing Revolution to Support Students with Literacy Difficulties

Authors: Emilie Zimet

Abstract:

During our department meetings for teachers of children with learning disabilities and difficulties, we often discuss the best practices for supporting students who come to school with literacy difficulties. After completing the Sounds Write and Writing Revolution courses, it seems possible to link the approaches while still maintaining fidelity to a program and providing individualised instruction to support students with such difficulties and disabilities. In this case study, the researcher has been focusing on how best to use the knowledge acquired to provide quality intervention that targets the varied areas of challenge in which students require support. Students present to school with a variety of co-occurring reading and writing deficits, and with complementary approaches, such as The Writing Revolution and Sounds Write, it is possible to support students to improve their fundamental skills in these key areas. Over the next twelve weeks, the researcher will collect data on current students with whom this approach will be trialled and then compare growth with students from last year who received support using Sounds Write only. Maintaining fidelity may be a potential challenge, as each approach has been tested in a specific format for best results. The aim of this study is to determine whether the approaches can be combined, so the implementation will need to incorporate elements of both reading (from Sounds Write) and writing (from The Writing Revolution). A further challenge is the length of each session (25 minutes), so the researcher will need to be creative in the use of time to ensure both writing and reading are targeted while the programs are implemented as intended. The implementation will be documented using student work samples and planning documents. This work will include a display of findings using student learning samples to demonstrate the importance of co-targeting the reading and writing challenges students come to school with.

Keywords: literacy difficulties, intervention, individual differences, methods of provision

Procedia PDF Downloads 31
13263 For Post-traumatic Stress Disorder Counselors in China, the United States, and around the Globe, Cultural Beliefs Offer Challenges and Opportunities

Authors: Anne Giles

Abstract:

Trauma is generally defined as an experience, or multiple experiences, that overwhelm a person's ability to cope. Many people recover from the neurobiological, physical, and emotional effects of trauma on their own. For some people, however, troubling symptoms develop over time that can result in distress and disability. This cluster of symptoms is classified as Post-traumatic Stress Disorder (PTSD). People who meet the criteria for PTSD and other trauma-related disorder diagnoses often hold a set of understandable but unfounded beliefs about traumatic events that cause undue suffering. Becoming aware of these unhelpful beliefs, termed "cognitive distortions", and challenging them is the realm of Cognitive Behavior Therapy (CBT). A form of CBT found by researchers to be especially effective for PTSD is Cognitive Processing Therapy (CPT). Through the compassionate use of CPT, people identify, examine, challenge, and relinquish unhelpful beliefs, thereby reducing symptoms and suffering. Widely held cultural beliefs can interfere with the progress of recovery from trauma-related disorders. Although highly revered, largely unquestioned, and often stabilizing, cultural beliefs can be founded in simplistic, dichotomous thinking, i.e., that things are all right or all wrong, all good or all bad. The reality, however, is nuanced and complex. After studying examples of cultural beliefs from China and the United States and how these might interfere with trauma recovery, trauma counselors can help clients derive criteria for preserving helpful beliefs; discover, examine, and jettison unhelpful beliefs; reduce trauma symptoms; and live their lives more freely and fully.

Keywords: cognitive processing therapy (CPT), cultural beliefs, post-traumatic stress disorder (PTSD), trauma recovery

Procedia PDF Downloads 223
13262 Sol-Gel Derived Yttria-Stabilized Zirconia Nanoparticles for Dental Applications: Synthesis and Characterization

Authors: Anastasia Beketova, Emmanouil-George C. Tzanakakis, Ioannis G. Tzoutzas, Eleana Kontonasaki

Abstract:

In restorative dentistry, yttria-stabilized zirconia (YSZ) nanoparticles can be applied as fillers to improve the mechanical properties of various resin-based materials. Using sol-gel based synthesis as a simple and cost-effective method, nano-sized YSZ particles of high purity can be produced. The aim of this study was to synthesize YSZ nanoparticles by the Pechini sol-gel method at different temperatures and to investigate their composition, structure, and morphology. YSZ nanopowders were synthesized by the sol-gel method using zirconium oxychloride octahydrate (ZrOCl₂.8H₂O) and yttrium nitrate hexahydrate (Y(NO₃)₃.6H₂O) as precursors, with the addition of acid chelating agents to control the hydrolysis and gelation reactions. The obtained powders underwent TG-DTA analysis and were sintered at three different temperatures, 800, 1000, and 1200°C, for 2 hours. Their composition and morphology were investigated by Fourier Transform Infrared Spectroscopy (FTIR), X-Ray Diffraction (XRD), Scanning Electron Microscopy with an associated Energy Dispersive X-ray analyzer (SEM-EDX), Transmission Electron Microscopy (TEM), and Dynamic Light Scattering (DLS). FTIR and XRD analysis showed the presence of a pure tetragonal phase in the composition of the nanopowders. By increasing the calcination temperature, the crystallite size of the materials increased, reaching 47.2 nm for the YSZ1200 specimens. SEM analysis at high magnifications and DLS analysis showed submicron-sized particles with good dispersion and low agglomeration, which increased in size as the sintering temperature was elevated. From the TEM images of the YSZ1000 specimen, it can be seen that the zirconia nanoparticles are uniform in size and shape and attain an average particle size of about 50 nm. The electron diffraction patterns clearly revealed ring patterns of a polycrystalline tetragonal zirconia phase. Pure YSZ nanopowders have been successfully synthesized by the sol-gel method at different temperatures. Their size is small and uniform, allowing their incorporation into dental luting resin cements to improve their mechanical properties and possibly enhance the bond strength of demanding dental ceramics such as zirconia to the tooth structure. This research is co-financed by Greece and the European Union (European Social Fund, ESF) through the Operational Programme 'Human Resources Development, Education and Lifelong Learning 2014-2020' in the context of the project 'Development of zirconia adhesion cements with stabilized zirconia nanoparticles: physicochemical properties and bond strength under aging conditions' (MIS 5047876).
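
Crystallite sizes of this magnitude are typically estimated from XRD peak broadening; assuming the standard Scherrer relation was used (the abstract does not state the exact procedure), the relation reads:

```latex
D = \frac{K\,\lambda}{\beta\cos\theta}
```

where D is the crystallite size, K ≈ 0.9 the shape factor, λ the X-ray wavelength (1.5406 Å for Cu Kα), β the peak full width at half maximum in radians, and θ the Bragg angle.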

Keywords: dental cements, nanoparticles, sol-gel, yttria-stabilized zirconia, YSZ

Procedia PDF Downloads 123
13261 A Comparison of Implant Stability between Implant Placed without Bone Graft versus with Bone Graft Using Guided Bone Regeneration (GBR) Technique: A Resonance Frequency Analysis

Authors: R. Janyaphadungpong, A. Pimkhaokham

Abstract:

This prospective clinical study determined the insertion torque (IT) value and monitored the changes in implant stability quotient (ISQ) values during a 12-week healing period for implants placed without bone graft (control group) and with bone graft using the guided bone regeneration (GBR) technique (study group). The relationship between the IT and ISQ values of the implants was also assessed. The control and study groups each consisted of 6 patients with 8 implants per group. The ASTRA TECH Implant System™ EV, 4.2 mm in diameter, was placed in the posterior mandibular region. In the control group, implants were placed in bone without bone graft, whereas in the study group implants were placed simultaneously with the GBR technique at favorable bone defects. The IT (Ncm) of each implant was recorded when fully inserted. ISQ values were obtained from the Osstell® ISQ at the time of implant placement and at 2, 4, 8, and 12 weeks. No difference in IT was found between groups (P = 0.320). The ISQ values in the control group were significantly higher than in the study group at the time of implant placement and at 4 weeks. There was no significant association between IT and ISQ values either at baseline or after the 12 weeks. During the 12 weeks of healing, the control and study groups displayed different trends: mean ISQ values for the control group decreased over the first 2 weeks and then started to increase, with the increases becoming statistically significant at 8 weeks and later, whereas mean ISQ values in the study group decreased over the first 4 weeks and then started to increase, with statistical significance after 12 weeks. At 12 weeks, all implants achieved osseointegration with mean ISQ values over the threshold value (ISQ > 70). These results indicate that implants placed simultaneously with the guided bone regeneration technique for treating favorable bone defects were as predictable as implants placed without bone graft. However, loading of implants placed with the GBR technique for correcting favorable bone defects should be performed after 12 weeks of healing to ensure implant stability and osseointegration.

Keywords: dental implant, favorable bone defect, guided bone regeneration technique, implant stability

Procedia PDF Downloads 283
13260 Structural Health Monitoring of the 9-Story Torre Central Building Using Recorded Data and Wave Method

Authors: Tzong-Ying Hao, Mohammad T. Rahmani

Abstract:

The Torre Central building is a 9-story shear wall structure located in Santiago, Chile, which has been instrumented since 2009. Events of different intensity (ambient vibrations, weak and strong earthquake motions) have been recorded, and thus the building can serve as a full-scale benchmark to evaluate the structural health monitoring method developed. The first part of this article presents an analysis of inter-story drifts and of changes in the first system frequencies (estimated from the relative displacement response of the 8th floor with respect to the basement from recorded data) as baseline indicators of the occurrence of damage. During the 2010 Chile earthquake, the system frequencies were observed to decrease by approximately 24% in the EW and 27% in the NS motions. Near the end of shaking, an increase of about 17% in the EW motion was detected. The structural health monitoring (SHM) method based on changes in wave travel time (wave method) within a layered shear beam model of the structure is presented in the second part of this article. If structural damage occurs, the velocity of waves propagating through the structure changes. The wave method measures the velocities of shear wave propagation from the impulse responses generated from data recorded at various locations inside the building. Our analysis and results show that the detected changes in wave velocities are consistent with the observed damage. On this basis, the wave method is shown to be suitable for actual implementation in structural health monitoring systems.
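
The travel-time idea can be illustrated with a minimal sketch: estimate the delay between basement and roof records from the peak of their cross-correlation. The synthetic signals, sampling rate, and assumed building height below are placeholders, not the Torre Central data or the authors' impulse-response deconvolution procedure.

```python
# Illustrative only: wave travel time from the lag of the basement-roof cross-correlation.
import numpy as np

fs = 200.0                                   # sampling rate [Hz] (assumed)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(3)
basement = rng.standard_normal(t.size)       # stand-in for recorded base motion
true_delay = 0.12                            # seconds (synthetic travel time to roof)
roof = np.roll(basement, int(true_delay * fs)) + 0.1 * rng.standard_normal(t.size)

corr = np.correlate(roof, basement, mode="full")
lag = np.argmax(corr) - (t.size - 1)         # samples of delay
travel_time = lag / fs
velocity = 30.0 / travel_time                # assumed 30 m building height
print(f"estimated travel time {travel_time:.3f} s, shear-wave velocity {velocity:.0f} m/s")
```

A drop in this estimated velocity between a pre-event and a post-event record is the kind of change the wave method interprets as possible damage.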

Keywords: Chile earthquake, damage detection, earthquake response, impulse response, layered shear beam, structural health monitoring, Torre Central building, wave method, wave travel time

Procedia PDF Downloads 350
13259 Development of an Integrated Criminogenic Intervention Programme for High Risk Offenders

Authors: Yunfan Jiang

Abstract:

In response to an identified gap in available treatment programmes for high-risk offenders with multiple criminogenic needs and guided by emerging literature in the field of correctional rehabilitation, Singapore Prison Service (SPS) developed the Integrated Criminogenic Programme (ICP) in 2012. This evidence-informed psychological programme was designed to address all seven dynamic criminogenic needs (from the Central 8) of high-risk offenders by applying concepts from rehabilitation and psychological theories such as Risk-Need-Responsivity, Good Lives Model, narrative identity, and motivational interviewing. This programme also encompasses a 6-month community maintenance component for the purpose of providing structured step-down support in the aftercare setting. These sessions provide participants the opportunity for knowledge reinforcement and application of skills attained in-care. A quantitative evaluation of the ICP showed that the intervention group had statistically significant improvements across time in most self-report measures of criminal attitudes, substance use attitudes, and psychosocial functioning. This was congruent with qualitative data from participants saying that the ICP had the most impact on their criminal thinking patterns and management of behaviours in high-risk situations. Results from the comparison group showed no difference in their criminal attitudes, even though they reported statistically significant improvements across time in their substance use attitudes and some self-report measures of psychosocial functioning. The programme’s efficacy was also apparent in the lower rates of recidivism and relapse within 12 months for the intervention group. The management of staff issues arising from the development and implementation of an innovative high-intensity psychological programme such as the ICP will also be discussed.

Keywords: evaluation, forensic psychology, intervention programme, offender rehabilitation

Procedia PDF Downloads 564
13258 Investigation of the Effects of the Whey Addition on the Biogas Production of a Reactor Using Cattle Manure for Biogas Production

Authors: Behnam Mahdiyan Nasl

Abstract:

In a lab-scale study, the effects of feeding whey into a biogas system, and how to solve the problems that arose, were analysed. A semi-continuous glass reactor with a total capacity of 13 liters and a working capacity of 10 liters was placed in an incubator, and the temperature was kept at approximately 38 °C. At first, the reactor was operated by adding 5 liters of animal manure and water at a ratio of 1/1. Over time, the gas production rate decreased sharply, such that on the fourth day there was no gas production and the system stopped working. At this point, the pH was adjusted and, by adding NaOH, it was increased from 5.4 to 7. On the 48th day, the first gas measurement was made and 12.07% CH₄ was detected. After buffering, the number of bacteria present in the cattle manure and contributing to gas production was thought to be inadequate, and 2 liters of anaerobic sludge (up to 20% of the working volume) was added to the reactor. Seven days after adding the anaerobic sludge, a second gas measurement was carried out, and biogas containing 43% CH₄ was obtained. From the 61st day of the study, cheese whey was added together with the animal manure at 40 mL per day. However, over time, with the growth of the microorganisms and the lack of trace elements in the whey (especially Ni and Co), the percentage of methane in the biogas decreased. In fact, 2 weeks after adding PAS, the gas measurement was done and 36.97% CH₄ was detected. 0.06 mL of Ni-Co solution (to reach a concentration of 0.05 mg/L in the reactor mixture) was added to the system daily for 15 days. To find out the effect of the solution on the archaea, a measurement taken 7 days after stopping the addition of the solution showed that the methane content had increased by 9.03%, reaching 46%. Lastly, the effects of adding molasses to the reactor were investigated; its effect on the bacteria was analysed by adding 4 grams to the system. In the 10 days after adding the molasses, according to the last measurement, the methane content reached up to 49%.

Keywords: biogas, cheese whey, cattle manure, energy

Procedia PDF Downloads 314
13257 Aerobic Training Combined with Nutritional Guidance as an Effective Strategy for Improving Aerobic Fitness and Reducing BMI in Inactive Adults

Authors: Leif Inge Tjelta, Gerd Lise Nordbotten, Cathrine Nyhus Hagum, Merete Hagen Helland

Abstract:

Overweight and obesity can lead to numerous health problems, and inactive people are more often overweight and obese compared to physically active people. Even a moderate weight loss can improve cardiovascular and endocrine disease risk factors. The aim of the study was to examine to what extent overweight and obese adults starting two weekly intensive running sessions increased their aerobic capacity, reduced their BMI and waist circumference, and changed their body composition after 33 weeks of training. An additional aim was to see if there were differences between participants who, in addition to training, also received lifestyle modification education, including practical cooking (nutritional guidance and training group, NTG = 32), compared to those who were not given any nutritional guidance (training group, TG = 40). 72 participants (49 women), with a mean age of 46.1 (± 10.4) years, were included. Inclusion criteria: previously untrained and inactive adults in all age groups, BMI ≥ 25, and a desire to become fitter and reduce their BMI. The two weekly supervised training sessions consisted of a 10-minute warm-up followed by 20 to 21 minutes of effective interval running in which the participants' heart rates were between 82 and 92% of maximum heart rate. The sessions were completed with ten minutes of whole-body strength training. Measures of BMI, waist circumference (WC), and 3000 m running time were taken at the start of the project (T1), after 15 weeks (T2), and at the end of the project (T3). Measurements of fat percentage, muscle mass, and visceral fat were performed at T1 and T3. Twelve participants (9 women) from both groups, who all scored around average on the 3000 m pre-test, were chosen to do a VO₂max test at T1 and T3. The NTG were given ten theoretical sessions (80 minutes each) and eight practical cooking sessions (140 minutes each). There was a significant reduction in both groups in WC and BMI from T1 to T2; no further reduction was found from T2 to T3. Although not significant, the NTG reduced their WC more than the TG. For both groups, the percentage reduction in WC was similar to the reduction in BMI. There was a decrease in fat percentage in both groups from pre-test to post-test, whereas for muscle mass a small but insignificant increase was observed in both groups. There was a decrease in 3000 m running time for both groups from T1 to T2 as well as from T2 to T3; the difference between T2 and T3 was not statistically significant. The 12 participants who took the VO₂max test had an increase of 2.86 (± 3.84) ml·kg⁻¹·min⁻¹ in VO₂max and a 3:02 min (± 2:01 min) reduction in running time over 3000 m from T1 to T3. There was a strong, negative correlation between the two variables. The study shows that two intensive running sessions per week for 33 weeks can increase aerobic fitness and reduce BMI, WC, and fat percentage in inactive adults. Nutritional guidance in addition to training gives an additional effect.

Keywords: interval training, nutritional guidance, fitness, BMI

Procedia PDF Downloads 129
13256 A Peg Board with Photo-Reflectors to Detect Peg Insertion and Pull-Out Moments

Authors: Hiroshi Kinoshita, Yasuto Nakanishi, Ryuhei Okuno, Toshio Higashi

Abstract:

Various kinds of pegboards have been developed and widely used in rehabilitation research and clinics for the evaluation and training of patients' hand function. A common measure in these pegboards is the total performance time assessed with a tester's stopwatch. The introduction of electrical and automatic measurement technology to the apparatus, on the other hand, has been delayed. The present work introduces the development of a pegboard with electric sensors to detect the moments of each peg's insertion and removal. The work also gives fundamental data obtained from a group of healthy young individuals who performed peg transfer tasks using the pegboard developed. Through trial and error in pilot tests, two 10-hole pegboard boxes, each hole fitted with a small photo-reflector and a DC amplifier at its bottom, were designed and built by the present authors. The amplified analogue electric signals from the 20 reflectors were automatically digitized at 500 Hz per channel and stored on a PC. The boxes were set on a test table at different distances (25, 50, 75, and 125 mm) in parallel to examine the effect of hole-to-hole distance. Fifty healthy young volunteers (25 of each gender) performed 80 successive fast peg transfers at each distance using their dominant and non-dominant hands. The data gathered showed a clear-cut light interruption/continuation moment caused by the pegs, allowing the pull-out and insertion time of each peg to be determined accurately (with no tester error involved) and precisely (to the order of milliseconds). This further permitted computation of the individual peg movement duration (PMD: from peg lift-off to insertion) separately from the hand reaching duration (HRD: from peg insertion to lift-off). An accidental drop of a peg led to an exceptionally long (> mean + 3 SD) PMD, which was readily detected from an examination of the data distribution. The PMD data were commonly right-skewed, suggesting that the median can be a better estimate of individual PMD than the mean. Repeated-measures ANOVA using the median values revealed significant hole-to-hole distance and hand dominance effects, suggesting that these need to be fixed for an accurate evaluation of PMD. The gender effect was non-significant. Performance consistency was also evaluated using quartile variation coefficient values, which revealed no gender, hole-to-hole distance, or hand dominance effects. The measurement reliability was further examined using the interclass correlation obtained from 14 subjects who performed the 25 and 125 mm hole distance tasks at two test sessions separated by 7-10 days. Interclass correlation values between the two tests showed fair reliability for PMD (0.65-0.75) and for HRD (0.77-0.94). We conclude that the sensor pegboard developed in the present study can provide accurate (excluding tester errors) and precise (at a millisecond rate) time information on peg movement separated from that of hand movement. It can also easily detect and automatically exclude erroneous execution data from a subject's standard data. This should lead to a better evaluation of hand dexterity function compared to the widely used conventional pegboards.
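
A minimal sketch (not the authors' acquisition software) of how pull-out and insertion moments can be read off a thresholded photo-reflector trace and turned into a peg movement duration follows; the threshold level and the assumption that the output is high while a peg occupies the hole are illustrative.

```python
# Illustrative only: detect pull-out / insertion moments per hole and compute PMD.
import numpy as np

FS = 500.0          # Hz, per-channel sampling rate quoted in the abstract
THRESHOLD = 2.5     # V, assumed level separating "peg present" from "hole empty"

def transitions(signal, threshold=THRESHOLD):
    """Return (pull_out_times, insertion_times) in seconds for one hole."""
    present = signal > threshold                  # assumed high while peg is in the hole
    change = np.diff(present.astype(int))
    pull_out = np.where(change == -1)[0] / FS     # present -> empty
    insertion = np.where(change == 1)[0] / FS     # empty -> present
    return pull_out, insertion

# Synthetic example: peg leaves hole A at 1.0 s and enters hole B at 1.4 s
t = np.arange(0, 3, 1 / FS)
hole_a = np.where(t < 1.0, 4.0, 0.5)
hole_b = np.where(t > 1.4, 4.0, 0.5)
out_a, _ = transitions(hole_a)
_, in_b = transitions(hole_b)
pmd = in_b[0] - out_a[0]                          # peg movement duration
print(f"peg movement duration: {pmd * 1000:.0f} ms")
```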

Keywords: hand, dexterity test, peg movement time, performance consistency

Procedia PDF Downloads 123
13255 Going Global by Going Local: How Website Localization and Translation Can Break the Internet Language Barrier and Contribute to Globalization

Authors: Hela Fathallah

Abstract:

With 6,500 languages spoken all over the world but 80 percent of online content available in only 10 languages (English, Chinese, Spanish, Japanese, Arabic, Portuguese, German, French, Russian, and Korean), language represents a barrier to the universal access to knowledge, information, and services that the internet aims to provide. Translation and its related fields of localization, interpreting, globalization, and internationalization remove that barrier for billions of people worldwide, unlocking new markets for technology companies, mobile device makers, service providers, and language vendors as well. This paper gathers different surveys conducted in different regions of the world that demonstrate a growing demand for the consumption of web content with distinctive values and in languages other than the aforementioned ones. It also adds new insights into the contribution of translation to language preservation. The idea that English is the language of the internet and that, in a globalized world, everyone should learn English to cope with new technologies is no longer true. This idea has reached its limits. It collides with cultural diversity and differences around the world and generates an accelerated rate of language extinction. Studies show that the internet exacerbates this rate, and web giants such as Facebook or Google are today facing the impact of such a misconception of globalization. For internet and dot-com companies, localization is the solution; they are spending a significant amount of time understanding what people want and figuring out how to provide it. They are committed to making their content accessible, if not in all the languages spoken today, at least in most of them, and to adapting it to most cultures. Technology has broken down the barriers of time and space, and it will break down the language barrier as well through a process of translation and localization and through a new definition of globalization that takes these two processes into consideration.

Keywords: globalization, internet, localization, translation

Procedia PDF Downloads 348
13254 Battle on Historical Water: An Analysis of the Roots of the Conflict between India and Sri Lanka and the Victimization of Arrested Indian Fishermen

Authors: Xavier Louis, Madhava Soma Sundaram

Abstract:

The Palk Bay, a narrow strip of water, separates the state of Tamil Nadu in India from northern Sri Lanka. The bay, which is 137 km in length and varies from 64 to 137 kilometers in width, is home to more than 580 fish species and substantial shrimp resources, and is divided by the International Maritime Boundary Line (IMBL). The bay is bordered by five Tamil Nadu districts of India and three Sri Lankan districts, and it assumes importance as one of the areas presenting permanent and serious challenges to both India and Sri Lanka with respect to fishing rights. Fishermen from both sides had enjoyed fishing these waters in harmony for centuries. Katchatheevu is a tiny island located in the bay that was a part of India; after the Katchatheevu agreement of 1974, it became a part of Sri Lanka, and a fishing conflict arose between the two countries' fishermen. Fuelling the dispute over Katchatheevu is the overfishing by Indian mechanized trawlers in Palk Bay and the damaging environmental and economic effects of trawling. Since 2008, more than 300 Indian fishermen have been killed by Sri Lankan Navy firing, nearly 100 fishermen have gone missing, and more than 3000 fishermen have been arrested and later released after trial for trespassing into Sri Lankan waters. Currently, more than 120 fishing boats and 29 fishermen are in Sri Lankan custody. This paper attempts to find out the causes of the fishing conflict and who has the fishing rights in these waters, how international treaties are complied with at the time of arrest and trial, how the arrested fishermen are treated, and how their families suffer without a breadwinner. A semi-structured interview schedule suitable for measuring quantitative and qualitative aspects of the above-mentioned theme was prepared by the researcher. One hundred arrested fishermen were interviewed, and their prison experiences in Sri Lanka were recorded. The research found that the majority of the fishermen believe that they have the right to fish in the historical waters and that Sri Lankan naval personnel brutally attacked the Indian fishermen at the time of arrest. The majority of the fishermen accepted that they had limited fishing grounds; as a result, they entered Sri Lankan waters for their livelihood. The majority of the fishermen expected that they would also get their belongings back at the time of release, primarily the boats. Most of the arrested fishermen's families face financial crises in the absence of their breadwinners, and this situation has created conditions for child labor among the affected families; some fishers migrate to different places for different occupations. The majority of the fishers suffer trauma from their victimization and face uncertainty about the future of their occupation. The causes and nature of the fishing conflict, and the financial and psychological victimization of Indian fishermen in relation to the conflict, are discussed further.

Keywords: Palk Bay, historical water, fishing conflict, arrested fishermen, victimization

Procedia PDF Downloads 65
13253 Groundhog Day as a Model for the Repeating Spectator and the Film Academic: Re-Watching the Same Films Again Can Create Different Experiences and Ideas

Authors: Leiya Ho Yin Lee

Abstract:

Groundhog Day (Harold Ramis, 1993) may seem a fairly unremarkable Hollywood comedy of the 1990s, yet it is argued that the film, with its protagonist Phil (Bill Murray), inadvertently but perfectly demonstrates an important aspect of filmmaking, film spectatorship and film research: repetition. Very rarely does a narrative film use one, and only one, take in its shooting. The multiple 'repeats' of Phil's various endeavours as he is trapped in a perpetual loop of the same day, from stealing money and tricking a woman into a casual relationship, to his multiple suicides, to eventually helping people in need, make the process of doing multiple 'takes' in filmmaking explicit. Perhaps more significantly, Phil represents a perfect model for the spectator or cinephile who has seen their favourite film so many times that they can remember every single detail. Crucially, their favourite film never changes, as it is a recording, but the cinephile's experience of that very same film is most likely different each time they watch it again, just as Phil's character and personality are completely transformed, from selfish and egotistic, to depressed and nihilistic, and ultimately to sympathetic and caring, even though he is living the exact same day. Furthermore, the author did not arrive at this juxtaposition of film spectatorship and Groundhog Day on first viewing; it took a few casual re-viewings to notice the film's self-reflexivity, and further re-viewings during the research revealed even more details previously unnoticed. In this way, Groundhog Day not only stands as a model for filmmaking and film spectatorship but also illustrates the act of academic research, especially in Film Studies, where repeatedly viewing the same films is a prerequisite for discovering new ideas and concepts in old material. This also recalls Deleuze's thesis on difference and repetition: repetition creates difference, and it is difference that creates thought.

Keywords: narrative comprehension, repeated viewing, repetition, spectatorship

Procedia PDF Downloads 306
13252 Analytical Study of the Structural Response to Near-Field Earthquakes

Authors: Isidro Perez, Maryam Nazari

Abstract:

Numerous earthquakes across the world have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe, and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures in order to reduce damage and costs and, ultimately, to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. Ground motions recorded close to the causative fault are categorized as near-field; in contrast, far-field ground motions are recorded farther from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response than a far-field earthquake ground motion. These larger responses may cause serious structural damage and, consequently, a high risk to public safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in current building design specifications; for example, in ASCE 7-10 the design response spectrum is mostly based on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect that near-field earthquakes have on the response of structures. To examine this topic, a structure was designed following the current seismic building design specifications (e.g., ASCE 7-10 and ACI 318-14) and analytically modeled using the SAP2000 software. Next, following the FEMA P695 report, several near-field and far-field earthquake records were selected, and the near-field records were scaled to represent design-level ground motions. The prototype structural model created in SAP2000 was then subjected to the scaled ground motions. A linear time history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% and 1% increase in story drift and absolute acceleration, respectively, when subjected to the near-field earthquake ground motions. The pushover analysis was run to identify and properly define hinge formation in the structure for the nonlinear time history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unique among earthquake ground motions; therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to near-field ground motions.
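To illustrate the linear time history analysis step described above, the sketch below integrates a single-degree-of-freedom oscillator through a synthetic pulse-like ground motion using the Newmark average-acceleration method. This is only a minimal stand-in for the authors' SAP2000 model: the oscillator period, damping ratio, and the one-cycle sine velocity pulse (standing in for a scaled near-field record) are illustrative assumptions, not values from the study.

```python
import numpy as np

def newmark_linear_sdof(ag, dt, T=1.0, zeta=0.05, gamma=0.5, beta=0.25):
    """Linear time-history response of a unit-mass SDOF oscillator to a
    ground acceleration history ag (m/s^2), using the Newmark
    average-acceleration method. Returns relative displacement (m)."""
    m = 1.0
    wn = 2.0 * np.pi / T            # natural circular frequency
    k = m * wn**2                   # stiffness
    c = 2.0 * zeta * m * wn         # viscous damping
    p = -m * ag                     # effective earthquake force

    u = np.zeros_like(ag)           # relative displacement
    v = np.zeros_like(ag)           # relative velocity
    a = np.zeros_like(ag)           # relative acceleration
    a[0] = (p[0] - c * v[0] - k * u[0]) / m

    k_hat = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    A = m / (beta * dt) + gamma / beta * c
    B = m / (2.0 * beta) + dt * (gamma / (2.0 * beta) - 1.0) * c

    for i in range(len(ag) - 1):
        dp_hat = (p[i + 1] - p[i]) + A * v[i] + B * a[i]
        du = dp_hat / k_hat
        dv = (gamma / (beta * dt) * du - gamma / beta * v[i]
              + dt * (1.0 - gamma / (2.0 * beta)) * a[i])
        da = du / (beta * dt**2) - v[i] / (beta * dt) - a[i] / (2.0 * beta)
        u[i + 1] = u[i] + du
        v[i + 1] = v[i] + dv
        a[i + 1] = a[i] + da
    return u

# Illustrative near-field-like record: one full sine cycle of ground
# velocity (PGV ~ 0.8 m/s, pulse period 2 s), zero afterwards.
dt = 0.005
t = np.arange(0.0, 20.0, dt)
Tp, pgv = 2.0, 0.8
ag = np.zeros_like(t)
mask = t < Tp
# acceleration is the time derivative of v(t) = pgv*sin(2*pi*t/Tp)
ag[mask] = pgv * (2.0 * np.pi / Tp) * np.cos(2.0 * np.pi * t[mask] / Tp)

u = newmark_linear_sdof(ag, dt, T=1.0, zeta=0.05)
print(f"Peak relative displacement: {np.max(np.abs(u)):.3f} m")
```

Repeating the run with a broadband, non-pulse record scaled to the same intensity and comparing the peak displacements gives a rough, qualitative feel for the near-field amplification the abstract reports; the full study relies on SAP2000 models and FEMA P695 record sets rather than this single-oscillator sketch.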

Keywords: near-field, pulse, pushover, time-history

Procedia PDF Downloads 124
13251 A Descriptive Study on Micro Living and Its Importance over Large Houses by Understanding Various Scenarios and Case Studies

Authors: Belal Neazi

Abstract:

'Larger Houses Consume More Resources’ – both in construction and during operation. The most important aspect of smaller homes is that it uses less electricity and fuel for construction and maintenance. Here, an urban interpretation of the contemporary minimal existence movement is explained. In an attempt to restrict urban decay and to encourage inner-city renewal, the Tiny House principles are interpreted as alternative ways of dwelling in urban neighbourhoods. These tiny houses are usually pretty different from each other in interior planning, but almost similar in size. The disadvantage of large homes came up when people were asked to vacate as they were not able to pay the massive amount of mortgages. This made them reconsider their housing situation and discover the ideas of minimalism and the general rising inclination in environmental awareness that serve as the basis for the tiny house movement. One of the largest benefits of inhabiting a tiny house is the decrease in carbon footprint. Also, to increase social behaviour and freedom. It’s better for the environmental concern, financial concerns, and desire for more time and freedom. Examples of the tiny house village which are sustaining homeless population and the use of different reclaimed materials for the construction of these tiny houses are explained in the paper. It is proposed in the paper, that these houses will reflect the diversity while proposing an alternative model for the rehabilitation of decaying row-homes and the renewal of fading communities. The core objective is to design small or micro spaces for the economically backward people of the place and increase their social behaviour and freedom. Also, it’s better for the environmental concern, financial concerns, and desire for more time and freedom.

Keywords: city renewal, environmental concern, micro-living, tiny house

Procedia PDF Downloads 163