Search results for: wind direction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2785

85 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It relies on processing the distribution, intensity, and direction of the forces captured by the sensors. Efficient hardware alternatives are now used increasingly across fields of application, allowing the implementation of computationally complex algorithms such as tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except the force reconstruction process, a stage in which they have been less applied. This work presents a hardware implementation of a model-driven approach reported in the literature for the contact force reconstruction of flat, rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of that model, this implementation parallelizes the tasks that facilitate the execution of matrix operations and a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to exploit the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate algorithm-parallelization techniques, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process, with low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to Finite Element Modeling (FEM) simulations of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform that allows the reconstruction of force vectors following a scalable approach, from information captured by tactile sensor arrays of up to 48×48 taxels that use various transduction technologies. The proposed implementation reduces the estimation time to 1/180 of that of software implementations. Despite the relatively high estimation errors, the information this implementation provides on the tangential and normal tractions and the triaxial reconstruction of forces makes it possible to adequately reconstruct the tactile properties of the touched object, which are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced further, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
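
The matrix-based step described above can be illustrated with a minimal Python sketch: per-taxel force vectors are recovered by regularized least squares under an assumed linear contact model. The influence matrix A here is a random placeholder standing in for an FEM-derived transfer matrix; the paper's actual model-driven algorithm and its FPGA mapping are not reproduced.

```python
# Minimal sketch of per-taxel force-vector estimation, assuming a linear
# contact model: measured normal stresses s = A @ f, where f stacks the
# (fx, fy, fz) components of every taxel and A is a hypothetical influence
# matrix (random here; in practice derived offline, e.g., from FEM).
import numpy as np

rng = np.random.default_rng(0)
n_taxels = 10 * 10                                  # 10x10 array, as in the abstract
A = rng.normal(size=(n_taxels, 3 * n_taxels))       # placeholder influence matrix
f_true = rng.normal(size=3 * n_taxels)              # unknown force components
s = A @ f_true + 0.01 * rng.normal(size=n_taxels)   # noisy stress readings

# Regularized least squares; each row of `forces` is one taxel's (fx, fy, fz)
lam = 1e-2
f_hat = np.linalg.solve(A.T @ A + lam * np.eye(3 * n_taxels), A.T @ s)
forces = f_hat.reshape(n_taxels, 3)
print("estimated force vector of taxel 0:", forces[0])
```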

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 156
84 Combination of Modelling and Environmental Life Cycle Assessment Approach for Demand Driven Biogas Production

Authors: Juan A. Arzate, Funda C. Ertem, M. Nicolas Cruz-Bournazou, Peter Neubauer, Stefan Junne

Abstract:

One of the biggest challenges the world faces today is global warming, caused by greenhouse gases (GHGs) from the combustion of fossil fuels for energy generation. In order to mitigate climate change, the European Union has committed to reducing GHG emissions to 80–95% below 1990 levels by the year 2050. Renewable technologies are vital to diminish energy-related GHG emissions. Since water and biomass are limited resources, the largest contributions to renewable energy (RE) systems will have to come from wind and solar power. Nevertheless, high proportions of fluctuating RE present a number of challenges, especially the need to balance the variable energy demand with the weather-dependent fluctuation of energy supply. Biogas plants can play an important role in this context, since they are easily adaptable. Feedstock availability varies locally and seasonally; however, there is a lack of knowledge about how biogas plants should be operated in a stable manner on local feedstock. This problem may be prevented through suitable control strategies. Such strategies require the development of convenient mathematical models that adequately describe the main processes. Modelling allows us to predict the system behavior of biogas plants when different feedstocks are used at different loading rates. Life cycle assessment (LCA) is a technique for analyzing the environmental aspects of a product from its creation to its disposal, and it is highly recommended as a decision-making tool. Suitable strategies combining flexible energy generation by biogas plants, a secure production process, and maximal environmental benefit can therefore be obtained by combining process modelling and LCA approaches. For this reason, this study focuses on a biogas plant that flexibly generates the required energy from the co-digestion of maize, grass, and cattle manure while emitting the lowest amount of GHGs. To achieve this goal, the AMOCO model was combined with LCA. A program was structured in Matlab to simulate any biogas process based on the AMOCO model, combined with the equations necessary to obtain the climate change, acidification, and eutrophication potentials of the whole production system based on the ReCiPe midpoint v.1.06 methodology. The simulation was optimized based on real data from operating biogas plants and existing literature. The results prove that the AMOCO model can successfully imitate the system behavior of biogas plants and the time required for the process to adapt in order to generate the demanded energy from the available feedstock. Combination with the LCA approach provided the opportunity to keep the resulting emissions from operation at the lowest possible level. This allows a prediction of the process in which feedstock utilization supports the establishment of closed material cycles within a smart bio-production grid, under the constraint of minimal drawbacks for the environment and maximal sustainability.
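
The coupling described above can be sketched minimally in Python rather than Matlab: a simplified two-step digestion model (acidogenesis then methanogenesis, the structure underlying AMOCO) is integrated, and the resulting methane stream is converted into a climate-change indicator. The Monod constants follow values published for the AMOCO model, while the stoichiometry, methane yield, fugitive-loss fraction, and characterization factors are illustrative assumptions, not the study's calibrated ReCiPe factors.

```python
# Simplified two-step digester + placeholder LCA characterization step.
import numpy as np
from scipy.integrate import solve_ivp

def digester(t, y, D, S1_in, S2_in):
    S1, S2 = y                         # substrate (g COD/L), VFA (mmol/L)
    r1 = 1.2 * S1 / (S1 + 7.1)         # acidogenesis rate (1/d), AMOCO-like
    r2 = 0.74 * S2 / (S2 + 9.28)       # methanogenesis rate (1/d), AMOCO-like
    return [D * (S1_in - S1) - r1 * S1,
            D * (S2_in - S2) + 1.5 * r1 * S1 - r2 * S2]

D = 0.1                                # dilution rate (1/d)
sol = solve_ivp(digester, (0, 60), [5.0, 10.0], args=(D, 10.0, 50.0))
S2 = sol.y[1, -1]                      # near-steady VFA level
q_ch4 = 0.27 * (0.74 * S2 / (S2 + 9.28)) * S2   # CH4 flow (L/L/d), placeholder yield

# LCA step: fugitive CH4 losses converted to climate change potential (kg CO2-eq)
ch4_leak = 0.02 * q_ch4                # 2% fugitive-loss assumption
gwp = ch4_leak * 0.716e-3 * 28         # CH4 density (kg/L) * GWP100 of CH4
print(f"CH4 ~ {q_ch4:.2f} L/(L*d); leak impact ~ {gwp:.5f} kg CO2-eq/(L*d)")
```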

Keywords: AMOCO model, GHG emissions, life cycle assessment, modelling

Procedia PDF Downloads 165
83 LES Simulation of a Thermal Plasma Jet with Modeled Anode Arc Attachment Effects

Authors: N. Agon, T. Kavka, J. Vierendeels, M. Hrabovský, G. Van Oost

Abstract:

A plasma jet model was developed with a rigorous method for calculating the thermophysical properties of the gas mixture without mixing rules. A simplified model of the anode effects was incorporated to allow validation of the simulations against experimental results. The radial heat transfer was under-predicted because of the limitations of the radiation model, but the calculated evolution of centerline temperature, velocity, and gas composition downstream of the torch exit corresponded well with the measured values. CFD modeling of thermal plasmas focuses either on the development of the plasma arc or on the flow of the plasma jet outside the plasma torch. In the former case, the Maxwell equations are coupled with the Navier-Stokes equations to account for the electromagnetic effects that control the movements of the anode arc attachment. In plasma jet simulations, however, the computational domain starts at the exit nozzle of the plasma torch, and the influence of arc attachment fluctuations on the plasma jet flow field is not included in the calculations. In that case, the thermal plasma flow is described by temperature, velocity, and concentration profiles at the torch exit nozzle, and no electromagnetic effects are taken into account. This simplified approach is widely used in the literature and is generally acceptable for plasma torches with a circular anode inside the torch chamber. The unique DC hybrid water/gas-stabilized plasma torch developed at the Institute of Plasma Physics of the Czech Academy of Sciences, on the other hand, has a rotating anode disk located outside the torch chamber. Neglecting the effects of the anode arc attachment downstream of the torch exit nozzle leads to erroneous predictions of the flow field. With the simplified approach introduced in this model, the Joule heating between the exit nozzle and the anode attachment position of the plasma arc is modeled by a volume heat source, and the jet deflection caused by the anode processes by a momentum source at the anode surface. Furthermore, radiation effects are included via the net emission coefficient (NEC) method, and diffusion is modeled with the combined diffusion coefficient method. The time-averaged simulation results are compared with numerous experimental measurements. The radial temperature profiles were obtained by spectroscopic measurements at different axial positions downstream of the exit nozzle. The velocity profiles were evaluated from the time-dependent evolution of flow structures recorded by photodiode arrays. The shape of the plasma jet was compared with charge-coupled device (CCD) camera pictures. In the cooler regions, the temperature was measured by an enthalpy probe downstream of the exit nozzle and by thermocouples in the radial direction around the torch nozzle. The model results correspond well with the experimental measurements. The decrease in centerline temperature and velocity is predicted within an acceptable range, and the shape of the jet closely resembles the jet structure in the recorded images. The temperatures at the edge of the jet are underestimated due to the absence of radial radiative heat transfer in the model.
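
The simplified anode treatment described above can be caricatured in a few lines: the arc's Joule heating between nozzle exit and anode attachment is lumped into a volumetric heat source, and the jet deflection is imposed as a momentum source in the anode-adjacent cells. All values below are hypothetical placeholders, not the torch's operating parameters.

```python
# Rough sketch of the two anode-effect source terms (hypothetical values).
import numpy as np

I_arc = 400.0            # arc current (A), assumed
U_drop = 30.0            # voltage drop between nozzle exit and anode (V), assumed
attach_volume = 2.0e-6   # volume of the heated arc-column segment (m^3), assumed

S_joule = I_arc * U_drop / attach_volume   # volumetric heat source (W/m^3)
F_anode = np.array([0.0, 50.0, 0.0])       # momentum source (N/m^3), assumed

# In a solver loop these would simply be added to the discretized equations
# of the affected cells, e.g.:
#   energy[cells]   += S_joule * cell_volume[cells] * dt
#   momentum[cells] += F_anode * cell_volume[cells] * dt
print(f"lumped Joule source: {S_joule:.2e} W/m^3")
```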

Keywords: anode arc attachment, CFD modeling, experimental comparison, thermal plasma jet

Procedia PDF Downloads 338
82 Nanoporous Activated Carbons for Fuel Cells and Supercapacitors

Authors: A. Volperts, G. Dobele, A. Zhurinsh, I. Kruusenberg, A. Plavniece, J. Locs

Abstract:

Energy consumption is constantly increasing, and the development of effective and cheap electrochemical power sources, such as fuel cells and electrochemical capacitors, is topical. Owing to their high specific power, fast charge and discharge rates, and long working lifetime, supercapacitor-based energy storage systems are used ever more extensively in mobile and stationary devices. Lignocellulosic materials are widely used as precursors and account for around 45% of the total raw materials used to manufacture activated carbon, which is the most suitable material for supercapacitors. The first part of our research is devoted to the influence of the main stages of wood thermochemical activation on the formation of the porous structure of activated carbons. It was found that the main factors governing the properties of carbon materials are specific surface area, volume and pore size distribution, particle dispersity, ash content, and the content of oxygen-containing groups. The influence of activated carbon attributes on the capacitance and working properties of supercapacitors is demonstrated. The correlation between the porous structure indices of activated carbons and the electrochemical specifications of supercapacitors with electrodes made from these materials has been determined. It is shown that when the synthesized activated carbons are used in supercapacitors, high specific capacitances can be reached: more than 380 F/g in 4.9 M sulfuric acid electrolyte and more than 170 F/g in 1 M tetraethylammonium tetrafluoroborate in acetonitrile. The power specifications and minimal price of H₂-O₂ fuel cells are limited by expensive platinum-based catalysts. The main direction in the development of non-platinum catalysts for oxygen reduction is the study of cheap porous carbonaceous materials, which can be obtained by the pyrolysis of polymers, including renewable biomass. It is known that nitrogen atoms in carbon materials largely determine the properties of doped activated carbons, such as high electrochemical stability, hardness, and electric resistance. The lack of sufficient knowledge on the doping of carbon materials calls for ongoing research into the properties and structure of the modified carbon matrix. In the second part of this study, highly porous activated carbons were synthesized by alkali thermochemical activation from wood, cellulose, and cellulose production residues (kraft lignin and sewage sludge). Activated carbon samples were doped with dicyandiamide and melamine for application as fuel cell cathodes. The conditions of nitrogen introduction (solvent, treatment temperature) and its content in the carbonaceous material, as well as porous structure characteristics such as specific surface area and pore size distribution, were studied. It was found that the efficiency of the doping reaction depends on the elemental oxygen content of the activated carbon. Relationships between nitrogen content, porous structure characteristics, and the electrochemical properties of the electrodes are demonstrated.
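
Gravimetric capacitances such as the 380 F/g quoted above are conventionally obtained from galvanostatic discharge tests via C = I·Δt/(m·ΔV). The sketch below shows the calculation with illustrative numbers, not the study's measurements.

```python
# Specific capacitance from a constant-current discharge: C = I*dt/(m*dV).
def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Gravimetric capacitance (F/g); all inputs here are illustrative."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# Example: a 10 mg electrode discharged at 5 mA over a 0.8 V window in 620 s
print(f"{specific_capacitance(0.005, 620.0, 0.010, 0.8):.0f} F/g")   # ~388 F/g
```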

Keywords: activated carbons, low-temperature fuel cells, nitrogen doping, porous structure, supercapacitors

Procedia PDF Downloads 95
81 Morphotropic Phase Boundary in Ferromagnets: Unusual Magnetoelastic Behavior In Tb₁₋ₓNdₓCo₂

Authors: Adil Murtaza, Muhammad Tahir Khan, Awais Ghani, Chao Zhou, Sen Yang, Xiaoping Song

Abstract:

The morphotropic phase boundary (MPB), a boundary between two different crystallographic symmetries in the composition-temperature phase diagram, has been widely studied in ferroelectrics and has recently drawn interest in ferromagnets as a route to enhanced field-induced strain. At the MPB, the system attains a flattened free-energy state, which allows the polarization to rotate freely and hence results in a high magnetoelastic response (e.g., high magnetization, low coercivity, and large magnetostriction). Based on the same mechanism, we designed an MPB in the ferromagnetic Tb₁₋ₓNdₓCo₂ system. The temperature-dependent magnetization curves showed spin reorientation (SR), which can be explained by a two-sublattice model. Contrary to previously reported MPB-involved ferromagnetic systems, the MPB composition Tb₀.₃₅Nd₀.₆₅Co₂ exhibits a low saturation magnetization (MS), indicating a compensation of the Tb and Nd magnetic moments at the MPB. The coercive field (HC) under a low magnetic field and the first anisotropy constant (K₁) show a minimum at the MPB composition x=0.65. A detailed spin configuration diagram is provided for Tb₁₋ₓNdₓCo₂ around the composition of anisotropy compensation; this can guide the development of novel magnetostrictive materials. The anisotropic magnetostriction (λS) first decreased until x=0.8 and then continuously increased in the negative direction with further increase of the Nd concentration. In addition, a large ratio between the magnetostriction and the absolute value of the first anisotropy constant (λS/K₁) appears at the MPB, indicating that Tb₀.₃₅Nd₀.₆₅Co₂ has good magnetostrictive properties. The present work shows an anomalous type of MPB in ferromagnetic materials, revealing that an MPB can also lead to a weakening of magnetoelastic behavior, as shown in the ferromagnetic Tb₁₋ₓNdₓCo₂ system. Our work shows the universal presence of MPBs in ferromagnetic materials and highlights the differences between ferromagnetic MPB systems, which are important for substantial improvement of magnetic and magnetostrictive properties. Based on these results, similar MPB effects might be achieved in other ferroic systems for technological applications. The finding of a magnetic MPB in a ferromagnetic system has several important implications. First, it provides a better understanding of spin reorientation transitions (SRTs): such ferro-to-ferro transitions involve not only a reorientation of the magnetization but also a change of crystal symmetry upon magnetic ordering. Second, the flattened free energy near the MPB corresponds to a low energy barrier for magnetization rotation and an enhanced magnetoelastic response. Third, to attain large magnetostriction with the MPB approach, the two terminal compounds should have different easy magnetization directions below the Curie temperature Tc, so that the magnetization anisotropy weakens at the MPB (as in ferroelectrics), easing magnetic domain switching; in addition, the difference in lattice distortion between the two terminal compounds should be large enough (e.g., the lattice distortion of the R symmetry much greater than that of the T symmetry), so that the MPB composition corresponds to a nearly isotropic state with a large 'net' lattice distortion, reflected in a higher magnetostriction.
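
A schematic way to see why K₁ can vanish at an intermediate composition, in a textbook single-ion mixing picture rather than the paper's fitted model, is:

```latex
% Tb and Nd sublattices contribute first anisotropy constants of opposite
% sign, so in a linear (single-ion) mixing approximation
\[
  K_1(x) \approx (1-x)\,K_1^{\mathrm{Tb}} + x\,K_1^{\mathrm{Nd}},
  \qquad K_1^{\mathrm{Tb}} K_1^{\mathrm{Nd}} < 0 ,
\]
% giving an anisotropy-compensation (MPB) composition
\[
  x_{\mathrm{MPB}} = \frac{K_1^{\mathrm{Tb}}}{K_1^{\mathrm{Tb}} - K_1^{\mathrm{Nd}}},
\]
% at which magnetization rotation is nearly barrier-free and the figure of
% merit \(\lambda_S/|K_1|\) quoted in the abstract peaks.
```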

Keywords: magnetization, magnetostriction, morphotropic phase boundary (MPB), phase transition

Procedia PDF Downloads 116
80 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction

Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal

Abstract:

Traditionally, monsoon forecasts have encountered many difficulties stemming from issues such as the lack of adequate upper-air observations, the mesoscale nature of convection, model resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, the representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each carry a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, time, and across variables. This is the basic concept behind the multi-model superensemble, which comprises a training phase and a forecast phase. The training phase learns from the recent past performance of the models and determines statistical weights through least-squares minimization via simple multiple regression. These weights are then used in the forecast phase. Superensemble forecasts carry higher skill than the simple ensemble mean, the bias-corrected ensemble mean, and the best of the participating member models. The approach is a powerful post-processing method for estimating weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters such as temperature, humidity, wind speed, and mean sea level pressure, in this paper it is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. The present study develops advanced superensemble schemes based on 1-5 day daily precipitation forecasts from five state-of-the-art GCMs, i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Center for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada), and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), one of the most complete datasets available. The novel approaches include a dynamical model selection approach, in which the superior models among the participating members are selected at each grid point and for each forecast step in the training period. A multi-model superensemble trained on similar conditions is also discussed, based on the assumption that training on similar conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods from the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have been tested in combination with the above-mentioned approaches. Comparison of these schemes against observations verifies that the newly developed approaches provide a more unified and skillful prediction of summer monsoon (June to September) rainfall than the conventional multi-model approach and the member models.
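
The training and forecast phases can be condensed into a short sketch following the standard superensemble formulation (observed climatology plus regression-weighted model anomalies); synthetic data stand in for the five TIGGE GCMs.

```python
# Superensemble: weights from least squares on training anomalies, then
# forecast = obs climatology + weighted sum of model anomalies.
import numpy as np

rng = np.random.default_rng(1)
n_train, n_models = 120, 5
F_train = rng.normal(size=(n_train, n_models))            # member forecasts
obs = F_train @ np.array([0.5, 0.1, 0.2, 0.05, 0.15]) \
      + 0.3 * rng.normal(size=n_train)                    # synthetic truth

f_bar = F_train.mean(axis=0)                              # model climatologies
o_bar = obs.mean()                                        # observed climatology
w, *_ = np.linalg.lstsq(F_train - f_bar, obs - o_bar, rcond=None)

F_new = rng.normal(size=(10, n_models))                   # new forecasts
superensemble = o_bar + (F_new - f_bar) @ w
print("weights:", np.round(w, 3))
```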

Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction

Procedia PDF Downloads 111
79 Learning from the Positive to Encourage Compliance with Workplace Health and Safety

Authors: Amy Williamson, Kerry Armstrong, Jason Edwards, Patricia Obst

Abstract:

Australian national policy endorses a responsive approach to work health and safety (WHS) regulation, combining positive motivators (education and guidance) with compliance monitoring and enforcement to encourage and secure compliance with legislation. Despite theoretical support for responsive regulation, there is limited evidence regarding how to achieve the best results in practice. Using positive psychology as a novel paradigm, this study investigates how non-punitive regulatory interactions can be improved to further encourage regulatory compliance in the construction industry. As part of a larger project, semi-structured interviews were conducted with 35 inspectorate staff and 11 managers in the Australian (Queensland) construction industry. Using an inductive, grounded approach, an in-depth qualitative investigation was conducted to identify the positive psychological principles that underpin effective use of the non-punitive aspects of responsive regulation. Results highlighted the importance of effective engagement between inspectors and industry managers, which involved the need to interact cooperatively and encourage compliance with WHS legislation. Several strategies were identified that assisted regulatory interactions and the ability of inspectors to engage. Communication and interpersonal skills were reported to be critical to any interaction, regardless of the nature of the visit and the regulatory tools used. In particular, clear and open communication fostered trust and rapport, which facilitated more positive interactions. The importance of respect and empathy was also highlighted, as was the provision of guidance and direction on how to achieve compliance. This included ensuring that companies understand their WHS obligations, providing specific advice on how to rectify a breach and meet compliance requirements, and following up sufficiently to confirm that compliance is successfully achieved. In the absence of imminent risk, allowing companies the opportunity to comply before further action is taken was also highlighted, as was increased proactive engagement with industry to educate and promote the vision of safety at work. Finally, the provision of praise and positive feedback was reported to assist interactions and encourage the continuation of good practices. Evidence from positive psychology and organisational psychology supports the use of each strategy in practice. In particular, the area of positive leadership provided a useful framework for considering the factors and conditions that drive positive interactions within the context of work health and safety, and the specific relationship between inspectors and industry managers. This study provides fresh insight into the key psychological principles that support non-punitive regulatory interactions in workplace health and safety. The findings contribute to a better understanding of how inspectors can enhance the efficacy of their regulatory interactions to improve compliance with legislation. Encouraging and assisting compliance through effective non-punitive activity offers a sustainable pathway for promoting safety and preventing fatalities and injuries in the construction industry.

Keywords: engagement, non-punitive approaches to compliance, positive interactions in the workplace, work health and safety compliance

Procedia PDF Downloads 125
78 Archaeoseismological Evidence for a Possible Destructive Earthquake in the 7th Century AD at the Ancient Sites of Bulla Regia and Chemtou (NW Tunisia): Seismotectonic and Structural Implications

Authors: Abdelkader Soumaya, Noureddine Ben Ayed, Ali Kadri, Said Maouche, Hayet Khayati Ammar, Ahmed Braham

Abstract:

The historic sites of Bulla Regia and Chemtou are among the most important archaeological monuments in northwestern Tunisia, which flourished as large, wealthy settlements during the Roman and Byzantine periods (2nd to 7th centuries AD). An archaeoseismological study provides the first indications of the impact of a possible strong ancient earthquake in the destruction of these cities. Based on previous archaeological excavation results, including numismatic evidence, pottery, economic decline, and urban transformation, the abrupt ruin and destruction of Bulla Regia and Chemtou can be bracketed between 613 and 647 AD. In this study, we made the first attempt to analyze the earthquake archaeological effects (EAEs) observed during our field investigations in these two historic cities. The damage includes several types of EAEs: folds in regular pavements, displaced and deformed vaults, folded walls, tilted walls, collapsed keystones in arches, dipping broken corners, displaced or fallen columns, block extrusions in walls, penetrative fractures in brick-made walls, and open fractures in regular pavements. These deformations are spread over 10 different sectors or buildings and comprise 56 measured EAEs. The structural analysis of the identified EAEs points to an ancient destructive earthquake that probably ruined the Bulla Regia and Chemtou archaeological sites. We then analyzed these measurements using structural geological analysis to obtain the maximum horizontal strain of the ground (Sₕₘₐₓ) for each building-oriented damage feature. After collecting and analyzing these strain datasets, we plotted the orientations of the Sₕₘₐₓ trajectories on the map of the archaeological site (Bulla Regia). We conclude that the Sₕₘₐₓ trajectories obtained within this site can be related to the mean direction of ground motion (oscillatory movement of the ground) triggered by a seismic event, as documented for some historical earthquakes across the world. These Sₕₘₐₓ orientations closely match the current active stress field, as highlighted by some instrumental events in northern Tunisia. In terms of the seismic source, we strongly suggest that the reactivation of a neotectonic strike-slip fault trending N50E is responsible for this probable historic earthquake and for the recent instrumental seismicity in this area. This fault segment, affecting the folded Quaternary deposits south of Jebel Rebia, passes through the monument of Bulla Regia. Stress inversion of the observed and measured data along this fault shows an N150–160 trend of Sₕₘₐₓ under a transpressional tectonic regime, which is quite consistent with the GPS data and the current stress field in this region.
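
One computational step named above, averaging the building-damage Sₕₘₐₓ orientations, can be sketched as follows. Orientations are axial data (θ and θ+180° describe the same axis), so the mean is taken after doubling the angles; the sample values are hypothetical, not the study's 56 EAE measurements.

```python
# Axial (orientation) mean of S_Hmax trends via angle doubling.
import numpy as np

shmax_deg = np.array([150, 155, 162, 148, 158, 165, 152])   # placeholder data
doubled = np.deg2rad(2 * shmax_deg)
mean_axis = 0.5 * np.rad2deg(np.arctan2(np.sin(doubled).mean(),
                                        np.cos(doubled).mean())) % 180
print(f"mean S_Hmax trend: N{mean_axis:.0f}")   # ~N156, within N150-160
```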

Keywords: NW Tunisia, archaeoseismology, earthquake archaeological effect, Bulla Regia-Chemtou, seismotectonics, neotectonic fault

Procedia PDF Downloads 12
77 Edmonton Urban Growth Model as a Support Tool for the City Plan Growth Scenarios Development

Authors: Sinisa J. Vukicevic

Abstract:

Edmonton is currently one of the youngest North American cities and has achieved significant growth over the past 40 years. This strong urban shift requires a new approach to how the city is envisioned, planned, and built: evidence-based scenario development, in which an urban growth model was a key support tool for framing Edmonton's development strategies, developing urban policies, and assessing policy implications. The urban growth model was developed on the Metronamica software platform. The Metronamica land use model evaluated the dynamics of land use change under the influence of key development drivers (population and employment), zoning, land suitability, and land and activity accessibility. The model was designed following the Big City Moves ideas: become greener as we grow, develop a rebuildable city, ignite a community of communities, foster a healing city, and create a city of convergence. The Big City Moves were converted into three development scenarios: 'Strong Central City', 'Node City', and 'Corridor City'. Each scenario has a narrative story expressing its high-level goal, its approach to residential and commercial activities, its transportation vision, and its employment and environmental principles. Land use demand was calculated for each scenario according to specific density targets. Spatial policies were analyzed according to their level of importance within the policy set of each scenario, as well as through the policy measures. The model was calibrated to reproduce the known historical land use pattern, using 2006 and 2011 land use data; validation was done independently, against 2016 data that had not been used for calibration. In general, the modeling process contained three main phases: 'from qualitative storyline to quantitative modelling', 'model development and model run', and 'from quantitative modelling to qualitative storyline'. The model also incorporates five spatial indicators: distance from residential to work, distance from residential to recreation, distance to the river valley, urban expansion, and habitat fragmentation. The major findings of this research can be viewed from two perspectives: planning and technology. The planning perspective evaluates the model as a tool for scenario development. Using the model, we explored the land use dynamics influenced by different sets of policies. The model enables a direct comparison between the three scenarios: we explored their similarities and differences and their quantitative indicators, including land use change, population change (and spatial allocation), job allocation, density (population, employment, and dwelling units), habitat connectivity, and proximity to objects of interest. From the technology perspective, the model showed one very important characteristic: flexibility. The direction of policy testing changed many times during the consultation process, and the model's flexibility in accommodating all these changes was highly appreciated. The model satisfied our needs as a scenario development and evaluation tool, and also as a communication tool during the consultation process.
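
The allocation logic behind Metronamica-style land use models can be sketched in a few lines: each cell's transition potential combines zoning, suitability, accessibility, and a neighbourhood effect, and the demand derived from population and employment is allocated to the highest-potential cells. All grids, weights, and the demand figure are synthetic placeholders, not the Edmonton model's calibrated inputs.

```python
# Constrained cellular-automata allocation sketch (synthetic inputs).
import numpy as np

rng = np.random.default_rng(2)
n_cells, demand = 1000, 120                    # cells to convert to residential

suitability = rng.uniform(size=n_cells)        # physical suitability [0, 1]
zoning = rng.integers(0, 2, n_cells)           # 1 = development allowed
accessibility = rng.uniform(size=n_cells)      # access to roads and centres
neighbourhood = rng.uniform(size=n_cells)      # attraction of surrounding uses

potential = zoning * (0.4 * suitability + 0.3 * accessibility
                      + 0.3 * neighbourhood)
converted = np.argsort(potential)[::-1][:demand]   # highest-potential cells win
print("first cells converted to residential:", converted[:5])
```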

Keywords: urban growth model, scenario development, spatial indicators, Metronamica

Procedia PDF Downloads 71
76 A Mixed-Methods Design and Implementation Study of ‘the Attach Project’: An Attachment-Based Educational Intervention for Looked after Children in Northern Ireland

Authors: Hannah M. Russell

Abstract:

‘The Attach Project’ (TAP) is an educational intervention aimed at improving educational and socio-emotional outcomes for children who are looked after. TAP is underpinned by Attachment Theory and is adapted from Dyadic Developmental Psychotherapy (DDP), a treatment for children and young people impacted by complex trauma and disorders of attachment. TAP was implemented in primary schools in Northern Ireland throughout the 2018/19 academic year. During this time, a design and implementation study was conducted to assess the promise of effectiveness for the future dissemination and ‘scaling-up’ of the programme for a larger randomised controlled trial. TAP has been designed specifically for implementation in a school setting and comprises a whole-school element and a more individualised Key Adult-Key Child pairing. This design and implementation study utilises a mixed-methods research design consisting of quantitative, qualitative, and observational measures, with stakeholder input and involvement considered an integral component. Quantitative measures, such as self-report questionnaires administered prior to and eight months following the implementation of TAP, enabled analysis of the strength and direction of relations between the various components of the programme, as well as the influence of implementation factors. Qualitative measures, incorporating semi-structured interviews and focus groups, enabled the assessment of implementation factors, the identification of implementation barriers, and potential methods of addressing these issues. Observational measures facilitated the continual development and improvement of ‘TAP training’ for school staff. Preliminary findings provide evidence of promise for the effectiveness of TAP and indicate the potential benefits of introducing this type of attachment-based intervention across other educational settings. This type of intervention could benefit not only children who are looked after but all children impacted by complex trauma or disorders of attachment. Furthermore, findings from this study demonstrate that it is possible for children to form a secondary attachment relationship with a significant adult in school. However, various implementation factors were identified that should be addressed, such as the need to introduce protected time to facilitate the development of a positive Key Adult-Key Child relationship. Furthermore, additional ‘re-cap’ training is required in future dissemination of the programme to maximise ‘attachment friendly practice’ across the whole staff team. Qualitative findings also indicated a general opinion among school staff that this type of Key Adult-Key Child pairing could be more effective if introduced as soon as children begin primary school. This research provides ample evidence of the need to introduce relationally based interventions in schools, to help ensure that children who are looked after, or who are impacted by complex trauma or disorders of attachment, can thrive in the school environment. In addition, it has facilitated the identification of important implementation factors and barriers, which can be addressed prior to the ‘scaling-up’ of TAP for a robust randomised controlled trial.

Keywords: attachment, complex trauma, educational interventions, implementation

Procedia PDF Downloads 146
75 Modern Architecture and the Scientific World Conception

Authors: Sean Griffiths

Abstract:

Introduction: This paper examines the expression of ‘objectivity’ in architecture in the context of the post-war rejection of this concept. It aims to re-examine the question in light of the assault on truth characterizing contemporary culture and of the unassailable truth of the climate emergency. The paper analyses the search for objective truth as it was pursued by the Modern Movement in the early 20th century, looking at the extent to which this quest succeeded in contributing to the development of a radically new, politically informed architecture, and the extent to which its particular interpretation of objectivity limited that development. The paper studies the influence of the Vienna Circle philosophers Rudolf Carnap and Otto Neurath on the pedagogy of the Bauhaus and the architecture of the Neue Sachlichkeit in Germany. Their logical positivism sought to determine objective truths through empirical analysis, expressed in an austere formal language, as part of a ‘scientific world conception’ that would overcome metaphysics and unverifiable mystification. These ideas, and the concurrent prioritizing of measurement as the determinant of environmental quality, became key influences on the socially driven architecture constructed in the 1920s and 30s by Bauhaus architects in numerous German cities. Methodology: The paper reviews the history of the early Modern Movement and summarizes accounts of the relationship between the Vienna Circle and the Bauhaus. It examines key differences in the approaches Neurath and Carnap took to achieving their shared philosophical and political aims. It analyses how the adoption of Carnap’s foundationalism influenced the architectural language of modern architecture and compares, through a close reading of the structure of Neurath’s ‘protocol sentences’, the latter’s alternative approach, speculating on the possibility that its adoption offered a different direction of travel for Modern Architecture. Findings: The paper finds that the adoption of Carnap’s foundationalism, while helping Modern Architecture forge a new visual language, ultimately limited its development and is implicated in its failure to escape the very metaphysics against which it had set itself. It speculates that Neurath’s relational, language-based approach to establishing objectivity has its architectural corollary in processes of revision and renovation, offering new ways in which an ‘objective’ language of architecture might be developed that is more responsive to our present-day crisis. Conclusion: The philosophical principles of the Vienna Circle and the architects of the Modern Movement had much in common. Both contributed to radical historical departures which sought to instantiate a scientific world conception in their respective fields, banish mystification and metaphysics, and align themselves with socialism. However, in adopting Carnap’s foundationalism as the theoretical basis for the new architecture, Modern Architecture not only failed to escape metaphysics but arguably closed off new avenues of development to itself. The adoption of Neurath’s more open-ended and interactive approach to objectivity offers possibilities for new conceptions of the expression of objectivity in architecture that may be better tailored to the multiple crises we face today.

Keywords: Bauhaus, logical positivism, Neue Sachlichkeit, rationalism, Vienna Circle

Procedia PDF Downloads 51
74 Effectiveness of Dry Needling with and without Ultrasound Guidance in Patients with Knee Osteoarthritis and Patellofemoral Pain Syndrome: A Systematic Review and Meta-Analysis

Authors: Johnson C. Y. Pang, Amy S. N. Fu, Ryan K. L. Lee, Allan C. L. Fu

Abstract:

Dry needling (DN) is a puncturing method that involves the insertion of needles into tender spots of the human body without the injection of any substance. DN has long been used to treat patients with knee pain caused by knee osteoarthritis (KOA) and patellofemoral pain syndrome (PFPS), but the evidence for its effectiveness is still inconsistent. This study conducted a systematic review and meta-analysis to assess the intervention methods and effects of DN, with and without ultrasound guidance, for treating pain and dysfunction in people with KOA and PFPS. Design: This systematic review adhered to the PRISMA reporting guidelines. The registration number of the study protocol in the PROSPERO database is CRD42021221419. Six electronic databases were searched manually in November 2020: CINAHL Complete (1976-2020), Cochrane Library (1996-2020), EMBASE (1947-2020), Medline (1946-2020), PubMed (1966-2020), and PsycINFO (1806-2020). Randomized controlled trials (RCTs) and controlled clinical trials were included to examine the effects of DN on knee pain, including KOA and PFPS. The key concepts were: DN, acupuncture, ultrasound guidance, KOA, and PFPS. Risk of bias assessment and qualitative analysis were conducted by two independent reviewers using the PEDro score. Results: Fourteen articles met the inclusion criteria, eight of which were high-quality papers according to the PEDro score. There were variations in DN technique, including the direction and depth of insertion, the number of needles, the duration of stay, needle manipulation, and the number of treatment sessions. Meta-analysis was conducted on eight articles. The DN group showed positive short-term effects (from immediately after DN to less than 3 months) on pain reduction for both KOA and PFPS, with an overall standardized mean difference (SMD) of -1.549 (95% CI = -2.511 to -0.588) and high heterogeneity (P=0.002, I²=96.3%). In subgroup analysis, DN demonstrated significant pain reduction in PFPS (P < 0.001) that was not found in subjects with KOA (P=0.302). At 3 months post-intervention, DN also induced significant pain reduction in both KOA and PFPS, with an overall SMD of -0.916 (95% CI = -1.699 to -0.133) and high heterogeneity (P=0.022, I²=95.63%). In addition, DN induced significant short-term improvement in function when the analysis was conducted on both the KOA and PFPS groups, with an overall SMD of 6.069 (95% CI = 3.544 to 8.595) and high heterogeneity (P<0.001, I²=98.56%). In subgroup analysis, only PFPS showed a positive result (SMD=6.089, P<0.001), while the KOA result was statistically insignificant (P=0.198) in the short term. Similarly, at 3 months post-intervention, significant improvement in function after DN was found when the analysis was conducted on both groups, with an overall SMD of 5.840 (95% CI = 2.428 to 9.252) and high heterogeneity (P<0.001, I²=99.1%), but only PFPS showed significant improvement in subgroup analysis (P=0.002, I²=99.1%). Conclusions: The application of DN in KOA and PFPS patients varies among practitioners. DN is effective in reducing pain and dysfunction in the short term and at 3 months post-intervention in individuals with PFPS. To the best of our knowledge, no study has reported the effects of DN with ultrasound guidance on KOA and PFPS. The longer-term effects of DN on KOA and PFPS await further study.
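
The pooling behind figures such as the overall SMD and I² can be sketched with a standard DerSimonian-Laird random-effects computation; the study-level SMDs and variances below are placeholders, not the review's extracted data.

```python
# Random-effects meta-analysis: pooled SMD, Cochran's Q, I^2 (placeholder data).
import numpy as np

smd = np.array([-1.9, -0.6, -2.4, -1.2, -0.8])   # per-study SMDs (placeholders)
var = np.array([0.20, 0.15, 0.30, 0.18, 0.12])   # per-study SMD variances

w = 1 / var                                      # fixed-effect weights
mean_fe = np.sum(w * smd) / w.sum()
q = np.sum(w * (smd - mean_fe) ** 2)             # Cochran's Q
df = len(smd) - 1
i2 = max(0.0, (q - df) / q) * 100                # I^2 heterogeneity (%)

tau2 = max(0.0, (q - df) / (w.sum() - np.sum(w**2) / w.sum()))  # DL tau^2
w_re = 1 / (var + tau2)                          # random-effects weights
pooled = np.sum(w_re * smd) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
print(f"pooled SMD = {pooled:.3f} "
      f"(95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f}), "
      f"I^2 = {i2:.1f}%")
```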

Keywords: dry needling, knee osteoarthritis, patellofemoral pain syndrome, ultrasound guidance

Procedia PDF Downloads 110
73 Field-Testing a Digital Music Notebook

Authors: Rena Upitis, Philip C. Abrami, Karen Boese

Abstract:

The success of one-on-one music study relies heavily on the ability of the teacher to provide sufficient direction to students during weekly lessons so that they can practice successfully from one lesson to the next. Traditionally, these instructions are given in a paper notebook, where the teacher makes notes for the student after describing a task or demonstrating a technique. The ability of students to make sense of these notes varies according to their understanding of the teacher’s directions, their motivation to practice, their memory of the lesson, and their ability to self-regulate. At best, the notes enable the student to progress successfully; at worst, the student is left rudderless until the next lesson takes place. Digital notebooks have the potential to provide a more interactive and effective bridge between music lessons than traditional pen-and-paper notebooks. One such digital notebook, Cadenza, was designed to streamline and improve teachers’ instruction, to enhance student practicing, and to provide the means for teachers and students to communicate between lessons. For example, Cadenza contains a video annotator, where teachers can offer real-time guidance on uploaded student performances. Using the checklist feature, teachers and students negotiate the frequency and type of practice during the lesson, which the student can then access during subsequent practice sessions. Following the tenets of self-regulated learning, goal setting and reflection are also featured. Accordingly, the present paper addressed the following research questions: (1) How does the use of the Cadenza digital music notebook engage students and their teachers? (2) Which features of Cadenza are most successful? (3) Which features could be improved? (4) Are student learning and motivation enhanced with the use of the Cadenza digital music notebook? The paper describes the results of 10 months of field-testing of Cadenza, structured around the four research questions outlined above. Six teachers and 65 students took part in the study. Data were collected through video-recorded lesson observations, digital screen captures, surveys, and interviews. Standard qualitative protocols for coding results and identifying themes were employed to analyze the results. The results consistently indicated that teachers and students embraced the digital platform offered by Cadenza. The practice log and timer, the real-time annotation tool, the checklists, the lesson summaries, and the commenting features were found to be the most valuable functions by students and teachers alike. Teachers also reported that students using Cadenza progressed more quickly and received higher examination results than students who were not using it. Teachers identified modifications to Cadenza that would make it an even more powerful support for student learning. These modifications, once implemented, will move the tool well past traditional notebook uses towards new ways of motivating students to practise between lessons and to communicate with teachers about their learning. Improvements called for by the teachers included the ability to duplicate archived lessons, split-screen viewing, and the addition of goal setting to the teacher window. In the concluding section, the proposed modifications and their implications for self-regulated learning are discussed.

Keywords: digital music technologies, electronic notebooks, self-regulated learning, studio music instruction

Procedia PDF Downloads 227
72 Landslide Hazard Assessment Using Physically Based Mathematical Models in Agricultural Terraces at Douro Valley in North of Portugal

Authors: C. Bateira, J. Fernandes, A. Costa

Abstract:

The Douro Demarcated Region (DDR) is the production region of Port wine. In NE Portugal, the strong incision of the Douro valley has produced very steep slopes organized into agricultural terraces, which have undergone an intense and deep transformation to allow mechanization of the work. The old terrace system, based on vertical stone retaining walls, was replaced by terraces with earth embankments, which have experienced widespread instability. This embankment instability has important economic and financial consequences for the agricultural enterprises. This paper presents and develops cartographic tools to assess embankment instability and identify the areas prone to it. The priority in this evaluation is the use of physically based mathematical models and the development of a validation process based on an inventory of past embankment failures. We used the shallow landslide stability model SHALSTAB, based on physical parameters such as cohesion (c'), friction angle (φ), hydraulic conductivity, soil depth, soil specific weight (ϱ), slope angle (α), and contributing areas computed by the Multiple Flow Direction (MFD) method. A terraced area can be analysed with such models only if very detailed information representative of the terrain morphology is available, since the slope angle and the contributing areas depend on it. We achieved this using digital elevation models (DEMs) of very high resolution (40 cm pixels), derived from a set of photographs taken from a flight at 100 m altitude with 12 cm pixel resolution. The slope angle results from this DEM. On the other hand, the MFD contributing area models the internal flow and is an important element in defining the spatial variation of soil saturation; it is likewise based on the DEM. This is supported by the observation that the interflow, although not coincident with the superficial flow, has important similarities to it. Electrical resistivity monitoring values were related to the MFD contributing areas built from a 1 m resolution DEM and revealed a consistent correlation: the analysis performed in the area showed a good correlation, with R² of 0.72 and 0.76 at 1.5 m and 2 m depth, respectively. Considering this, a 1 m resolution DEM was the basis for modelling the real internal flow, and we assumed that the 1 m resolution contributing area modelled by MFD is representative of the internal flow of the area. To solve this problem, we used a set of generalized DEMs to build the contributing areas used in SHALSTAB. Those DEMs, at several resolutions (1 m and 5 m), were built from a set of photographs with 50 cm resolution taken from a flight at 5 km altitude. Using these map combinations, we modelled several final maps of terrace instability and performed a validation process with the contingency matrix. The best final instability map combines the slope map from a 40 cm resolution DEM and an MFD map from a 1 m resolution DEM, with a True Positive Rate (TPR) of 0.97, a False Positive Rate (FPR) of 0.47, Accuracy (ACC) of 0.53, Precision (PPV) of 0.0004, and a TPR/FPR ratio of 2.06.
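
The SHALSTAB criterion itself is compact: for each cell it returns the critical steady-state recharge-to-transmissivity ratio q_c/T at which the infinite-slope factor of safety reaches 1, given cohesion, friction angle, soil depth, soil bulk density, slope angle, and the contributing area per unit contour length from the MFD routing. The parameter values below are illustrative, not the calibrated Douro values.

```python
# SHALSTAB threshold: q_c/T = (b/a) sin(theta) *
#   [ c'/(rho_w g z cos^2(theta) tan(phi)) + (rho_s/rho_w)(1 - tan(theta)/tan(phi)) ]
import numpy as np

g, rho_w = 9.81, 1000.0                  # gravity (m/s^2), water density (kg/m^3)

def shalstab_qc_over_t(slope_rad, a_over_b, c=2000.0, phi_deg=32.0,
                       z=1.5, rho_s=1700.0):
    """Critical q/T (1/m); cells receiving lower recharge are predicted stable."""
    phi = np.deg2rad(phi_deg)
    cohesion = c / (rho_w * g * z * np.cos(slope_rad) ** 2 * np.tan(phi))
    friction = (rho_s / rho_w) * (1 - np.tan(slope_rad) / np.tan(phi))
    return (np.sin(slope_rad) / a_over_b) * (cohesion + friction)

# Example cell: 35 deg terrace slope, contributing area per contour length 25 m
qc_t = shalstab_qc_over_t(np.deg2rad(35.0), 25.0)
print(f"critical q/T = {qc_t:.2e} 1/m (log10 = {np.log10(qc_t):.2f})")
```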

Keywords: agricultural terraces, cartography, landslides, SHALSTAB, vineyards

Procedia PDF Downloads 151
71 Sea Level Rise and Sediment Supply Explain Large-Scale Patterns of Saltmarsh Expansion and Erosion

Authors: Cai J. T. Ladd, Mollie F. Duggan-Edwards, Tjeerd J. Bouma, Jordi F. Pages, Martin W. Skov

Abstract:

Salt marshes are valued for their role in coastal flood protection and carbon storage, and for supporting biodiverse ecosystems. As a biogeomorphic landscape, marshes evolve through complex interactions between sea level rise, sediment supply, wave/current forcing, and socio-economic factors. Climate change and direct human modification could lead to a global decline in marsh extent if left unchecked. Whilst the processes of saltmarsh erosion and expansion are well understood, empirical evidence on the key drivers of long-term lateral marsh dynamics is lacking. In a GIS, saltmarsh areal extent in 25 estuaries across Great Britain was calculated from historical maps and aerial photographs at intervals of approximately 30 years between 1846 and 2016. Data on the key perceived drivers of lateral marsh change (namely sea level rise rates, suspended sediment concentration, bedload sediment flux rates, and the frequency of both river flood and storm events) were collated from national monitoring centres. Continuous datasets did not extend back beyond 1970; the predictor variables that best explained the rate of change of marsh extent between 1970 and 2016 were therefore identified using a Partial Least Squares Regression model. Information about the spread of Spartina anglica (an invasive marsh plant responsible for marsh expansion around the globe) and about coastal engineering works that may have impacted marsh extent was also recorded from historical documents, and their impacts on long-term, large-scale marsh extent change were assessed. Results showed that salt marshes in the northern regions of Great Britain expanded at an average of 2.0 ha/yr, whilst marshes in the south eroded at an average of 5.3 ha/yr. Spartina invasion and coastal engineering works could not explain these trends, since a trend of either expansion or erosion preceded these events. Results from the Partial Least Squares Regression model indicated that the rate of relative sea level rise (RSLR) and the availability of suspended sediment (SSC) best explained the patterns of marsh change. RSLR increased from 1.6 to 2.8 mm/yr as SSC decreased from 404.2 to 78.56 mg/l along the north-to-south gradient of Great Britain, resulting in the shift from marsh expansion to erosion. Regional differences in RSLR and SSC are due to isostatic rebound since deglaciation and to tidal amplitudes, respectively. Low RSLR and high SSC likely lead to sediment accumulation at the coast suitable for colonisation by marsh plants, and thus to lateral expansion. In contrast, under high RSLR deposition likely cannot keep pace when SSC is low, so the average water depth at the marsh edge increases, allowing larger wind-waves to trigger marsh erosion. Current global declines in sediment flux to the coast are likely to diminish the resilience of salt marshes to RSLR. Monitoring and managing suspended sediment supply is not commonplace, but may be critical to mitigating coastal impacts of climate change.
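
The regression step can be sketched as follows, with synthetic values spanning the reported RSLR and SSC ranges standing in for the 25-estuary dataset; standardized PLSR coefficients then indicate which drivers carry the most explanatory weight.

```python
# PLSR of marsh-extent change rate on candidate drivers (synthetic data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n = 25                                         # estuaries
rslr = rng.uniform(1.6, 2.8, n)                # mm/yr
ssc = rng.uniform(78.6, 404.2, n)              # mg/l
bedload = rng.normal(size=n)                   # standardized flux
storms = rng.normal(size=n)                    # standardized frequency
X = np.column_stack([rslr, ssc, bedload, storms])

# Synthetic response: expansion under high SSC / low RSLR, plus noise (ha/yr)
y = 0.02 * ssc - 3.0 * rslr + rng.normal(scale=1.0, size=n)

pls = PLSRegression(n_components=2)
pls.fit((X - X.mean(0)) / X.std(0), y)         # standardized predictors
for name, coef in zip(["RSLR", "SSC", "bedload", "storms"], pls.coef_.ravel()):
    print(f"{name:8s} coefficient: {coef:+.2f}")
```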

Keywords: lateral saltmarsh dynamics, sea level rise, sediment supply, wave forcing

Procedia PDF Downloads 109
70 Pushover Analysis of a Typical Bridge Built in Central Zone of Mexico

Authors: Arturo Galvan, Jatziri Y. Moreno-Martinez, Daniel Arroyo-Montoya, Jose M. Gutierrez-Villalobos

Abstract:

Bridges are among the most seismically vulnerable structures in highway transportation systems. The general process for assessing the seismic vulnerability of a bridge involves the evaluation of its overall capacity and demand. One of the most common procedures to obtain this capacity is pushover analysis of the structure. Typically, bridge capacity is assessed using non-linear static methods or non-linear dynamic analyses; the non-linear dynamic approaches use step-by-step numerical solutions, with the inconvenience of high computing time. In this study, a non-linear static analysis (‘pushover analysis’) was performed to predict the collapse mechanism of a typical bridge built in the central zone of Mexico (Celaya, Guanajuato). The bridge superstructure consists of three simply supported spans with a total length of 76 m: 22 m for each of the end spans and 32 m for the central span. The deck is 14 m wide and the concrete slab is 18 cm deep. The bridge is supported by frames of five piers with hollow box-shaped sections, each 7.05 m high and 1.20 m in diameter. The numerical model was created using commercial software, considering linear and non-linear elements. In all cases, the piers were represented by frame-type elements with geometrical properties obtained from the structural project and construction drawings of the bridge. The deck was modeled with a mesh of rectangular thin shell (plate bending and stretching) finite elements. Moment-curvature analysis was performed for the pier sections, considering in each pier the effects of confined concrete and its reinforcing steel. In this way, plastic hinges were defined at the base of the piers for the pushover analysis. In addition, time history analyses were performed using 19 accelerograms of real earthquakes registered in Guanajuato, from which the displacements produced in the bridge were determined. Finally, pushover analysis was applied through displacement control of the piers to obtain the overall capacity of the bridge before failure. It was concluded that the lateral deformation of the piers due to a critical earthquake in this zone is almost imperceptible, owing to the geometry and reinforcement demanded by current design standards, and that the displacement capacity is excessive compared with the demand. According to the analysis, the frames built with five piers increase the rigidity in the transverse direction of the bridge. Hence, it is proposed to reduce the frames from five piers to three, maintaining the same geometrical characteristics and the same reinforcement in each pier, as well as the same mechanical properties of the materials (concrete and reinforcing steel). A pushover analysis performed with this configuration indicated that the bridge would retain ‘correct’ seismic behavior, at least for the 19 accelerograms considered in this study. In this way, material, construction, time, and labor costs would be reduced in this case study.
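
The displacement-controlled pushover of a single pier can be sketched with an elastic-perfectly-plastic hinge at the base of a cantilever; the stiffness and yield moment below are rough placeholders, not the bridge's computed moment-curvature results.

```python
# Pushover of a cantilever pier with an elastic-perfectly-plastic base hinge.
import numpy as np

H = 7.05                 # pier height (m)
EI = 2.5e10 * 0.08       # concrete E (Pa) times effective I (m^4), assumed
My = 6.0e6               # yield moment of the base hinge (N*m), assumed

k_elastic = 3 * EI / H**3            # cantilever tip stiffness (N/m)
d_yield = My / (k_elastic * H)       # tip displacement at hinge yielding (m)

for d in np.linspace(0.0, 0.15, 16):                # imposed tip displacements (m)
    v = k_elastic * d if d <= d_yield else My / H   # elastic branch / plastic plateau
    print(f"d = {d:.3f} m -> base shear = {v/1e3:8.1f} kN")
```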

Keywords: collapse mechanism, moment-curvature analysis, overall capacity, push-over analysis

Procedia PDF Downloads 127
69 Natural Dyes: A Global Perspective on Commercial Solutions and Industry Players

Authors: Laura Seppälä, Ana Nuutinen

Abstract:

Environmental concerns are increasing interest in the potential uses of natural dyes. Natural dyes are a safer and more environmentally friendly option than synthetic dyes. However, caution is still needed, because some dyestuffs, such as certain plants and mushrooms, as well as some mordants, are poisonous. By natural dyes we mean dyes derived from plants, fungi, bark, lichens, algae, insects, and minerals. Different plant parts, such as stems, leaves, flowers, roots, bark, berries, fruits, and cones, can be utilized for textile dyeing and printing, pigment manufacture, and other processes, depending on the season. They may be used to produce distinctive colour tones that are challenging to achieve with synthetic dyes; this adds value to textiles and makes them stand out. Synthetic dyes, developed in the middle of the 19th century, quickly replaced natural dyes, but natural dyes have remained the dyeing method of crafters until recently. This research examines commercial solutions for natural dyes in many parts of the world, including Europe, the United States, South America, Africa, Asia, New Zealand, and Australia, with the aim of determining the commercial status of natural dyes. Each continent has its own traditions and specific dyestuffs. The availability of natural dyes can vary depending on several aspects, including plant species, temperature, and harvesting techniques, which poses a challenge to the work of designers and crafters. While certain plants may only provide dyes during specific seasons, others may do so continuously. To find the ideal time to collect natural dyes, it is critical to research the various plant species and their harvesting techniques. Furthermore, to guarantee the quality and colour of the dye, plant material must be handled and processed properly. This research was conducted via an internet search, and the results were screened systematically for commercial stakeholders in the field. The research question addressed the commercial players in the field of natural dyes. This qualitative case study interpreted the data using thematic analysis: each webpage was screenshotted and analyzed in relation to the research question, online content analysis meaning the systematic coding and analysis of qualitative data. The most evident result was the interest in natural dyes in different parts of the world: there are clothing collections dyed with natural dyes, dyestuff stores, and courses on natural dyeing. This article presents the designers who work with natural dyes and the actors involved in the natural dye industry. Several websites emphasized the safety and environmental benefits of natural dyes, and many included eye-catching images of naturally dyed textiles, whose colours are considered attractive for their beautiful, natural hues. The search did not find large-scale industrial solutions for natural dyes, but there were several instances of dyeing with natural dyes. The purpose of this article is to understand the players, designers, and stakeholders in the natural dye business; this picture of the current state of the art illustrates the direction the natural dye business is taking.

Keywords: commercial solutions, environmental issues, key stakeholders, natural dyes, sustainability, textile dyeing

Procedia PDF Downloads 30
68 Flood Risk Assessment and Mapping: Finding the Flood Vulnerability Level of the Study Area and Prioritizing Areas of Khinch District Using a Multi-Criteria Decision-Making Model

Authors: Muhammad Karim Ahmadzai

Abstract:

Floods are natural phenomena and an integral part of the water cycle. The majority of them result from climatic conditions, but they are also affected by the geology and geomorphology of the area, topography and hydrology, the water permeability of the soil and the vegetation cover, as well as by all kinds of human activities and structures. However, from the moment that human lives are at risk and significant economic impact is recorded, this natural phenomenon becomes a natural disaster. Flood management is now a key issue at regional and local levels around the world, affecting human lives and activities. The majority of floods are unlikely to be fully predicted, but it is feasible to reduce their risks through appropriate management plans and constructions. The aim of this case study is to identify and map areas of flood risk in the Khinch District of Panjshir Province, Afghanistan, specifically in the area of Peshghore, where floods have caused extensive damage. The main purpose of this study is to evaluate the contribution of remote sensing technology and Geographic Information Systems (GIS) in assessing the susceptibility of this region to flood events. Panjshir faces seasonal floods, and human interventions on streams have aggravated them: stream beds that have been encroached upon to build houses and hotels, or converted into roads, cause flooding after every heavy rainfall. The streams crossing settlements and areas with high touristic development have been intensively modified by humans, as the pressure for real estate development land is growing. In particular, several areas in Khinch face a high risk of extensive flooding. This study concentrates on the construction of a flood susceptibility map of the study area by combining vulnerability elements using the Analytical Hierarchy Process (AHP). AHP is a powerful yet simple method for making decisions, commonly used for project prioritization and selection: strategic goals are captured as a set of weighted criteria that are then used to score alternatives. Here, the method is used to provide a weight for each criterion that contributes to the flood event. After processing a digital elevation model (DEM), important secondary data were extracted, such as the slope map, the flow direction, and the flow accumulation. Together with additional thematic information (land use and land cover, topographic wetness index, precipitation, Normalized Difference Vegetation Index, elevation, river density, distance from river, distance to road, and slope), these led to the final flood risk map. Finally, according to this map, the priority protection areas and villages were identified, and structural and non-structural measures were proposed to minimize the impacts of floods on residential and agricultural areas.
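
As an illustration of how AHP produces the criterion weights mentioned above, the following Python sketch derives weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio; the 3x3 matrix and the criteria named in the comments are hypothetical examples, not the study's actual comparisons.

```python
# Minimal AHP sketch: pairwise comparisons -> criterion weights + consistency check.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],     # hypothetical: slope vs. land use vs. distance to river
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                   # index of the principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # normalized criterion weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
ri = 0.58                                     # Saaty's random index for n = 3
print("weights:", np.round(weights, 3), "CR:", round(ci / ri, 3))  # CR < 0.1 is acceptable
```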

Keywords: flood hazard, flood risk map, flood mitigation measures, AHP analysis

Procedia PDF Downloads 88
67 Concept of Tourist Village on Kampung Karaton of Karaton Kasunanan Surakarta, Central Java, Indonesia

Authors: Naniek Widayati Priyomarsono

Abstract:

Introduction: At the beginning of the Karaton's formation, namely the era of the Javanese kingdom town, which held power over the region outside the castle town (called Mancanegara), the karaton settlement functioned both as a 'space-between' and as a 'space of defense'; it was also one of the components of the governmental structure and karaton power of that time (the internal servants, abdi dalem and sentana dalem). Upon the independence of Indonesia in 1945, the 'kingdom-city' converted its political status into part of a democratic town managed by statutes based on its classification. The latter has altered the local cultural hierarchy through physical development and events. The dynamics of socio-economic activities in Kampung Karaton, surrounded by the buildings of the Karaton Kasunanan complex, have disturbed the urban system of the region. The image of the cultural region is also fading away, with weakening visual access to the existing cultural artefacts. This development fails to appreciate the established image of the region, which provides the identity of the Karaton Kasunanan in particular and of Surakarta city in general. Method: The strategy used is grounded theory research (research providing a strong base for a theory). The research focuses on the actors, active and passive, relevantly involved in the change process of the Karaton settlement. The data accumulated are oriented on the 'investigation focus', namely the actors affecting that change, whether internal or external. The investigation results are coupled with field observation data, documentation, and literature study to obtain accurate findings. Findings: The Karaton village has potential products to serve as attractions, human resource support, strong motivation from the society still living in the settlement, supporting facilities and means, tourism event-supporting facilities, cultural art institutions, and available land for development. Data analyzed: To achieve the expected result, restoration is needed in the socio-cultural, economic, and political directions. The steps to take are the socialization of the program of Karaton village as a tourism village, economic development of the local society, regeneration patterns, filtering and selection of tourism development, development of an integrated planning system, development with a persuasive approach, regulation, market mechanisms, development of the socio-cultural event sector, and political development for the regional activity sector. Summary: If the restoration is carried out with society involved as the subject of the settlement (active participation in the field), and is managed and packaged attractively and naturally alongside the development of tourism-supporting facilities, the village of Karaton Kasunanan Surakarta will be ready to receive visits from domestic and foreign tourists.

Keywords: karaton village, finding, restoration, economy, Indonesia

Procedia PDF Downloads 409
66 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field

Authors: Jeronimo Cox, Tomonari Furukawa

Abstract:

Magnetometers have become more popular in inertial measurement units (IMUs) for their ability to correct estimations using the earth's magnetic field; accelerometer- and gyroscope-based packages fail as dead-reckoning errors accumulate over time. Localization in robotic applications with magnetometer-inclusive IMUs has become popular as a way to track the odometry of slower-speed robots. With high-speed motions, the error accumulates over smaller periods of time, making such motions difficult to track with an IMU, and tracking is especially difficult with limited observability: visual obstruction of the motion leaves motion-tracking cameras unusable. When motions are too dynamic for estimation techniques reliant on the observability of the gravity vector, the use of magnetometers is further justified. As available magnetometer calibration methods are limited by the assumption that the background magnetic field is uniform, estimation in nonuniform magnetic fields is problematic. Hard iron distortion is a distortion of the magnetic field by other objects that produce magnetic fields; it is often observed as the offset of the center of the data points from the origin when a magnetometer is rotated, and its magnitude depends on proximity to the distortion sources. Soft iron distortion relates more to the scaling of the axes of the magnetometer sensors; hard iron distortion is the larger contributor to attitude estimation error with magnetometers. Indoor environments or spaces inside ferrite-based structures, such as building reinforcements or a vehicle, often cause proximity-dependent distortions. As positions correlate with areas of distortion, methods of magnetometer localization include producing spatial maps of the magnetic field and collecting distortion signatures to better aid location tracking. The goal of this paper is to compare magnetometer methods that do not need pre-produced magnetic field maps, since mapping the magnetic field in some spaces can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system. Conventional calibration by collecting data while rotating at a static point, real-time estimation of calibration parameters at each time step, and the use of two magnetometers for determining local hard iron distortion are compared to assess the robustness and accuracy of each technique. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than assuming that it is constant under positional change. The motion measured is a repeatable planar motion of a two-link system connected by revolute joints; the links are translated on a moving base to impulse rotation of the links. The joints are equipped with absolute encoders, and the motion is recorded with cameras, to enable ground-truth comparison for each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, it fails where the magnetic field direction in space is inconsistent.
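
For the conventional static-point calibration referred to above, a common formulation is a least-squares sphere fit whose center is the hard-iron offset. The Python sketch below illustrates this on synthetic readings; the offset, field magnitude, and noise level are invented for illustration and are not data from the experiment.

```python
# Hard-iron calibration sketch: fit a sphere center to magnetometer readings
# gathered while rotating the sensor at a fixed point. Data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
true_offset = np.array([12.0, -5.0, 3.0])      # hypothetical hard-iron bias, uT
field = 50.0                                   # assumed local field magnitude, uT

dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
m = field * dirs + true_offset + rng.normal(scale=0.3, size=(500, 3))

# Linearized sphere fit: |m - c|^2 = r^2  =>  2 m.c + (r^2 - |c|^2) = |m|^2
A = np.hstack([2 * m, np.ones((len(m), 1))])
b = (m ** 2).sum(axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated hard-iron offset:", np.round(sol[:3], 2))  # should recover true_offset
```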

Keywords: motion tracking, sensor fusion, magnetometer, state estimation

Procedia PDF Downloads 53
65 The Display of Age-Period/Age-Cohort Mortality Trends Using 1-Year Intervals Reveals Period and Cohort Effects Coincident with Major Influenza A Events

Authors: Maria Ines Azambuja

Abstract:

Graphic displays of Age-Period-Cohort (APC) mortality trends generally use data aggregated within 5- or 10-year intervals. Technology now allows one to increase the amount of processed data, and displaying occurrences at 1-year intervals is a logical first step towards higher-quality landscapes of variation in temporal occurrences. Method: 1) comparison of UK mortality trends plotted at 10-, 5-, and 1-year intervals; 2) comparison of UK and US mortality trends (period x age and cohort x age) displayed at 1-year intervals. Source: mortality data (period, 1x1, males, 1933-2012) were downloaded from the Human Mortality Database into Excel files, where period x age and cohort x age graphics were produced. The choice to transform age-specific trends from calendar years to birth-cohort years (cohort = period - age), instead of using the cohort 1x1 data available at the HMD, was made to facilitate the comparison of age-specific trends when looking across calendar years and birth cohorts. Yearly live births (males, 1933 to 2012, UK) were downloaded from the HFD. Influenza references are from the literature. Results: 1) the use of 1-year intervals unveiled previously unsuspected period, cohort, and interacting period x cohort effects on all-causes mortality. 2) The UK and US figures showed variations associated with particular calendar years (1936, 1940, 1951, 1957-68, 1972) and, most surprisingly, with particular birth cohorts (1889-90 in the US, and 1900, 1918-19, 1940-41, and 1946-47 in both countries). The figures also showed ups and downs in age-specific trends initiated at particular birth cohorts (1900, 1918-19, and 1947-48) or particular calendar years (1968, 1972, and 1977-78 in the US), variations at times restricted to just a range of ages (cohort x period interacting effects). Importantly, most of the identified 'scars' (period and cohort) correlate with the record of Influenza A epidemics since the late 19th century. Conclusions: the use of 1-year intervals to describe APC mortality trends both increases the amount of information available, enhancing the opportunity for pattern recognition, and increases our capability to interpret those patterns by describing trends across smaller intervals of time (period or birth cohort). The US and UK mortality landscapes share many but not all 'scars' and distortions suggested here to be associated with influenza epidemics. Different size effects of wars are evident, both in mortality and in fertility. It would also be realistic to suppose that the preponderant influenza A viruses circulating in the UK and US at the beginning of the 20th century might have been different, with long-term intergenerational consequences. Compared with the live-births trend (UK data), birth-cohort scars clearly depend on birth-cohort sizes relative to neighboring ones, which, if causally associated with influenza, would result from influenza-related fetal outcomes/selection. Fetal selection could introduce continuing modifications in population patterns of immune-inflammatory phenotypes that might give rise to 'epidemic constitutions' favoring the occurrence of particular diseases. Comparative analysis of mortality landscapes may help us set straight the record of past circulation of influenza viruses and document associations between influenza recycling and fertility changes.
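
The cohort = period - age rearrangement described above can be illustrated with a short Python sketch; the toy array below stands in for the HMD 1x1 period x age data, and the dimensions are illustrative only.

```python
# Sketch of the cohort = period - age realignment at 1-year resolution.
import numpy as np

periods = np.arange(1933, 1943)        # illustrative calendar years
ages = np.arange(0, 5)                 # illustrative ages
rates = np.random.default_rng(1).random((len(periods), len(ages)))  # toy rates

cohorts = np.arange(periods[0] - ages[-1], periods[-1] + 1)   # all birth years covered
cohort_table = np.full((len(cohorts), len(ages)), np.nan)     # NaN where no data exist

for i, year in enumerate(periods):
    for j, age in enumerate(ages):
        cohort_table[(year - age) - cohorts[0], j] = rates[i, j]

print("cohort x age table shape:", cohort_table.shape)  # rows indexed by birth year
```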

Keywords: age-period-cohort trends, epidemic constitution, fertility, influenza, mortality

Procedia PDF Downloads 201
64 An Integrated Real-Time Hydrodynamic and Coastal Risk Assessment Model

Authors: M. Reza Hashemi, Chris Small, Scott Hayward

Abstract:

The Northeast Coast of the US faces damaging effects of coastal flooding and winds from Atlantic tropical and extratropical storms each year. Historically, several large storm events have produced substantial damage in the region, the most notable of which were the Great Atlantic Hurricane of 1938, Hurricane Carol, Hurricane Bob, and, recently, Hurricane Sandy (2012). The objective of this study was to develop an integrated modeling system that could be used as a forecasting/hindcasting tool to evaluate and communicate the risk coastal communities face from these coastal storms. This modeling system utilizes the ADvanced CIRCulation (ADCIRC) model for storm surge predictions and the Simulating WAves Nearshore (SWAN) model for the wave environment. These models were coupled, passing information to each other and computing over the same unstructured domain, allowing for an accurate representation of the physical storm processes. The coupled SWAN-ADCIRC model was validated and has been set up to perform real-time forecast simulations (as well as hindcasts). Modeled storm parameters were then passed to a coastal risk assessment tool. This tool, which is generic and universally applicable, generates spatial structural damage estimate maps on an individual-structure basis for an area of interest. The required inputs for the coastal risk model include detailed information about the individual structures, inundation levels, and wave heights for the selected region; calculation of wind damage to structures was also incorporated. The integrated coastal risk assessment system was then tested and applied to Charlestown, a small, vulnerable coastal town along the southern shore of Rhode Island. The modeling system was applied to Hurricane Sandy and to a synthetic storm, and in both cases the effect of natural dunes on coastal risk was investigated. The resulting damage maps for Charlestown clearly showed that the dune-eroded scenarios affected more structures and increased the estimated damage. The system was also tested in forecast mode for a large Nor'easter, Stella (March 2017); the results showed good performance of the coupled model in forecast mode when compared to observations. Finally, the nearshore model XBeach was nested within this regional grid (ADCIRC-SWAN) to simulate nearshore sediment transport processes and coastal erosion. Hurricane Irene (2011) was used to validate XBeach on the basis of a unique beach profile dataset for the region; XBeach showed relatively good performance, estimating eroded volumes along the beach transects with a mean error of 16%. The validated model was then used to analyze the effectiveness of several erosion mitigation methods recommended in a recent study of coastal erosion in New England: beach nourishment, a coastal bank (engineered core), and a submerged breakwater, as well as an artificial surfing reef. It was shown that beach nourishment and coastal banks perform better in mitigating shoreline retreat and coastal erosion.

Keywords: ADCIRC, coastal flooding, storm surge, coastal risk assessment, living shorelines

Procedia PDF Downloads 82
63 Host Preference, Impact of Host Transfer and Insecticide Susceptibility among Aphis gossypii Group (Order: Hemiptera) in Jamaica

Authors: Desireina Delancy, Tannice Hall, Eric Garraway, Dwight Robinson

Abstract:

Aphis gossypii, as a pest, directly damages its host plant by extracting phloem sap (sucking) and indirectly damages it by transmitting viruses, ultimately affecting the yield of the host. Due to its polyphagous nature, this species affects a wide range of host plants, some of which may serve as reservoirs for the colonisation of important crops. In Jamaica, there have been outbreaks of viral plant pathogens transmitted by Aphis gossypii; three examples are Citrus tristeza virus, Watermelon mosaic virus, and Papaya ringspot virus. Aphis gossypii has also heavily colonized economically significant host plants, including pepper, eggplant, watermelon, cucumber, and hibiscus. To facilitate integrated pest management, it is imperative to understand the biology of the aphid and its host preference. Preliminary work in Jamaica has indicated differences in biology and host preference, as well as host variety within the species; however, specific details of the fecundity, colony growth, host preference, distribution, and insecticide resistance of Aphis gossypii were, to the best of our knowledge, unknown. The aim was to investigate the following in relation to Aphis gossypii: the influence of the host plant on colonization, life span, fecundity, population size, and morphology; the impact of host transfer on fecundity and population size, as a measure of host preference and host transfer success; and susceptibility to four commonly used insecticides. Fecundity and colony size were documented daily for aphids acclimatized on Capsicum chinense Jacquin 1776, Cucumis sativus Linnaeus 1630, Gossypium hirsutum Linnaeus 1751, and Abelmoschus esculentus (L.) Moench 1794 for three generations. The same measures were used after third-instar aphids were transferred among the hosts as a measure of suitability and success. Mortality, and the fecundity of survivors, were determined after aphids were exposed to varying concentrations of Actara®, Diazinon™, Karate Zeon®, and Pegasus®. Host preference results indicated that, over a 24-day period, Aphis gossypii reached its largest colony size on G. hirsutum (x̄ 381.80), with January-February being the most fecund period. Host transfer experiments were all significantly different, the most significant difference occurring in transfers from C. chinense to C. sativus (p < 0.05). Colony sizes were found to increase significantly every 5 days, which has implications for the regimes implemented to monitor and evaluate plots. Ranked by lethality, the insecticides are Karate Zeon® > Actara® > Pegasus® > Diazinon™. The highest LC50 values for aphids on G. hirsutum and C. chinense were obtained with Pegasus®, and for those on C. sativus with Diazinon™. Survivors of insecticide treatments had colony sizes that were, on average, 98% smaller than those of untreated aphids. Cotton was preferred both in the field and in the glasshouse: it was on cotton that the aphids settled first, had the highest fecundity, and had the lowest mortality. Cotton can serve as a reservoir for (re)populating other cotton plants or different host species through migration due to overcrowding, heavy showers, high wind, or ant attendance. Host transfer success between all three hosts is highly probable within an intercropping system, and survivors of insecticide treatments can successfully repopulate host plants.
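
As a hedged illustration of how LC50 values such as those reported here can be estimated, the sketch below fits a two-parameter log-logistic dose-response curve with scipy; all concentrations and mortality fractions are invented for illustration, not the study's measurements.

```python
# Minimal LC50 sketch: fit a log-logistic dose-response curve and read off
# the concentration giving 50% mortality. All data below are illustrative.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])            # hypothetical doses, mg/L
mortality = np.array([0.05, 0.12, 0.35, 0.62, 0.88, 0.97])   # observed fractions (toy)

def log_logistic(x, lc50, slope):
    # Two-parameter log-logistic: response = 0.5 exactly at x = lc50.
    return 1.0 / (1.0 + (lc50 / x) ** slope)

(lc50, slope), _ = curve_fit(log_logistic, conc, mortality, p0=[1.0, 1.0])
print(f"estimated LC50 = {lc50:.2f} mg/L, slope = {slope:.2f}")
```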

Keywords: Aphis gossypii, host-plant preference, colonization sequence, host transfers, insecticide susceptibility

Procedia PDF Downloads 56
62 On the Lithology of Paleocene-Lower Eocene Deposits of the Achara-Trialeti Fold Zone: The Lesser Caucasus

Authors: Nino Kobakhidze, Endi Varsimashvili, Davit Makadze

Abstract:

The Caucasus is a link in the Alpine-Himalayan fold belt and comprises the Greater Caucasus and Lesser Caucasus fold systems and the intermountain area. The study object is located within the northernmost part of the Lesser Caucasus orogen, in the eastern part of the Achara-Trialeti fold-thrust belt. This area was rather well surveyed in the 1970s in terms of oil-and-gas potential, but to the best of our knowledge, detailed sedimentological studies have not been conducted so far. In order to fill this gap, the authors of the present work started research in this direction. One of the objects selected for the research was the deposits of the Kavtura river valley, situated on the northern slope of the Trialeti ridge. Paleocene-Lower Eocene deposits, known in the scientific literature as 'Borjomi Flysch' (turbidites), are exposed in this area. During the research, the following methods were applied: selection of key cross-sections, collection of rock samples, microscopic description of thin sections, mineralogical and petrological analysis of the material, and identification of trace fossils. The study of the Paleocene-Lower Eocene deposits starts in the Kavtura river valley in the east, where they are well characterized by microfauna. The cross-section of the deposits starts with Danian variegated marlstone, conformably overlain by an alternation of thick- and thin-bedded sandstones (thickness 40-50 cm), continued by interbedded thin-bedded sandstones and shales (thickness 4-5 m). On the sole surfaces of the sandstones, the ichnogenera 'Helmintopsis' and 'Scolicia' are recorded, and within the beds 'Chondrites' is found. Towards the riverhead there is a 1-2 m gap in sedimentation; then the Paleocene-Lower Eocene sediments crop out again, starting with an alternation of grey-green, medium-grained sandstones and shales enclosing dark-colored plant detritus. These are overlain by interbedded calcareous sandstones and marls, in which the thickness of the sandstones is variable (20-70 cm); the ichnogenus 'Scolicia' is found here. Upwards, the above-mentioned deposits pass into Middle Eocene volcanogenic-sedimentary suites. In the Kavtura river valley, the thickness of the Paleocene-Lower Eocene deposits is 300-400 m. In the course of the research, the following activities were conducted: facies analysis of the host rocks, correlation of the studied section with other cross-sections, and interpretation of the depositional environment of the area. The authors have found and described ichnogenera in the area; their preliminary determination has shown that they belong to pre-depositional ('Helmintopsis') and post-depositional ('Chondrites') forms. As is known, during Cretaceous-Paleogene time, the extensional basin of the Achara-Trialeti fold-thrust belt was an area of accumulation of great thicknesses of sediments (from shallow to deep marine). This is confirmed once more by the authors' investigations, including the preliminary results of the paleoichnological studies.

Keywords: flysch deposits, lithology, the Lesser Caucasus, trace fossils

Procedia PDF Downloads 130
61 Heat Accumulation in Soils of Belarus

Authors: Maryna Barushka, Aleh Meshyk

Abstract:

The research analyzes the absolute maximum soil temperatures registered at 36 gauge stations in Belarus from 1950 to 2013. The main method applied in the research is cartographic, in particular trend surface analysis. A period of warming that had never before been so long and intensive started in 1988: the average temperature in January and February of that year exceeded the norm by 7-7.5 °C, and in March and April by 3-5 °C. That year, along with 2008, proved to be the hottest in the whole period of instrumental observation; the yearly average air temperature in Belarus in those years was +8.0-8.2 °C, exceeding the norm by 2.0-2.2 °C. The warming has continued since, the only exception being 1996, when the yearly average air temperature in Belarus was below normal by 0.5 °C. In Belarus, the trend line of the standard temperature deviation in the warmest months (July-August) has been positive for the past 25 years. In 2010, the absolute maximum air and soil temperatures exceeded the norm at 15 gauge stations in Belarus. The structure of natural processes includes global, regional, and local constituents, and trend surface analysis of the investigated characteristics makes it possible to separate them. The linear trend surface reflects weather deviations occurring on a global scale, outside Belarus: maximum soil temperature appears to grow in the south-west direction with a gradient of 5.0 °C, which is explained by the latitude factor. Polynomial trend surfaces show the regional peculiarities of Belarus. The extreme temperature regime is formed by several factors, the prevailing one being advection in the turbulent ground layer of the atmosphere. In summer, the influence of the Azores High, which produces anticyclones, is great. The Gulf Stream current also shapes the annual pattern of temperature trends: its most intensive flow, in the second half of winter and the second half of summer, coincides with the periods of maximum temperature trends in Belarus. The local component of weather deviations can be estimated from the difference between the values of the investigated characteristics and their trend surfaces. The maximum positive deviation (up to +4 °C) of averaged soil temperature corresponds to the flat terrain of Pripyat Polesie, Brest Polesie, and the Belarusian Poozerie area; negative differences correspond to higher relief, which partially compensates for the extreme heat regime of the soils. Another important factor for maximum soil temperature in these areas is peat-bog soils, which have the lowest albedo (8-15%). As the yearly maximum soil temperature reaches 40-60 °C, this has both negative and positive implications for Belarus's environment and economy. High temperatures cause droughts, resulting in crop losses and soil blowing; on the other hand, the vegetation period has lengthened thanks to greater heat resources, which allows heat-loving crops such as melons and grapes to be planted with appropriate irrigation. Thus, trend surface analysis makes it possible to distinguish the global, regional, and local factors in the accumulation of heat in the soils of Belarus.
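
A minimal sketch of the trend surface analysis used here: fit first-order (linear, global) and second-order (polynomial, regional) surfaces to station values by least squares, with the residuals representing the local component. The station coordinates and temperatures below are synthetic placeholders, not the 36-station data.

```python
# Trend surface sketch: linear and quadratic surfaces fitted to station maxima
# by least squares; residuals isolate the local component.
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.uniform(0, 600, 36), rng.uniform(0, 450, 36)          # fake station coords, km
t = 45 + 0.008 * x - 0.005 * y + rng.normal(scale=1.5, size=36)  # fake max soil temp, degC

def fit_surface(order):
    cols = [np.ones_like(x), x, y]          # first-order terms
    if order == 2:
        cols += [x * x, x * y, y * y]       # second-order (polynomial) terms
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)
    return A @ coef                         # fitted surface values at the stations

for order in (1, 2):
    residuals = t - fit_surface(order)      # local component of the deviations
    print(f"order {order}: residual std = {residuals.std():.2f} degC")
```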

Keywords: soil, temperature, trend surface analysis, warming

Procedia PDF Downloads 105
60 Numerical Model of Crude Glycerol Autothermal Reforming to Hydrogen-Rich Syngas

Authors: A. Odoom, A. Salama, H. Ibrahim

Abstract:

Hydrogen is a clean source of energy for power production and transportation. The source of hydrogen in this research is a by-product of biodiesel production: glycerol, also called glycerine, is obtained by the transesterification of vegetable oils and methanol, and is a more reliable and environmentally friendly source of hydrogen than fossil fuels. A typical composition of crude glycerol comprises glycerol, water, organic and inorganic salts, soap, methanol, and small amounts of glycerides. Crude glycerol has limited industrial application due to its low purity; thus, its use can significantly enhance the sustainability and production of biodiesel. Reforming techniques are an approach to hydrogen production, mainly Steam Reforming (SR), Autothermal Reforming (ATR), and Partial Oxidation Reforming (POR). SR produces high hydrogen conversion and yield but is highly endothermic, whereas POR is exothermic; on the downside, POR yields less hydrogen as well as a large number of side reactions. ATR, a fusion of partial oxidation reforming and steam reforming, is thermally neutral because the net reactor heat duty is zero; it has a relatively high hydrogen yield and selectivity and limits coke formation. The complex chemical processes that take place during the production phases make it relatively difficult to construct a reliable and robust numerical model, yet a numerical model is a tool to mimic reality and provide insight into the influence of the parameters. In this work, we introduce a finite volume numerical study of an in-house lab-scale ATR experiment. Previous numerical studies of this process have used either Comsol or nodal finite difference analysis. Since Comsol is a commercial package that is not readily available everywhere, and the lab-scale experiment can be considered well mixed in the radial direction so that one spatial dimension suffices to capture the essential features of ATR, in this work we develop our own numerical approach using MATLAB. A continuum fixed-bed reactor is modelled in MATLAB with both pseudo-homogeneous and heterogeneous models. The drawback of the nodal finite difference formulation is that it is not locally conservative, which means that materials and momenta can be generated inside the domain as an artifact of the discretization; the control volume method, on the other hand, is locally conservative and suits very well problems where materials are generated and consumed inside the domain. In this work, the species mass balance, Darcy's equation, and the energy equations are solved using an operator splitting technique: diffusion-like terms are discretized implicitly, while advection-like terms are discretized explicitly, with an upwind scheme adopted for the advection term to ensure accuracy and positivity. Comparisons with the experimental data show very good agreement, which builds confidence in our modeling approach. The models obtained were validated and optimized for better results.
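
The splitting described above (explicit upwind advection, implicit diffusion) can be sketched compactly; the toy 1-D example below is written in Python for brevity, although the study itself used MATLAB, and the velocity, diffusivity, and grid are illustrative values only.

```python
# Operator-splitting sketch: explicit first-order upwind advection followed by
# an implicit (backward Euler) diffusion solve each time step, on a toy profile.
import numpy as np

nx, L = 100, 1.0
dx, dt = L / nx, 2e-4
u, D = 1.0, 1e-3                       # toy advection velocity and diffusivity
x = np.linspace(0.0, L, nx)
c = np.exp(-((x - 0.2) ** 2) / 0.002)  # initial concentration pulse

# Implicit diffusion operator, assembled once; ends held near zero concentration.
r = D * dt / dx ** 2
M = (np.diag((1 + 2 * r) * np.ones(nx))
     + np.diag(-r * np.ones(nx - 1), 1)
     + np.diag(-r * np.ones(nx - 1), -1))

for _ in range(500):
    c[1:] -= u * dt / dx * (c[1:] - c[:-1])  # explicit upwind advection (u > 0)
    c = np.linalg.solve(M, c)                # implicit diffusion step (stable)

print("pulse peak now at x =", round(x[np.argmax(c)], 3))  # advected from 0.2 towards 0.3
```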

Keywords: autothermal reforming, crude glycerol, hydrogen, numerical model

Procedia PDF Downloads 118
59 Scenario-Based Scales and Situational Judgment Tasks to Measure Social and Emotional Skills

Authors: Alena Kulikova, Leonid Parmaksiz, Ekaterina Orel

Abstract:

Social and emotional skills are considered by modern researchers as predictors of a person's success, both in specific areas of activity and in life as a whole. The popularity of this scientific direction has driven the emergence of a large number of practices aimed at developing and evaluating socio-emotional skills, and their assessment is carried out at the national level as well as at the level of individual regions and institutions. Although many of the existing social and emotional skills assessment tools are quite convenient and reliable, new technologies and task formats keep appearing that improve the basic characteristics of such tools. Thus, the goal of the current study is to develop a tool for assessing social and emotional skills such as emotion recognition, emotion regulation, empathy, and a culture of self-care. To develop the tool, the Rasch-Gutman scenario-based approach was used. This approach has shown its reliability and merit for measuring various complex constructs: parental involvement; teacher practices that support cultural diversity and equity; willingness to participate in the life of the community after psychiatric rehabilitation; educational motivation; and others. To assess emotion recognition, we used a situational judgment task based on the OCC (Ortony, Clore, and Collins) theory of emotions. The main advantage of these two approaches compared to classical Likert scales is that they reduce social desirability in answers. A field test was conducted to check the psychometric properties of the developed instrument, which was created for the presidential autonomous non-profit organization "Russia - Land of Opportunity" for nationwide soft skills assessment among higher education students. The sample for the field test consisted of 500 students aged 18 to 25 (mean = 20; SD = 1.8; 71% female), 67% of whom were only studying and not currently working, and 500 employed adults aged 26 to 65 (mean = 42.5; SD = 9; 57% female). The psychometric characteristics of the scales were analyzed using the methods of Item Response Theory (IRT): a one-parameter Rating Scale Model (RSM) and the Graded Response Model (GRM) of modern test theory were applied. The GRM is a polytomous extension of the dichotomous two-parameter model (2PL) of modern test theory, based on the cumulative logit function for modeling the probability of a correct answer. The validity of the developed scales was assessed using correlation analysis and the multitrait-multimethod matrix (MTMM). The developed instrument showed good psychometric quality and can be used by HR specialists or educational management. The detailed results of the psychometric study of the quality of the instrument, including the functioning of the tasks of each scale, will be presented, and the results of the validity study by MTMM analysis will be discussed.
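
For readers unfamiliar with the GRM, the sketch below computes an item's category response probabilities as differences of cumulative logistic (2PL-type) curves, which is the model form referenced above; the discrimination and threshold values are made up for illustration.

```python
# GRM sketch: category probabilities as differences of cumulative logistic curves.
import numpy as np

def grm_category_probs(theta, a, b):
    """P(X = k | theta) for an item with discrimination a and ordered
    thresholds b (length m-1 for m response categories)."""
    cum = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))  # P(X >= k), k = 1..m-1
    cum = np.concatenate(([1.0], cum, [0.0]))                 # P(X >= 0) = 1, P(X >= m) = 0
    return cum[:-1] - cum[1:]                                 # successive differences

# Made-up item: discrimination 1.4, thresholds for a 4-category response.
probs = grm_category_probs(theta=0.5, a=1.4, b=[-1.0, 0.2, 1.3])
print(np.round(probs, 3), "sum =", probs.sum())   # probabilities sum to 1
```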

Keywords: social and emotional skills, psychometrics, MTMM, IRT

Procedia PDF Downloads 47
58 Dynamic Facades: A Literature Review on Double-Skin Façade with Lightweight Materials

Authors: Victor Mantilla, Romeu Vicente, António Figueiredo, Victor Ferreira, Sandra Sorte

Abstract:

Integrating dynamic facades into contemporary building design is shaping a new era of energy efficiency and user comfort. These innovative facades, often constructed using lightweight construction systems and materials, offer a responsive and adaptive envelope that follows the dynamic behavior of the outdoor climate; in regions characterized by high fluctuations in daily temperature, this ability to adapt to environmental changes is of paramount importance, and a challenge. This paper presents a thorough review of the state of the art on double-skin facades (DSF), focusing on lightweight solutions for the external envelope. Dynamic facades featuring elements such as movable shading devices, phase change materials, and advanced control systems have revolutionized the built environment, offering a promising path for reducing energy consumption while enhancing occupant well-being. Lightweight construction systems are increasingly becoming the choice for these facade solutions, offering benefits such as reduced structural loads and reduced construction waste, and thus improved overall sustainability. However, the performance of dynamic facades based on low-thermal-inertia solutions in climatic contexts with high thermal amplitude still needs research, since their ability to adapt translates into variability and manipulation of the thermal transmittance coefficient (U-value). Emerging technologies can enable such dynamic thermal behavior through innovative materials and through changes in geometry and control that optimize facade performance. These innovations allow a facade system to respond to shifting outdoor temperature, relative humidity, wind, and solar radiation conditions, ensuring that energy efficiency and occupant comfort are achieved together. This review addresses the possible configurations of double-skin facades, particularly their responsiveness to seasonal variations in temperature, with a specific focus on the challenges posed by winter and summer conditions. Notably, the design of a dynamic facade is significantly shaped by several pivotal factors, including the choice of materials, geometric considerations, and the implementation of effective monitoring systems. Within the realm of double-skin facades, various configurations are explored, encompassing exhaust-air, supply-air, and thermal-buffering mechanisms. The review places a specific emphasis on the thermal dynamics at play, closely examining the impact of factors such as the colour of the facade, the dimensions and slat angle of shading devices, and their positioning and type. The paper synthesizes the current research trends in this field, presenting case studies and technological innovations for a comprehensive understanding of the cutting-edge solutions propelling the evolution of building envelopes in the face of climate change, namely double-skin lightweight solutions that create sustainable, adaptable, and responsive building envelopes. As indicated in the review, flexible and lightweight systems have broad applicability across all building sectors, and there is growing recognition that retrofitting existing buildings may emerge as the predominant approach.
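
Since the adaptivity discussed above is expressed as manipulation of the U-value, a toy calculation helps fix ideas: the Python sketch below evaluates the series thermal resistance of a hypothetical lightweight double-skin build-up with the cavity switched between two states; the layer values are illustrative and not taken from the review.

```python
# Toy U-value sketch for a lightweight double-skin build-up: series thermal
# resistances, with the cavity resistance switched to mimic an adaptive facade.
Rsi, Rse = 0.13, 0.04                 # internal/external surface resistances, m2K/W (assumed)
layers = [(0.012, 0.25),              # inner board: thickness m, conductivity W/mK (assumed)
          (0.100, 0.035),             # lightweight insulation layer (assumed)
          (0.012, 0.25)]              # outer skin (assumed)
R_layers = sum(d / lam for d, lam in layers)

for name, R_cavity in [("closed cavity", 0.18), ("ventilated cavity", 0.0)]:
    U = 1.0 / (Rsi + R_layers + R_cavity + Rse)   # overall transmittance, W/m2K
    print(f"{name}: U = {U:.2f} W/m2K")
```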

Keywords: adaptive, control systems, dynamic facades, energy efficiency, responsive, thermal comfort, thermal transmittance

Procedia PDF Downloads 43
57 Immobilization of Superoxide Dismutase Enzyme on Layered Double Hydroxide Nanoparticles

Authors: Istvan Szilagyi, Marko Pavlovic, Paul Rouster

Abstract:

Antioxidant enzymes are the most efficient defense systems against reactive oxygen species, which cause severe damage in living organisms and industrial products. However, their supplementation is problematic due to their high sensitivity to environmental conditions. Immobilization on carrier nanoparticles is a promising research direction towards improving their functional and colloidal stability; in that way, their applications in biomedical treatments and in manufacturing processes in the food, textile, and cosmetic industries can be extended. The main goal of the present research was to prepare and formulate antioxidant bionanocomposites composed of superoxide dismutase (SOD) enzyme, an anionic clay (layered double hydroxide, LDH) nanoparticle, and heparin (HEP) polyelectrolyte. To characterize the structure and colloidal stability of the obtained compounds in suspension and in the solid state, electrophoresis, dynamic light scattering, transmission electron microscopy, spectrophotometry, thermogravimetry, X-ray diffraction, and infrared and fluorescence spectroscopy were used as experimental techniques. The LDH-SOD composite was synthesized by enzyme immobilization on the clay particles via electrostatic and hydrophobic interactions, which resulted in strong adsorption of the SOD on the LDH surface, i.e., no enzyme leakage was observed once the material was suspended in aqueous solutions. However, LDH-SOD showed only limited resistance to salt-induced aggregation, and large, irregularly shaped clusters formed within short time intervals even at lower ionic strengths. Since sufficiently high colloidal stability is a key requirement in most of the applications mentioned above, the nanocomposite was coated with HEP polyelectrolyte to develop highly stable suspensions of primary LDH-SOD-HEP particles. HEP is a natural anticoagulant with one of the highest negative line charge densities among known macromolecules. The experimental results indicated that it strongly adsorbed on the oppositely charged LDH-SOD surface, leading to charge inversion and to the formation of negatively charged LDH-SOD-HEP. The obtained hybrid material formed stable suspensions even under extreme conditions, where classical colloid chemistry theories predict rapid aggregation of the particles and unstable suspensions. This stabilization effect originated from electrostatic repulsion between particles of the same sign of charge, as well as from steric repulsion due to the osmotic pressure arising from the overlap of the polyelectrolyte chains adsorbed on the surface. In addition, the SOD enzyme kept its structural and functional integrity during the immobilization and coating processes, and hence the LDH-SOD-HEP bionanocomposite possessed excellent activity in the decomposition of superoxide radical anions, as revealed in biochemical test reactions. In conclusion, due to its improved colloidal stability and good efficiency in scavenging superoxide radicals, the developed enzymatic system is a promising antioxidant candidate for biomedical or other manufacturing processes wherever the aim is to decompose reactive oxygen species in suspension.

Keywords: clay, enzyme, polyelectrolyte, formulation

Procedia PDF Downloads 241
56 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is very difficult, necessitating considerable amounts of capital and labor; productivity should therefore be enhanced and kept competitive by developing and applying effective production plans. Among the major processes of tire manufacturing (mixing, component preparation, building, and curing), the mixing process is an essential and important step because the main component of the tire, called the compound, is formed at this step. Each compound is a rubber synthesis with its own characteristics and plays a specific role in the tire as a finished product. Meanwhile, scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP), because various kinds of compounds have their own unique orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required may differ between operations due to the alteration of additives; in other words, each operation of the mixing process requires a different setup time depending on the previous one. This feature, called sequence-dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling. However, despite its importance, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle. At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution; this procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research. As a performance measure, we define an error rate that evaluates the difference between two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes, such as building and curing; we can also extend the current work by considering other performance measures, such as weighted makespan, or processing times affected by aging or learning effects.
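
As a hedged sketch of the PSO idea (a simplification, not the paper's exact encoding, which also carries machine-allocation information), the code below uses a random-key encoding in which the argsort of a particle's position gives a job sequence on a single mixer, and fitness is the makespan including sequence-dependent setup times; all processing and setup data are toy values.

```python
# Toy PSO sketch for sequencing jobs with sequence-dependent setup times (SDST).
import numpy as np

rng = np.random.default_rng(3)
n_jobs, n_particles, iters = 6, 20, 200
proc = rng.uniform(5, 15, n_jobs)              # toy processing times
setup = rng.uniform(0, 4, (n_jobs, n_jobs))    # toy SDST matrix: setup[i, j] = i then j

def makespan(keys):
    order = np.argsort(keys)                   # random-key decoding: position -> sequence
    total = proc[order].sum()
    total += sum(setup[order[k], order[k + 1]] for k in range(n_jobs - 1))
    return total

pos = rng.uniform(0, 1, (n_particles, n_jobs))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([makespan(p) for p in pos])
g = pbest[np.argmin(pbest_val)].copy()         # global best position

for _ in range(iters):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)  # velocity update
    pos += vel                                                          # coordinate update
    vals = np.array([makespan(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    g = pbest[np.argmin(pbest_val)].copy()

print("best makespan found:", round(pbest_val.min(), 2), "order:", np.argsort(g))
```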

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 236