Search results for: rotor position estimation
Paper Count: 4270

100 Low Cost LiDAR-GNSS-UAV Technology Development for PT Garam’s Three Dimensional Stockpile Modeling Needs

Authors: Mohkammad Nur Cahyadi, Imam Wahyu Farid, Ronny Mardianto, Agung Budi Cahyono, Eko Yuli Handoko, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan

Abstract:

Unmanned aerial vehicle (UAV) technology offers advantages in cost efficiency and data retrieval time. In this work, UAV, GNSS, and LiDAR technologies are combined so that each compensates for the others' deficiencies. This integrated system aims to increase the accuracy of stockpile volume calculations for PT Garam (a salt company). The UAV is used to obtain geometric data and to capture the textures that characterize the structure of objects. The study uses the Tarot 650 Iron Man quadcopter, which can fly for 15 minutes. The LiDAR data are classified according to the number of acquisitions processed in the software, drawing on photogrammetry and Structure-from-Motion point cloud principles. LiDAR acquisition enables the creation of point clouds, three-dimensional models, digital surface models, contours, and orthomosaics with high accuracy. Its drawback is that the coordinates it produces are expressed in a local reference frame. The researchers therefore use GNSS, LiDAR, and drone multi-sensor technology to map the salt stockpiles in open storage areas and warehouses, a survey PT Garam carries out twice a year and previously performed with terrestrial methods and manual counting of sacks. LiDAR alone must be combined with a UAV to overcome its acquisition limitations, because a ground-based scan only covers the right and left sides of an object such as a salt stockpile. The UAV is flown to extend the coverage, carrying an integrated 200-gram LiDAR unit so that the scanning angle remains optimal throughout the flight. Using LiDAR for low-cost mapping surveys makes it easier for surveyors and academics to obtain reasonably accurate data at a more economical price: as a survey tool, LiDAR is available at a low price of around 999 USD and still produces detailed data, and to reduce operational costs further, surveyors can use the proposed low-cost LiDAR, GNSS, and UAV combination at a price of around 638 USD. The data generated by these sensors take the form of a three-dimensional visualization of the object's shape. This study combines low-cost GPS measurements with low-cost LiDAR, processed using free software. The low-cost GPS provides latitude and longitude coordinates, which are converted into X, Y, and Z values to georeference the detected objects. The LiDAR, in turn, detects objects and the height of the entire surveyed environment. The results are calibrated with pitch, roll, and yaw to obtain the vertical heights of the resulting contours. The experiment was conducted on the roof of a building within a radius of approximately 30 meters.
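
As an illustration of the georeferencing step described above, the minimal sketch below (Python; the function names, the flat-earth approximation, and the example coordinates are assumptions for illustration, since the abstract does not specify the actual processing chain) rotates sensor-frame LiDAR points by the calibrated roll, pitch, and yaw and translates them into a local east-north-up frame derived from the low-cost GPS latitude/longitude.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Body-to-local rotation built from roll, pitch, yaw (radians), Z-Y-X convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def latlon_to_enu(lat, lon, alt, lat0, lon0, alt0):
    """Flat-earth approximation: convert degrees to metres relative to a local origin."""
    r_earth = 6378137.0
    north = np.radians(lat - lat0) * r_earth
    east = np.radians(lon - lon0) * r_earth * np.cos(np.radians(lat0))
    return np.array([east, north, alt - alt0])

def georeference(scan_xyz, roll, pitch, yaw, sensor_enu):
    """Rotate sensor-frame points by the calibrated attitude and shift them to the ENU frame."""
    return scan_xyz @ rotation_matrix(roll, pitch, yaw).T + sensor_enu

# Hypothetical usage: one scan of N x 3 points taken at a GPS fix near the survey origin.
scan = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.5]])
origin = (-7.0485, 112.7210, 30.0)   # illustrative lat, lon, alt only
fix = (-7.0486, 112.7212, 31.0)
points_enu = georeference(scan, 0.01, -0.02, 1.57, latlon_to_enu(*fix, *origin))
print(points_enu)
```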

Keywords: LiDAR, unmanned aerial vehicle, low-cost GNSS, contour

Procedia PDF Downloads 65
99 Effects of Hydrogen Bonding and Vinylcarbazole Derivatives on 3-Cyanovinylcarbazole Mediated Photo-Cross-Linking Induced Cytosine Deamination

Authors: Siddhant Sethi, Yasuharu Takashima, Shigetaka Nakamura, Kenzo Fujimoto

Abstract:

Site-directed mutagenesis is a renowned technique for introducing specific mutations into the genome. Many chemical and enzymatic approaches to site-directed mutagenesis have been reported, such as bisulfite-induced genome editing, CRISPR-Cas9, and TALENs. The chemical methods are invasive, whereas the enzymatic approaches are time-consuming and expensive, and most of these techniques cannot be used in cellular applications because of their toxicity and other limitations. Photochemical cytosine deamination, introduced in 2010, is one of the major techniques for enzyme-free single-point mutation of cytosine to uracil in DNA and RNA. In this approach, an oligodeoxyribonucleotide (ODN) containing 3-cyanovinylcarbazole nucleoside (CNVK) at the -1 position relative to the target cytosine is reversibly cross-linked to the target DNA strand using 366 nm irradiation and then incubated at 90ºC to drive deamination. This technique is superior to enzymatic methods of site-directed mutagenesis, but it has the disadvantage of requiring high temperature for the deamination step, which restricts its applicability in vivo. This study focused on improving the technique by reducing the temperature required for deamination. First, the photo-cross-linker CNVK was modified by replacing the cyano group attached to the vinyl group with a methyl ester (OMeVK), an amide (NH2VK), or a carboxylic acid (OHVK), and the deamination of target cytosine cross-linked to each vinylcarbazole derivative was monitored. Among the derivatives, OHVK showed a two-fold acceleration of the deamination reaction compared with CNVK, while the other two derivatives showed deceleration. The rate of deamination follows the same order as the hydrophilicity of the vinylcarbazole derivatives: OHVK, the most hydrophilic, showed the highest acceleration, while OMeVK, the least hydrophilic, proved least active toward deamination. Second, in a related study, the counter-base of the target cytosine, guanine, was replaced by inosine, 2-aminopurine, nebularine, or 5-nitroindole, which have distinct hydrogen-bonding patterns with the target cytosine. Among the ODNs with these counter-bases, the ODN with inosine showed a 12-fold acceleration of the deamination of cytosine cross-linked to CNVK under physiological conditions compared with guanosine, whereas no deamination took place with 2-aminopurine, nebularine, or 5-nitroindole. It can be concluded that inosine has potential as the counter-base of the target cytosine for CNVK-mediated photo-cross-linking-induced deamination of cytosine. The increase in the reaction rate is attributed to the pattern and number of hydrogen bonds between the cytosine and the counter-base; an important factor is the presence of a hydrogen bond between the exocyclic amino group of cytosine and the counter-base. These results will be useful for developing more efficient site-directed mutagenesis techniques for C → U transformations in DNA/RNA, which might be used in living systems for the treatment of genetic disorders and for genome engineering to produce designer and non-native proteins.

Keywords: C to U transformation, DNA editing, genome engineering, ultra-fast photo-cross-linking

Procedia PDF Downloads 213
98 The Senior Traveler Market as a Competitive Advantage for the Luxury Hotel Sector in the UK Post-Pandemic

Authors: Feyi Olorunshola

Abstract:

Over the last few years, the senior travel market has been noted for its potential within the wider tourism industry. The tourism sector includes hotels and hospitality, travel, transportation, and several other subdivisions that make it economically viable. The hotel sector, in particular, attracts a substantial part of tourism expenditure, since suitable accommodation for relaxation, dining, entertainment and so on is paramount when people plan their travel. The global retail value of the hotel sector as of 2018 was significant for tourism. Yet despite the hotel sector's importance to the tourism industry at large, very few empirical studies have established how it can leverage the senior demographic to achieve competitive advantage. Studies of the mature market have predominantly focused on destination tourism, with limited investigation of hotels, which make a significant contribution to tourism. Also, although several scholarly studies have demonstrated the importance of the senior travel market to hotels, very little empirical research has explored the driving factors that will become the accepted new normal for this niche segment post-pandemic. Given that hotels already operate in a highly saturated business environment, and that on top of this pre-existing challenge the global health outbreak has put the sector in a vulnerable position, the hotel sector, especially the full-service luxury category, must evolve rapidly to survive in the current business environment. Hotels can no longer rely on corporate travellers to generate higher revenue: since the unprecedented onset of the pandemic in 2020, many organizations have devised ways of conducting their business online, so hotels should anticipate a significant drop in business travellers. The rooms and the rest of the facilities must nevertheless be occupied to keep the business operating. The way forward for hotels lies in the leisure sector, and the question now is which demographic of travellers to focus on; in this case, seniors, who have repeatedly been recognized as a lucrative market because of increased discretionary income, availability of time, and global population trends. To achieve the study objectives, a mixed-method approach will be utilized, drawing on qualitative (netnography) and quantitative (survey) methods, cognitive and decision-making theories (means-end chain), and competitive theories to identify the salient drivers explaining senior hotel choice and its influence on their decision-making. The target population is repeat senior travellers aged 65 years and over who are UK residents, as well as those from the top tourist markets to the UK (USA, Germany, and France). Structural equation modelling will be employed to analyse the datasets. The theoretical implication is the development of new concepts using a robust research design, as well as the advancement of existing frameworks in hotel research. Practically, the study will provide hotel management with up-to-date information for designing competitive marketing strategies and activities targeting the mature market post-pandemic and over the longer term.

Keywords: competitive advantage, covid-19, full-service hotel, five-star, luxury hotels

Procedia PDF Downloads 105
97 Learning the History of a Tuscan Village: A Serious Game Using Geolocation Augmented Reality

Authors: Irene Capecchi, Tommaso Borghini, Iacopo Bernetti

Abstract:

Serious games (SG), i.e., games designed for educational purposes, are an important tool for the enhancement of cultural sites; they are applied in cultural sites through trivia, puzzles, and mini-games for participation in interactive exhibitions, mobile applications, and simulations of past events. The combination of Augmented Reality (AR) and digital cultural content has also produced examples of cultural heritage recovery and revitalization around the world: through AR, the user perceives the information of the visited place in a more tangible and interactive way. Another interesting technological development for the revitalization of cultural sites is the combination of AR and the Global Positioning System (GPS), which, when integrated, can enhance the user's perception of reality by providing historical and architectural information linked to specific locations organized along a route. To the authors' best knowledge, there are currently no applications that combine GPS-based AR and SG for cultural heritage revitalization. The present research therefore focused on the development of an SG based on GPS and AR. The study area is the village of Caldana in Tuscany, Italy. Caldana is a fortified Renaissance village; its most important architectural features are the walls, the church of San Biagio, the rectory, and the marquis' palace. The historical information derives from extensive research by the Department of Architecture at the University of Florence. The storyboard of the SG is based on the history of the three figures who built the village: marquis Marcello Agostini, who was commissioned by Cosimo I de' Medici, Grand Duke of Tuscany, to build the village; his son Ippolito; and his architect Lorenzo Pomarelli. The three historical characters were modeled in 3D using the freeware MakeHuman and imported into Blender and Mixamo to attach a skeleton and blend shapes for gestural animation and lip movement during speech; the Rhubarb Lip Syncer plugin for Unity was used for the lip-sync animation, and the historical costumes were created with Marvelous Designer. The application was developed using the Unity 3D graphics and game engine; the AR+GPS Location plugin was used to position the 3D historical characters according to GPS coordinates, and the ARFoundation library was used to display AR content. The SG is available in two versions, one for children and one for adults. The children's version is a hunt for a digital treasure of valuable items and historical rarities: players must find nine locations in the village where 3D AR models of historical figures, explaining the history of the village, provide clues. To motivate players, there are three levels of rewards, one for every three clues discovered; the rewards are AR masks of an archaeologist, a professor, and an explorer. In the adult version, the SG consists of finding the sixteen historical landmarks in the village and learning historical and architectural information in an interactive and engaging way. The application is being tested on a sample of adults and children; test subjects will be surveyed on a Likert scale to assess their perceptions of using the app and to compare the learning experience of the guided tour with that of interacting with the app.
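
The location-based game loop described above can be sketched independently of Unity. The fragment below (Python; the landmark coordinates, trigger radius, and reward names are hypothetical placeholders, not values from the project) shows the underlying logic: a haversine distance test decides when a player has reached one of the clue locations, and one reward mask is unlocked for every three clues found.

```python
import math

# Hypothetical clue locations (lat, lon); the real app anchors AR characters at surveyed points.
LANDMARKS = {
    "village_walls": (42.9220, 10.6800),
    "church_san_biagio": (42.9222, 10.6804),
    "marquis_palace": (42.9218, 10.6807),
}
REWARDS = ["archaeologist mask", "professor mask", "explorer mask"]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def update_progress(player_lat, player_lon, found, trigger_radius_m=15.0):
    """Mark any landmark within the trigger radius as found; every 3 clues unlocks a reward."""
    for name, (lat, lon) in LANDMARKS.items():
        if name not in found and haversine_m(player_lat, player_lon, lat, lon) <= trigger_radius_m:
            found.add(name)
    return found, REWARDS[: len(found) // 3]

found, unlocked = update_progress(42.9221, 10.6801, set())
print(found, unlocked)
```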

Keywords: augmented reality, cultural heritage, GPS, serious game

Procedia PDF Downloads 69
96 Mapping of Urban Micro-Climate in Lyon (France) by Integrating Complementary Predictors at Different Scales into Multiple Linear Regression Models

Authors: Lucille Alonso, Florent Renard

Abstract:

The characterization of urban heat islands (UHI) and of their interactions with climate change and urban climates is a major research and public health issue, due to the increasing urbanization of the population. Addressing it requires better knowledge of the UHI and of the micro-climate in urban areas, obtained by combining measurements and modelling. This study contributes to this topic by evaluating microclimatic conditions in dense urban areas of the Lyon Metropolitan Area (France) using a combination of traditionally used data, such as topography, together with LiDAR (Light Detection And Ranging) data, Landsat 8 and Sentinel satellite observations, and ground measurements collected by bicycle. These bicycle-based weather measurements are used to build the database of the variable to be modelled, the air temperature, over Lyon's hyper-centre. The study aims to model the air temperature, measured during six mobile campaigns in Lyon in clear weather, using multiple linear regressions based on 33 explanatory variables. These belong to various categories, such as meteorological parameters from remote sensing, topographic variables, vegetation indices, the presence of water, humidity, bare soil, buildings, radiation, urban morphology, and proximity and density with respect to various land uses (water surfaces, vegetation, bare soil, etc.). The acquisition sources are multiple: the Landsat 8 and Sentinel satellites, LiDAR point clouds, and cartographic products downloaded from the Greater Lyon open data platform. For the presence of low, medium, and high vegetation, buildings, and bare ground, several buffer radii around the measurement points were tested (5, 10, 20, 25, 50, 100, 200 and 500 m). The buffers with the best linear correlations with air temperature are 5 m for ground and for low and medium vegetation, 50 m for buildings, and 100 m for high vegetation. The explanatory model of the dependent variable is obtained by multiple linear regression of the remaining explanatory variables (retained using a Pearson correlation matrix with |r| < 0.7 and VIF < 5), combined with a stepwise selection algorithm. Moreover, holdout cross-validation (80% training, 20% testing) is performed because of its ability to detect over-fitting of multiple regression, even though multiple regression provides internal validation. Multiple linear regression explained, on average, 72% of the variance for the study days, with an average RMSE of only 0.20°C. Surface temperature is the most important variable in the estimation of air temperature. Other recurrent variables include the distance to subway stations, the distance to water areas, NDVI, the digital elevation model, the sky view factor, average vegetation density, and building density. Changing urban morphology influences the city's thermal patterns, and the thermal atmosphere in dense urban areas can only be analysed at the micro-scale, so as to account for the local impact of trees, streets, and buildings. There is currently no network of fixed weather stations sufficiently dense in central Lyon or in most major urban areas; it is therefore necessary to use mobile measurements, followed by modelling, to characterize the city's multiple thermal environments.
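
A minimal sketch of the modelling chain described above (collinearity filtering, stepwise selection, and an 80/20 holdout) is given below in Python with scikit-learn; the data loader, the use of training R² as the stepwise criterion, and the omission of the VIF check are simplifying assumptions for illustration, not details taken from the study.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

def drop_collinear(X, threshold=0.7):
    """Drop predictors whose pairwise Pearson |r| with an earlier column exceeds the threshold."""
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    return X.drop(columns=[c for c in upper.columns if (upper[c] > threshold).any()])

def forward_stepwise(X, y, max_vars=10):
    """Greedy forward selection on training R^2 (a simple stand-in for the stepwise algorithm)."""
    selected, remaining = [], list(X.columns)
    while remaining and len(selected) < max_vars:
        scores = {c: LinearRegression().fit(X[selected + [c]], y).score(X[selected + [c]], y)
                  for c in remaining}
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical loader: 33 candidate predictors (surface temperature, NDVI, buffers, ...) and
# the bicycle-measured air temperature as the response.
# X, y = load_campaign_data()
# X = drop_collinear(X)
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
# kept = forward_stepwise(X_tr, y_tr)
# model = LinearRegression().fit(X_tr[kept], y_tr)
# rmse = mean_squared_error(y_te, model.predict(X_te[kept])) ** 0.5
```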

Keywords: air temperature, LiDAR, multiple linear regression, surface temperature, urban heat island

Procedia PDF Downloads 110
95 Barriers to Business Model Innovation in the Agri-Food Industry

Authors: Pia Ulvenblad, Henrik Barth, Jennie Cederholm Björklund, Maya Hoveskog, Per-Ola Ulvenblad

Abstract:

The importance of business model innovation (BMI) is widely recognized. This also holds for firms in the agri-food industry, which is closely connected to global challenges: worldwide food production will have to increase by 70% by 2050, and the United Nations' Sustainable Development Goals prioritize research and innovation on food security and sustainable agriculture. Firms in the agri-food industry thus have opportunities to increase their competitive advantage through BMI. However, the BMI process is complex, and the implementation of new business models is associated with a high degree of risk and failure, so managers from all industries, as well as scholars, need to better understand how to address this complexity. The research presented in this paper therefore (i) explores the categories of barriers identified in the research literature on business models in the agri-food industry and (ii) illustrates these categories with empirical cases. The study addresses the rather limited understanding of barriers to BMI in the agri-food industry through a systematic literature review (SLR) of 570 peer-reviewed journal articles that contained a combination of 'BM' or 'BMI' with agriculture-related and food-related terms (e.g. 'agri-food sector'), published in the period 1990-2014. The study classifies the barriers into several categories and illustrates them with ten empirical cases. Findings from the literature review show that barriers are mainly identified as outcomes. It can be assumed that a perceived barrier to growth is often initially exaggerated or underestimated before being challenged by appropriate measures or courses of action; what the public mind considers a barrier may in reality be very different from the actual barrier that needs to be challenged. One way of addressing barriers to growth is to define them according to their origin (internal/external) and nature (tangible/intangible). This framework encompasses barriers related to the firm (internal, addressing in-house conditions) or to the industrial or national level (external, addressing environmental conditions). Tangible barriers can include asset shortages in equipment or facilities, while human resource deficiencies or a negative attitude towards growth are examples of intangible barriers. Our findings are consistent with previous research on barriers to BMI, which has identified human-factor barriers (individuals' attitudes, histories, etc.), contextual barriers related to company and industry settings, and more abstract barriers (government regulations, value chain position, and weather). However, human-factor barriers, and opportunities, related to family-owned businesses with idealistic values and attitudes and with ownership of the real estate where the business is located are more frequent in the agri-food industry than in other industries. This paper contributes by generating a classification of barriers to BMI and by illustrating them with empirical cases. We argue that internal barriers such as human factors, values, and attitudes are crucial to overcome in order to develop BMI, yet they can be as hard to overcome as, for example, institutional barriers such as government regulations. The implications for research and practice are to focus on cognitive barriers and to develop the BMI capability of the owners and managers of agri-food firms.

Keywords: agri-food, barriers, business model, innovation

Procedia PDF Downloads 198
94 Company's Orientation and Human Resource Management Evolution in Technological Startup Companies

Authors: Yael Livneh, Shay Tzafrir, Ilan Meshoulam

Abstract:

Technological startup companies have been recognized as bearing tremendous potential for business and economic success. However, many entrepreneurs who produce promising innovative ideas fail to turn them into successful businesses. A key explanation for such failure is the entrepreneurs' lack of competence in adopting the appropriate level of formality of human resource management (HRM). The purpose of the present research was to examine multiple antecedents and consequences of HRM formality in growing startup companies. A review of the research literature identified two central components of HRM formality: HR control and professionalism. The effect of three contextual predictors was examined. The first was an intra-organizational factor: the development level of the organization. We drew on the differentiation between knowledge exploration and knowledge exploitation; at a given time, the organization chooses to focus on a specific mix of these orientations, a choice that requires an appropriate level of HRM formality in order to overcome the associated challenges efficiently. It was hypothesized that the mix of knowledge exploration and knowledge exploitation orientations would predict HRM formality. The second predictor concerned the personal characteristics of the organization's leader: following the idea of the blueprint effect of CEOs on HRM, it was hypothesized that the CEO's cognitive style would predict HRM formality. The third contextual predictor was an external organizational factor, the level of investor involvement. Drawing on agency theory and on transaction cost economics, it was hypothesized that the level of investor involvement in general management and in HRM would be positively related to HRM formality. The effect of formality on trust was examined both directly and indirectly, through the mediating role of procedural justice. The research method was a time-lagged field study. In the first study, data were obtained using three questionnaires, each directed to a different source: the CEO, the HR position-holder, and employees; 43 companies participated. The second study was conducted approximately a year later, and data were collected again using three questionnaires administered to the same sample; 41 companies participated. The samples consisted of technological startup companies, and the two studies together included 884 respondents. The results were consistent across the two studies. HRM formality was predicted by the intra-organizational factor and by the personal characteristics of the CEO, but not at all by the external organizational context. Specifically, the organizational orientation was the greatest contributor to both components of HRM formality, the cognitive style predicted formality to a lesser extent, and investor involvement was found to have no predictive effect on HRM formality. The results also indicated a positive contribution of HRM formality to trust, mainly through the mediation of procedural justice. This study contributes a new way of conceptualizing technological startup company development as a mixture of organizational orientations. The practical implication is that the level of HRM formality should be matched to the company's development; this match should be revisited and adjusted periodically with reference to the organization's orientation, the relevant HR practices, and the characteristics of the HR function. A suitable match can further enhance trust and business success.

Keywords: control, formality, human resource management, organizational development, professionalism, technological startup company

Procedia PDF Downloads 242
93 Developing a Machine Learning-Based Cost Prediction Model for Construction Projects Using Particle Swarm Optimization

Authors: Soheila Sadeghi

Abstract:

Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
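
A condensed sketch of the PSO-tuned ANN idea is shown below (Python with scikit-learn). Here PSO searches over two hyperparameters, hidden layer size and regularization strength, as an illustrative stand-in, since the abstract does not state exactly which ANN parameters were optimized; the data loader and the search bounds are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def fitness(params, X_tr, y_tr, X_val, y_val):
    """Validation RMSE of an ANN whose hyperparameters are encoded as [hidden_units, log10(alpha)]."""
    hidden = max(int(round(params[0])), 1)
    alpha = 10.0 ** params[1]
    model = MLPRegressor(hidden_layer_sizes=(hidden,), alpha=alpha, max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)
    return mean_squared_error(y_val, model.predict(X_val)) ** 0.5

def pso(obj, bounds, n_particles=15, n_iter=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over a box-constrained search space (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([obj(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([obj(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Hypothetical usage with project records loaded elsewhere:
# X, y = load_project_records()
# X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
# bounds = np.array([[4, 64], [-5, -1]])   # hidden units, log10(alpha)
# best_params, best_rmse = pso(lambda p: fitness(p, X_tr, y_tr, X_val, y_val), bounds)
```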

Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction

Procedia PDF Downloads 18
92 The Greek Revolution Through the Foreign Press: The Case of the Newspaper "The London Times" in the Period 1821-1828

Authors: Euripides Antoniades

Abstract:

In 1821 the Greek Revolution, under the political influence of the French Revolution and of the corresponding movements in Italy, Germany, and America, sought the liberation of the nation and the establishment of an independent national state. Topics published in the British press regarding the Greek Revolution focused on: a) the right of the Greeks to claim their freedom from Turkish domination in order to establish an independent state based on the principle of national autonomy; b) criticism of Turkish rule as illegal and of the power of the Ottoman Sultan as arbitrary; c) the recognition of Greek identity and its distinction from the Turkish one; and d) the endorsement of the Greeks as descendants of the ancient Greeks. The advantage of the newspaper as a medium is that it shares information and ideas and deals with issues in greater depth and detail than other media, such as radio or television. The London Times is a print publication that presents, in chronological or thematic order, the news, opinions, or announcements about the most important events that have occurred in a place during a specified period of time. This paper draws on the rich archive of The London Times, quoting extracts from publications of the period to convey the British public perspective on the Greek Revolution from its beginning until the London Protocol of 1828. Furthermore, it analyses the publications of the British newspaper in terms of the number of references to the Greek Revolution, front-page and editorial references, and the length of publications on the revolution during the period 1821-1828. A combination of qualitative and quantitative content analysis was applied: references to the Greek Revolution were recorded along with the use of specific words and expressions that contribute to the representation of the historical events and to their exposure to the reading public. Key findings of this research reveal that a) The London Times carried frequent and passionate daily articles on the events in Greece, notable for their length and context; b) British public opinion was influenced by this particular newspaper; and c) the newspaper published varied news about the revolution, adopting the role of an animator of the Greek struggle. For instance, war events and the battles of Wallachia and Moldavia, Hydra, Crete, Psara, Messolonghi, and the Peloponnese were presented not only to inform readers but also to promote the essential need for freedom and the establishment of an independent Greek state. In fact, this type of news formed the main substance of The London Times' coverage, establishing a positive image of the Greek Revolution and contributing to European diplomatic developments, such as the standpoint of France, which did not wish to be detached from the conclusions regarding the English loans, and the death of Alexander I of Russia and his succession by the ambitious Nicholas. These factors brought about a change in the attitude of the British and the Russians respectively, who adopted a more positive approach towards Greece. The Great Powers maintained a neutral position in the Greek-Ottoman conflict while at the same time strengthening the Greek side by offering aid.
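
The quantitative side of the content analysis (counting references and front-page shares per year) can be sketched as below; the file name, column names, and keyword list are hypothetical stand-ins for the coding scheme actually used on the digitized archive.

```python
import re
import pandas as pd

KEYWORDS = ["Greece", "Greeks", "Ottoman", "revolution", "Messolonghi", "Peloponnese"]

# Hypothetical input: one row per article with 'date', 'page', and 'text' columns.
articles = pd.read_csv("times_articles_1821_1828.csv", parse_dates=["date"])

def count_terms(text):
    """Count whole-word, case-insensitive occurrences of each keyword in one article."""
    return {kw: len(re.findall(rf"\b{re.escape(kw)}\b", text, flags=re.IGNORECASE))
            for kw in KEYWORDS}

term_counts = articles["text"].apply(count_terms).apply(pd.Series)
yearly = pd.concat([articles["date"].dt.year.rename("year"), term_counts], axis=1).groupby("year").sum()
front_page_share = (articles["page"] == 1).groupby(articles["date"].dt.year).mean()
print(yearly)
print(front_page_share)
```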

Keywords: Greece, revolution, newspaper, The London Times, London, Great Britain, mass media

Procedia PDF Downloads 69
91 Development and Implementation of an "Electric Island" Monitoring Infrastructure for Promoting Energy Efficiency in Schools

Authors: Vladislav Grigorovitch, Marina Grigorovitch, David Pearlmutter, Erez Gal

Abstract:

The concept of the “electric island” involves achieving a balance between each educational institution's capacity for self-generation of power and its energy consumption demand. A photovoltaic (PV) solar system installed on the roofs of educational buildings is a common way to absorb the available solar energy and generate electricity for self-consumption, and even for feeding back to the grid. The main objective of this research is to develop and implement an “electric island” monitoring infrastructure for promoting energy efficiency in educational buildings. A micro-scale monitoring methodology is being developed to provide a platform for estimating energy consumption performance classified by rooms and subspaces, rather than the more common macro-scale monitoring of the whole building. The monitoring platform will be established at the experimental sites, enabling estimation and further analysis of a variety of environmental and physical conditions. For each building, a separate measurement configuration will be applied, taking into account its specific requirements, restrictions, location, and infrastructure. The direct results of the measurements will be analysed to provide a deeper understanding of the impact of environmental conditions and sustainable construction standards, not only on the energy demand of public buildings but also on the energy consumption habits of the children who study in those schools and of the educational and administrative staff responsible for providing thermal comfort and a healthy studying atmosphere for the children. The monitoring methodology developed in this research provides online access to real-time data of Interferential Therapy (IFTs) from any mobile phone or computer by simply browsing a dedicated website, giving policy makers powerful tools for better decision making while developing the PV production infrastructure needed to achieve “electric islands” in educational buildings. A detailed measurement configuration was technically designed for the specific conditions and restrictions of each of the pilot buildings. The monitoring and analysis methodology covers a large variety of environmental parameters inside and outside the schools, in order to investigate the impact of environmental conditions both on the energy performance of the school and on the educational abilities of the children. Indoor measurements are mandatory for acquiring energy consumption data, temperature, humidity, carbon dioxide, and other air quality conditions in different parts of the building; in addition, we aim to study users' awareness of energy considerations and thus the impact on their energy consumption habits. The monitoring of outdoor conditions is vital for the proper design of the off-grid energy supply system and for validating that its capacity is sufficient. The intended outcomes of this research are: 1. designing both experimental sites to have PV production and storage capabilities; 2. developing an online information feedback platform that provides dedicated information to academic researchers, municipality officials, educational staff, and students; and 3. designing an environmental work path for educational staff regarding optimal conditions and efficient hours for operating air conditioning, natural ventilation, closing of blinds, etc.
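
A room-level aggregation of the kind the monitoring platform is meant to support might look like the sketch below (Python/pandas; the logger export, its column names, and the one-row-per-reading layout are assumptions for illustration): hourly mean power per room is summed into daily energy and compared with rooftop PV output to check how close the building comes to an "electric island".

```python
import pandas as pd

# Hypothetical logger export: one row per reading with 'timestamp', 'room', and 'power_kw'
# columns; the rooftop PV inverter reports under the room label 'pv_roof'.
readings = pd.read_csv("school_meter_log.csv", parse_dates=["timestamp"])

hourly = (readings.pivot_table(index="timestamp", columns="room", values="power_kw")
                   .resample("1H").mean())          # mean kW per hour ~= kWh for that hour
daily_use = hourly.drop(columns="pv_roof", errors="ignore").sum(axis=1).resample("1D").sum()
daily_pv = hourly.get("pv_roof", pd.Series(0.0, index=hourly.index)).resample("1D").sum()

balance = daily_pv - daily_use   # > 0: generation covered consumption ("electric island" day)
print(balance.describe())
print("share of self-sufficient days:", (balance > 0).mean())
```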

Keywords: sustainability, electric island, IoT, smart building

Procedia PDF Downloads 156
90 Nanoparticle Supported, Magnetically Separable Metalloporphyrin as an Efficient Retrievable Heterogeneous Nanocatalyst in Oxidation Reactions

Authors: Anahita Mortazavi Manesh, Mojtaba Bagherzadeh

Abstract:

Metalloporphyrins are well known to mimic the activity of monooxygenase enzymes. Metalloporphyrin complexes have therefore been widely employed as valuable biomimetic catalysts, owing to the critical roles they play in oxygen transfer during catalytic oxidation reactions. Research in this area is based on different strategies to design selective, stable, and high-turnover catalytic systems. Immobilization of expensive metalloporphyrin catalysts onto supports appears to be a good way to improve their stability, selectivity, and catalytic performance, because of the support environment and of other advantages with respect to recovery and reuse. In other words, supporting metalloporphyrins provides a physical separation of the active sites, thus minimizing catalyst self-destruction and the dimerization of unhindered metalloporphyrins. Furthermore, heterogeneous catalytic oxidation has become an important target, since such processes are used in industry and help to minimize the problems of industrial waste treatment. Hence, the immobilization of these biomimetic catalysts is much desired. An attractive approach to preparing heterogeneous catalysts involves the immobilization of complexes on silica-coated magnetic nanoparticles. Fe3O4@SiO2 magnetic nanoparticles have been studied extensively because of their superparamagnetism, large surface-area-to-volume ratio, and easy functionalization. Using heterogenized homogeneous catalysts is an attractive option for facile catalyst separation, simplified product work-up, and continuity of the catalytic system. Homogeneous catalysts immobilized on the surface of magnetic nanoparticles (MNPs) occupy a unique position because they combine the advantages of both homogeneous and heterogeneous catalysts; in addition, the superparamagnetic nature of MNPs enables very simple separation of the immobilized catalysts from the reaction mixture using an external magnet. In the present work, an efficient heterogeneous catalyst was prepared by immobilizing a manganese porphyrin on functionalized magnetic nanoparticles through an aminopropyl linkage. The prepared catalyst was characterized by elemental analysis, FT-IR spectroscopy, X-ray powder diffraction, atomic absorption spectroscopy, UV-Vis spectroscopy, and scanning electron microscopy. The application of the immobilized metalloporphyrin in the oxidation of various organic substrates was explored using gas chromatographic (GC) analysis. The results showed that the supported Mn-porphyrin catalyst (Fe3O4@SiO2-NH2@MnPor) is an efficient and reusable catalyst for oxidation reactions. The catalytic system exhibits high catalytic activity in terms of turnover number (TON) and reaction conditions. Leaching and recycling experiments revealed that the nanocatalyst can be recovered several times without loss of activity or magnetic properties. The most important advantage of this heterogenized catalytic system is the simplicity of catalyst separation: the catalyst can be removed from the reaction mixture simply by applying a magnet. Furthermore, the separation and reuse of the magnetic Fe3O4 nanoparticles were very effective and economical.

Keywords: Fe3O4 nanoparticle, immobilized metalloporphyrin, magnetically separable nanocatalyst, oxidation reactions

Procedia PDF Downloads 278
89 ESRA: An End-to-End System for Re-identification and Anonymization of Swiss Court Decisions

Authors: Joel Niklaus, Matthias Sturmer

Abstract:

The publication of judicial proceedings is a cornerstone of many democracies: it makes the court system accountable by ensuring that justice is administered in accordance with the laws. Equally important is privacy, a fundamental human right (Article 12 of the Universal Declaration of Human Rights). It is therefore important that the parties involved in these court decisions (especially minors, victims, or witnesses) be anonymized securely. Today, the anonymization of court decisions in Switzerland is performed either manually or semi-automatically using primitive software. While much research has been conducted on anonymization for tabular data, the literature on anonymization for unstructured text documents is thin and virtually non-existent for court decisions. In 2019, it was shown that manual anonymization is not secure enough: in 21 of 25 attempted Swiss federal court decisions related to pharmaceutical companies, the pharmaceuticals and legal parties involved could be manually re-identified. This was achieved by linking the decisions with external databases using regular expressions. An automated re-identification system serves as an automated test of the safety of existing anonymizations and thus promotes the right to privacy. Manual anonymization is also very expensive (recurring annual costs of over CHF 20M in Switzerland alone, according to one estimate); consequently, many Swiss courts publish only a fraction of their decisions. An automated anonymization system reduces these costs substantially, in turn creating capacity to publish court decisions much more comprehensively. For the re-identification system, topic modeling with latent Dirichlet allocation is used to cluster over 500K Swiss court decisions into meaningful related categories. A comprehensive knowledge base of publicly available data (such as social media, newspapers, government documents, geographical information systems, business registers, online address books, obituary portals, web archives, etc.) is constructed to serve as an information hub for re-identification. For the actual re-identification, a general-purpose language model is fine-tuned on the respective part of the knowledge base for each category of court decisions separately. The input to the model is the court decision to be re-identified, and the output is a probability distribution over named entities constituting possible re-identifications. For the anonymization system, named entity recognition (NER) is used to recognize the tokens that need to be anonymized. Since the focus lies on Swiss court decisions in German, a corpus of Swiss legal texts will be built for training the NER model. The recognized named entities are replaced by the category determined by the NER model plus an identifier to preserve context. This work is part of an ongoing research project conducted by an interdisciplinary research consortium; both a legal analysis and the implementation of the proposed system design, ESRA, will be performed within the next three years. This study introduces the system design of ESRA, an end-to-end system for re-identification and anonymization of Swiss court decisions. Firstly, the re-identification system tests the safety of existing anonymizations and thus promotes privacy. Secondly, the anonymization system substantially reduces the costs of manual anonymization of court decisions and thus enables a more comprehensive publication practice.
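
The anonymization step described above, replacing each recognized entity with its category plus an identifier so that context is preserved, can be sketched as follows (Python with spaCy; the off-the-shelf German pipeline and the example sentence are placeholders, since ESRA would rely on an NER model trained on the Swiss legal corpus).

```python
import spacy

# Assumes the small German pipeline is installed; the project would substitute a model
# fine-tuned on Swiss court decisions.
nlp = spacy.load("de_core_news_sm")

def anonymize(decision_text, labels=("PER", "ORG", "LOC")):
    """Replace each recognized entity with its category and a running identifier."""
    doc = nlp(decision_text)
    mapping, counters, out, last = {}, {}, [], 0
    for ent in doc.ents:
        if ent.label_ not in labels:
            continue
        if ent.text not in mapping:
            counters[ent.label_] = counters.get(ent.label_, 0) + 1
            mapping[ent.text] = f"[{ent.label_}_{counters[ent.label_]}]"
        out.append(decision_text[last:ent.start_char])
        out.append(mapping[ent.text])
        last = ent.end_char
    out.append(decision_text[last:])
    return "".join(out), mapping

text, key = anonymize("A. Muster klagte gegen die Beispiel AG vor dem Bezirksgericht Zürich.")
print(text)   # entities replaced by e.g. [PER_1], [ORG_1], [LOC_1]; 'key' retains the mapping
```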

Keywords: artificial intelligence, courts, legal tech, named entity recognition, natural language processing, privacy, topic modeling

Procedia PDF Downloads 122
88 Masstige and the New Luxury: An Exploratory Study on Cosmetic Brands Among Black African Women

Authors: Melanie Girdharilall, Anjli Himraj, Shivan Bhagwandin, Marike Venter De Villiers

Abstract:

The allure of luxury has long been attractive, fashionable, mystifying, and complex. As globalisation and the popularity of social media continue to evolve, consumers increasingly seek status products. However, in emerging economies like South Africa, where 60% of the population lives in poverty, this desire is often far-fetched and out of reach for most consumers. As a result, luxury brands are introducing masstige products: products associated with luxury and status but within financial reach of the middle-class consumer. The biggest challenge this industry faces is a lack of knowledge and expertise regarding black females' hair composition, and a failure to offer products that meet their intricate requirements. African consumers have unique hair types, and global brands often do not accommodate the complex nature of their hair and their product needs. By gaining insight into this phenomenon, global cosmetic brands can benefit from brand expansion, product extensions, increased brand awareness, brand knowledge, and brand equity. The purpose of this study is to determine how cosmetic brands can leverage the concept of masstige products to cater to the needs of middle-income black African women. The study explores the 18- to 35-year-old black female cohort, which comprises approximately 17% of the South African population; the black hair care industry in Africa is expected to grow at a rate of 6% over the next five years. The study is grounded in Paul's (2019) three-phase model for masstige marketing. This model demonstrates that product, promotion, and place strategies play a significant role in masstige value creation, and shows the impact of these strategies on branding dimensions (brand trust, brand association, brand positioning, brand preference, etc.). More specifically, this theoretical framework encompasses nine stages, or dimensions, that are of critical importance to companies planning to enter the masstige market. In short, the most critical components to consider are, first, the positioning of the product and its competitive advantage relative to competitors; second, advertising appeals and the use of celebrities; and, lastly, distribution channels, such as online or in-store, while maintaining the exclusivity of the brand. An exploratory, qualitative approach was undertaken, and focus groups were conducted among black African women. The focus groups were voice recorded, transcribed, and analysed using Atlas software. The main themes were identified and used to provide brands with insight and direction for developing a comprehensive marketing mix for effectively entering the masstige market. The findings of this study will provide marketing practitioners with in-depth insight into how to position masstige brands effectively in line with consumer needs. They will give direction to both existing and new brands aiming to enter this market by offering a comprehensive marketing mix for targeting the growing black hair care industry in Africa.

Keywords: Africa, masstige, cosmetics, hair care, black females

Procedia PDF Downloads 69
87 Study of the Diaphragm Flexibility Effect on the Inelastic Seismic Response of Thin Wall Reinforced Concrete Buildings (TWRCB): A Purpose to Reduce the Uncertainty in the Vulnerability Estimation

Authors: A. Zapata, Orlando Arroyo, R. Bonett

Abstract:

Over the last two decades, the growing demand for housing in Latin American countries has led to the development of construction projects based on low- and medium-rise buildings with thin reinforced concrete walls. This system, known as Thin Wall Reinforced Concrete Buildings (TWRCB), uses walls with thicknesses from 100 to 150 millimetres and flexural reinforcement formed by welded wire mesh (WWM) with diameters between 5 and 7 millimetres, arranged in one or two layers. These walls often have irregular structural configurations, including combinations of rectangular shapes. Experimental and numerical research conducted in regions where this structural system is commonplace indicates inherent weaknesses, such as limited ductility due to the WWM reinforcement and the thin element dimensions. Because of its complexity, numerical analysis has relied on two-dimensional models that do not explicitly account for the floor system, even though it plays a crucial role in distributing seismic forces among the resisting elements; instead, these analyses assume a rigid diaphragm. For this study, two case-study buildings were selected, with the low-rise and mid-rise characteristics typical of TWRCB in Colombia. The buildings were analysed in OpenSees using the MVLEM-3D element for the walls and shell elements to simulate the slabs, so that the coupling effect of the diaphragm is included in the nonlinear behaviour. Three cases are considered: a) models without a slab, b) models with rigid slabs, and c) models with flexible slabs. Incremental static (pushover) and nonlinear dynamic analyses were carried out using the set of 44 far-field ground motions of FEMA P-695, scaled by factors of 1.0 and 1.5 to assess the probability of collapse at the design basis earthquake (DBE) and the maximum considered earthquake (MCE) levels, according to the sites and hazard zone of the archetypes in the Colombian NSR-10 code. Base shear capacity, maximum roof displacement, individual wall base shear demands, and probabilities of collapse were calculated to evaluate the effect of the absence of a slab and of rigid and flexible slabs on the nonlinear behaviour of the archetype buildings. The pushover results show that the buildings exhibit an overstrength between 1.1 and 2 when the slab is modelled explicitly, depending on the plan configuration of the structural walls; in addition, the nonlinear behaviour obtained without a slab is more conservative than when the slab is represented. Including the flexible slab in the analysis highlights the importance of considering the slab's contribution to the distribution of shear forces among structural elements according to their design resistance and rigidity. The dynamic analyses revealed that including the slab reduces the collapse probability of this system, since it yields lower displacements and deformations, thereby enhancing the safety of residents and the seismic performance. Explicitly modelling the slab is therefore important for capturing the real effect of coupling on the distribution of shear forces in the walls, for estimating the correct nonlinear behaviour of this system, and for proportioning the resistance and rigidity of the elements in design so as to reduce the possibility of damage during an earthquake.
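
The collapse-probability bookkeeping for the two scaled record sets can be sketched as below (Python; the result files, the 2% drift capacity, and the use of peak drift as the collapse criterion are assumptions for illustration, not values reported in the study).

```python
import numpy as np

# Hypothetical peak drift results for the 44 FEMA P-695 far-field records at the two
# scale factors; "collapse" is declared when the drift exceeds an assumed 2% capacity.
drift_dbe = np.loadtxt("drifts_scale_1_0.txt")   # one value per record, scale factor 1.0
drift_mce = np.loadtxt("drifts_scale_1_5.txt")   # one value per record, scale factor 1.5
DRIFT_CAPACITY = 0.02                            # assumed collapse drift ratio

def collapse_probability(drifts, capacity=DRIFT_CAPACITY):
    """Fraction of records exceeding the collapse criterion."""
    return float(np.mean(drifts > capacity))

for label, drifts in [("DBE (x1.0)", drift_dbe), ("MCE (x1.5)", drift_mce)]:
    print(f"P(collapse | {label}) = {collapse_probability(drifts):.2f} over {len(drifts)} records")
```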

Keywords: thin wall reinforced concrete buildings, coupling slab, rigid diaphragm, flexible diaphragm

Procedia PDF Downloads 45
86 Office Workspace Design for Policewomen in Assam, India: Applications for Developing Countries

Authors: Shilpi Bora, Abhirup Chatterjee, Debkumar Chakrabarti

Abstract:

Organizations across all sectors around the world are increasingly revisiting their workplace strategies with due concern for the women working in them. Limited office space and rigid work arrangements contribute to lower job satisfaction and greater work impediments in any organization. Flexible workspace strategies are indispensable for accommodating the progressive rise of modular workstations and the growing involvement of women. Today's generation of employees deserves adaptable office environments with employee-friendly job conditions and strategies. The workplace nowadays is shaped by rapid organizational change within a progressive and flexible work culture, and occupational well-being practices need to keep pace with the rapid changes in office-based work. Working at an office workspace in awkward postures or for long periods can cause pain, discomfort, and injury. The world is moving towards an era of globalization and progress. The 4,000 women police personnel constitute less than one per cent of the total police strength of India. Many innovative fields are growing fast, and it is important to accommodate women in those arenas; timeworn practices should be set aside to open up fresh opportunities and possibilities for development and success through greater involvement of women in the workplace. The notion of women in policing is gaining ground throughout the world, and various countries are making serious efforts to mainstream women in policing. As the role of women in policing grows, the accessibility of general police stations to women also deserves consideration. Accordingly, the impact of the police station workspace on employee productivity has been widely discussed as a crucial contributor to employee satisfaction, leading to better functional motivation. The present research therefore examined the office workstation design of police stations with reference to womanhood-specific issues, with the aim of improving the occupational well-being of policewomen. Personal interviews were conducted and individual responses were collected through a subjective assessment questionnaire administered to thirty women police personnel of different ranks posted in Guwahati, Assam, India, selected by purposive non-probability sampling, in order to obtain their views on these issues. Scrutiny of the collected data revealed that office design has a substantial impact on policewomen's job satisfaction at the police station. In this study, the workspace was assessed in terms of the set of factors that affect the individual and determine productivity; office design factors such as furniture, noise, temperature, lighting, and spatial arrangement were considered. The primary feature affecting the productivity of policewomen was the furniture used in the workspace, which was found to disturb their everyday and overall productivity. It is therefore recommended that proper and adequate ergonomic design interventions be made to improve the office design for better performance. Studies of this kind are much needed today to empower women and to enable their talents to be put to the service of the nation. Office workspace design is also of critical importance in several other occupations where office workstations need further improvement.

Keywords: office workspace design, policewomen, womanhood concerns at workspace, occupational wellbeing

Procedia PDF Downloads 209
85 Cultural Competence in Palliative Care

Authors: Mariia Karizhenskaia, Tanvi Nandani, Ali Tafazoli Moghadam

Abstract:

Hospice palliative care (HPC) is one of the most complex philosophies of care, in which the physical, social/cultural, and spiritual aspects of human life are intermingled and each plays an undeniably significant role. Among these dimensions of care, culture occupies an outstanding position in determining both the process and the goals of HPC. This study shows the importance of cultural elements in establishing effective and optimized structures of HPC in the Canadian healthcare environment. Our systematic search included Medline, Google Scholar, and the St. Lawrence College Library, considering original, peer-reviewed research papers published from 1998 to 2023, to identify recent national literature connecting culture and palliative care delivery. The feature presented most frequently in these articles is the role of culture in the effectiveness of HPC: it has been shown repeatedly that including the culture-specific parameters of each nation in this system of care is vital to its success, whereas ignorance of the distinctive cultural trends of a specific location has been accompanied by significant failure rates. Accordingly, implementing a culturally adaptable approach is mandatory for multicultural societies. A further outcome of research in this field underscores the importance of culture-oriented education for healthcare staff, so that all practitioners involved in HPC recognize the importance of traditions, religions, and social habits when assessing care requirements. Cultural competency training is a telling example of how this strategy has been established in health care and has come to the aid of HPC in recent years. Another complexity of culturally informed HPC today is the long-standing issue of racialization: systematic and subconscious deprivation of minorities has always been an obstacle to advanced levels of care. The last part of our research outcomes comprises the ethical considerations of culturally driven HPC. This is the most sophisticated aspect of the topic, because almost all the analyses, arguments, and justifications involved are subjective. While there is no standard measure for ethical elements in clinical studies with palliative interventions, many research teams endorse applying ethical principles to all the patients involved. Notably, interpretations and projections of ethics differ across cultural backgrounds; therefore, healthcare providers should always be aware of the most respectful methodologies of HPC on a case-by-case basis. Cultural training programs have been utilized as one of the main tactics to improve the ability of healthcare providers to address the cultural needs and preferences of diverse patients and families; in this way, most of the healthcare practitioners involved will be equipped with cultural competence. Attending to the ethical and racial specificities of the clients of this service will boost the effectiveness and fruitfulness of HPC. Canadian society is a colourful compilation of multiple nationalities; accordingly, healthcare clients are diverse, and this diversity is also reflected in HPC patients. This fact justifies the importance of studying all the cultural aspects of HPC in order to provide optimal care across this enormous land.

Keywords: cultural competence, end-of-life care, hospice, palliative care

Procedia PDF Downloads 54
84 Selected Macrophyte Populations Promote Coupled Nitrification and Denitrification Function in Eutrophic Urban Wetland Ecosystem

Authors: Rupak Kumar Sarma, Ratul Saikia

Abstract:

Macrophytes constitute a major functional group in eutrophic wetland ecosystems. As a key functional element of freshwater lakes, they play a crucial role in regulating various wetland biogeochemical cycles, as well as in maintaining biodiversity at the ecosystem level. The large, carbon-rich underground biomass of macrophyte populations may harbour diverse microbial communities with significant potential for maintaining different biogeochemical cycles. The present investigation was designed to study macrophyte-microbe interaction in coupled nitrification and denitrification, considering Deepor Beel Lake (a Ramsar conservation site) of North East India as a model eutrophic system. Highly eutrophic sites of Deepor Beel were selected based on sediment oxygen demand and inorganic phosphorus and nitrogen (P&N) concentration. Sediment redox potential and lake depth were chosen as the benchmarks for collecting the plant and sediment samples. The average highest depth in winter (January 2016) and summer (July 2016) was recorded as 20 ft (6.096 m) and 35 ft (10.668 m), respectively. Both sampling depth and sampling season had a distinct effect on variation in macrophyte community composition. Overall, the dominant macrophytic populations in the lake were Nymphaea alba, Hydrilla verticillata, Utricularia flexuosa, Vallisneria spiralis, Najas indica, Monochoria hastaefolia, Trapa bispinosa, Ipomea fistulosa, Hygrorhiza aristata, Polygonum hydropiper, Eichhornia crassipes and Euryale ferox. There was a distinct correlation between the variation in major sediment physicochemical parameters and changes in macrophyte community composition. Quantitative estimation revealed an almost even accumulation of nitrate and nitrite in the sediment samples dominated by the plant species Eichhornia crassipes, Nymphaea alba, Hydrilla verticillata, Vallisneria spiralis, Euryale ferox and Monochoria hastaefolia, which might signify a stable nitrification and denitrification process at the sites dominated by these aquatic plants. This was further examined by a systematic analysis of microbial populations through culture-dependent and culture-independent approaches. The culture-dependent bacterial community study revealed higher populations of nitrifiers and denitrifiers in the sediment samples dominated by the six macrophyte species. However, the culture-independent study with bacterial 16S rDNA V3-V4 metagenome sequencing revealed broadly similar bacterial phyla in all the sediment samples collected during the study. Thus, there may be an uneven distribution of nitrifying and denitrifying molecular markers among the sediment samples collected during the investigation. The diversity and abundance of the nitrifying and denitrifying molecular markers in the sediment samples are under investigation. Thus, the role of different aquatic plant functional types in microorganism-mediated nitrogen cycle coupling can be screened further on the basis of this initial investigation.

Keywords: denitrification, macrophyte, metagenome, microorganism, nitrification

Procedia PDF Downloads 150
83 Financing the Welfare State in the United States: The Recent American Economic and Ideological Challenges

Authors: Rafat Fazeli, Reza Fazeli

Abstract:

This paper focuses on the study of the welfare state and the social wage in the leading liberal economy of the United States. The welfare state acquired broad acceptance as a major socioeconomic achievement of liberal democracy in the Western industrialized countries during the postwar boom period. The modern and modified vision of capitalist democracy offered, on the one hand, the possibility of a high growth rate and, on the other hand, the possibility of continued progression of a comprehensive system of social support for a wider population. The economic crises of the 1970s provided the ground for a great shift in economic policy and ideology in several Western countries, most notably the United States and the United Kingdom (and to a lesser extent Canada under Prime Minister Brian Mulroney). In the 1980s, the free-market-oriented reforms undertaken under Reagan and Thatcher greatly affected the economic outlook not only of the United States and the United Kingdom but of the whole Western world. The movement behind this shift in policy is often called neoconservatism. The neoconservatives blamed the transfer programs for the decline in economic performance during the 1970s and argued that cuts in spending were required to go back to the golden age of full employment. The agenda of both the Reagan and Thatcher administrations was rolling back the welfare state, and their budgets included a wide range of cuts to social programs. The question is how successful Reagan's and Thatcher's efforts to achieve retrenchment were. The paper involves an empirical study concerning the distributive role of the welfare state in the two countries. Other studies have often concentrated on the redistributive effect of fiscal policy on different income brackets. This study examines the net benefit/burden position of the working population with respect to state expenditures and taxes in the postwar period. This measurement will enable us to find out whether the working population has received a net gain (or net social wage). This study will discuss how the expansion of social expenditures and the trend of the 'net social wage' can be linked to distinct forms of economic and social organization. This study provides an empirical foundation for analyzing the growing significance of the 'social wage', or the collectivization of consumption, and the share of social or collective consumption in the total consumption of the working population in recent decades. The paper addresses three other major questions. The first question is whether the expansion of social expenditures has posed any drag on capital accumulation and economic growth. The findings of this study provide an analytical foundation to evaluate the neoconservative claim that the welfare state is itself the source of the economic stagnation that leads to the crisis of the welfare state. The second question is whether the increasing ideological challenges from the right and the competitive pressures of globalization have led to retrenchment of the American welfare state in recent decades. The third question is how social policies have performed in the presence of rising inequalities in recent decades.
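
The 'net social wage' measure described above can be illustrated with a minimal sketch: benefits received by the working population minus the taxes it pays, optionally expressed as a share of total employee compensation. The figures and function names below are hypothetical placeholders, not the study's data or method; a positive value indicates a net gain, a negative value a net burden.

```python
# Minimal sketch of the net social wage measure, under illustrative assumptions:
# benefits_received and taxes_paid are hypothetical annual aggregates for the
# working population, in the same currency units.

def net_social_wage(benefits_received: float, taxes_paid: float) -> float:
    """Net social wage: benefits received minus taxes paid (net gain if positive)."""
    return benefits_received - taxes_paid

def net_social_wage_share(benefits_received: float, taxes_paid: float,
                          employee_compensation: float) -> float:
    """Net social wage expressed as a share of total employee compensation."""
    return (benefits_received - taxes_paid) / employee_compensation

if __name__ == "__main__":
    # Purely illustrative figures (billions of dollars in a given year).
    benefits, taxes, compensation = 1200.0, 1350.0, 8000.0
    print(f"Net social wage: {net_social_wage(benefits, taxes):.1f} bn")
    print(f"Share of compensation: {net_social_wage_share(benefits, taxes, compensation):+.2%}")
```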

Keywords: the welfare state, social wage, the United States, limits to growth

Procedia PDF Downloads 189
82 Comparing Radiographic Detection of Simulated Syndesmosis Instability Using Standard 2D Fluoroscopy Versus 3D Cone-Beam Computed Tomography

Authors: Diane Ghanem, Arjun Gupta, Rohan Vijayan, Ali Uneri, Babar Shafiq

Abstract:

Introduction: Ankle sprains and fractures often result in syndesmosis injuries. Unstable syndesmotic injuries result from relative motion between the distal ends of the tibia and fibula, an anatomic juncture that should otherwise be rigid, and warrant operative management. Clinical and radiological evaluation of intraoperative syndesmosis stability remains a challenging task, as traditional 2D fluoroscopy is limited to a uniplanar translational displacement. The purpose of this pilot cadaveric study is to compare 2D fluoroscopy and 3D cone-beam computed tomography (CBCT) stress-induced syndesmosis displacements. Methods: Three fresh-frozen lower legs underwent 2D fluoroscopy and 3D CIOS CBCT to measure syndesmosis position before dissection. Syndesmotic injury was simulated by resecting (1) the anterior inferior tibiofibular ligament (AITFL), (2) the posterior inferior tibiofibular ligament (PITFL) and the inferior transverse ligament (ITL) simultaneously, followed by (3) the interosseous membrane (IOM). Manual external rotation and the Cotton stress test were performed after each of the three resections, and 2D and 3D images were acquired. Relevant 2D and 3D parameters included the tibiofibular overlap (TFO), tibiofibular clear space (TCS), relative rotation of the fibula, and anterior-posterior (AP) and medial-lateral (ML) translations of the fibula relative to the tibia. Parameters were measured by two independent observers. Inter-rater reliability was assessed by the intraclass correlation coefficient (ICC) to determine measurement precision. Results: Significant mismatches were found in the trends between the 2D and 3D measurements when assessing TFO, TCS and AP translation across the different resection states. Using 3D CBCT, TFO was inversely proportional to the number of resected ligaments while TCS was directly proportional to the latter across all cadavers and 'resection + stress' states. Using 2D fluoroscopy, this trend was not respected under the Cotton stress test. 3D AP translation did not show a reliable trend, whereas 2D AP translation of the fibula was positive under the Cotton stress test and negative under external rotation. 3D relative rotation of the fibula, assessed using the Tang et al. ratio method and the Beisemann et al. angular method, suggested slight overall internal rotation with complete resection of the ligaments, with a change < 2 mm, a threshold corresponding to the buffer commonly used to account for physiologic laxity as per the clinical judgment of the surgeon. Excellent agreement (>0.90) was found between the two independent observers for each of the parameters in both 2D and 3D (overall ICC 0.9968, 95% CI 0.995 - 0.999). Conclusions: The 3D CIOS CBCT appears to reliably depict the trend in TFO and TCS. This might be due to the additional detection of relevant rotational malpositions of the fibula in comparison with standard 2D fluoroscopy, which is limited to a single-plane translation. A better understanding of 3D imaging may help surgeons identify the precise measurement planes needed to achieve better syndesmosis repair.
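
As a rough illustration of the inter-rater reliability analysis mentioned above, the sketch below computes a two-way random-effects, single-measure intraclass correlation coefficient, ICC(2,1), for an n-measurements-by-two-observers array. It assumes the standard Shrout-Fleiss formulation and uses invented measurement values; it is not the study's actual data or software.

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).

    ratings has shape (n_targets, k_raters): one row per measured parameter
    instance, one column per observer.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)          # per measurement target
    col_means = ratings.mean(axis=0)          # per observer

    # Mean squares from a two-way ANOVA without replication.
    ss_rows = k * np.sum((row_means - grand_mean) ** 2)
    ss_cols = n * np.sum((col_means - grand_mean) ** 2)
    ss_total = np.sum((ratings - grand_mean) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Invented example: six hypothetical TFO measurements (mm) by two observers.
obs = np.array([[8.1, 8.3], [6.4, 6.2], [9.0, 9.1], [5.2, 5.5], [7.7, 7.6], [4.9, 5.0]])
print(f"ICC(2,1) = {icc2_1(obs):.4f}")
```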

Keywords: 2D fluoroscopy, 3D computed tomography, image processing, syndesmosis injury

Procedia PDF Downloads 50
81 Gas Systems of the Amadeus Basin, Australia

Authors: Chris J. Boreham, Dianne S. Edwards, Amber Jarrett, Justin Davies, Robert Poreda, Alex Sessions, John Eiler

Abstract:

The origins of natural gases in the Amadeus Basin have been assessed using molecular and stable isotope (C, H, N, He) systematics. A dominant end-member thermogenic, oil-associated gas is considered for the Ordovician Pacoota−Stairway sandstones of the Mereenie gas and oil field. In addition, an abiogenic end-member is identified in the latest Proterozoic lower Arumbera Sandstone of the Dingo gasfield, most likely associated with radiolysis of methane and polymerisation to wet gases. The latter source assignment is based on a similar geochemical fingerprint derived from laboratory gamma-irradiation experiments on methane. A mixed gas source is considered for the Palm Valley gasfield in the Ordovician Pacoota Sandstone. Gas wetness (%∑C₂−C₅/∑C₁−C₅) decreases in the order Mereenie (19.1%) > Palm Valley (9.4%) > Dingo (4.1%). Non-produced gases at Magee-1 (23.5%; Late Proterozoic Heavitree Quartzite) and Mount Kitty-1 (18.9%; Paleo-Mesoproterozoic fractured granitoid basement) are very wet. Methane thermometry based on clumped isotopes of methane (¹³CH₃D) is consistent with the abiogenic origin for the Dingo gas field, with a methane formation temperature of 254°C. However, the low methane formation temperature of 57°C for the Mereenie gas suggests either a mixed thermogenic-biogenic methane source or that there is no thermodynamic equilibrium between the methane isotopologues. The shallow reservoir depth and present-day formation temperature below 80°C would support microbial methanogenesis, but there is no accompanying alteration of the C- and H-isotopes of the wet gases and CO₂ that is typically associated with biodegradation. The Amadeus Basin gases show low to extremely high inorganic gas contents. Carbon dioxide is low in abundance (< 1% CO₂) and becomes increasingly depleted in ¹³C from the Palm Valley (av. δ¹³C 0‰) to the Mereenie (av. δ¹³C -6.6‰) and Dingo (av. δ¹³C -14.3‰) gas fields. Although the wide range in carbon isotopes for CO₂ is consistent with multiple origins, from inorganic to organic inputs, the most likely process is fluid-rock alteration with enrichment in ¹²C in the residual gaseous CO₂ accompanying progressive carbonate precipitation within the reservoir. Nitrogen ranges from low−moderate abundance (1.7−9.9% N₂; Palm Valley av. 1.8%; Mereenie av. 9.1%; Dingo av. 9.4%) to extremely high abundance in Magee-1 (43.6%) and Mount Kitty-1 (61.0%). The nitrogen isotopes for the production gases, with δ¹⁵N = -3.0‰ for Mereenie, -3.0‰ for Palm Valley and -7.1‰ for Dingo, suggest mixed inorganic and thermogenic nitrogen sources in all cases. Helium (He) abundance varies over a wide range, from a low of 0.17% to one of the world's highest at 9% (Mereenie av. 0.23%; Palm Valley av. 0.48%; Dingo av. 0.18%; Magee-1 6.2%; Mount Kitty-1 9.0%). Complementary helium isotopes (R/Ra = (³He/⁴He)sample / (³He/⁴He)air) range from 0.013 to 0.031 R/Ra, indicating a dominant crustal origin for helium with a sustained input of radiogenic ⁴He from the decomposition of U- and Th-bearing minerals, effectively diluting any original mantle helium input. The higher helium content in the non-produced gases compared with the shallower producing wells most likely reflects their stratigraphic position relative to the Tonian Bitter Springs Group, with the former below and the latter above an effective carbonate-salt seal.
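
Gas wetness as defined above (%∑C₂−C₅/∑C₁−C₅) reduces to a one-line calculation; the sketch below uses invented mole fractions purely to illustrate the formula, not the measured field compositions.

```python
# Gas wetness, wetness (%) = 100 * sum(C2..C5) / sum(C1..C5), applied to
# invented mole percentages that merely illustrate the formula.

def gas_wetness(c1: float, c2: float, c3: float, c4: float, c5: float) -> float:
    """Gas wetness in percent from C1-C5 hydrocarbon abundances (any consistent units)."""
    wet = c2 + c3 + c4 + c5
    return 100.0 * wet / (c1 + wet)

# Hypothetical composition (mol %) loosely resembling an oil-associated gas.
print(f"Wetness: {gas_wetness(c1=80.9, c2=12.0, c3=4.5, c4=1.8, c5=0.8):.1f} %")
```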

Keywords: Amadeus gas, thermogenic, abiogenic, C, H, N, He isotopes

Procedia PDF Downloads 172
80 Project Management and International Development: Competencies for International Assignment

Authors: M. P. Leroux, C. Coulombe

Abstract:

Projects are popular vehicles through which international aid is delivered in developing countries. To achieve their objectives, many northern organizations develop projects with local partner organizations in developing countries through technical assistance projects. International aid, and international development projects in particular, have long been criticized for poor results, although billions are spent every year. Little empirical research in the field of project management has focused on knowledge transfer in the international development context. This paper focuses particularly on the personal dimensions of international assignees participating in projects with local team members in the host country. We propose to explore the possible links with a human resource management perspective in order to shed light on the under-researched problem of knowledge transfer in development cooperation projects. The process leading to capacity building is highly complex, involves multiple dimensions and is far from linear; we therefore propose to assess whether traditional research on expatriates in multinational corporations pertains to the field of project management in developing countries. The following question is addressed: in the context of international development project cooperation, what personal determinants should the selection process focus on when looking to fill a technical assistance position in a developing country? To answer that question, we first reviewed the literature on expatriates in the context of inter-organizational knowledge transfer. Second, we proposed a theoretical framework combining perspectives from development studies and management to explore whether parallels can be drawn between traditional international assignments and technical assistance project assignments in developing countries. We conducted an exploratory study using case studies from technical assistance initiatives led in Haiti, a country in the Caribbean. Data were collected from multiple sources following qualitative research methods. Direct observations in the field were allowed by the local leaders of six organizations; individual interviews with present and past international assignees, individual interviews with local team members, and focus groups were organized in order to triangulate the information collected. Contrary to empirical research on knowledge transfer in multinational corporations, results tend to show that technical expertise ranks well behind many other characteristics. Results tend to show the importance of soft skills as a prerequisite to succeed in projects where local teams have to collaborate. More importantly, international assignees who spoke of knowledge sharing instead of knowledge transfer seemed to feel more satisfied at the end of their mandate than the others. Reciprocally, local team members who felt they had participated in a project with an expatriate looking to share rather than to transfer knowledge tended to describe the results of the project in more positive terms than the others. The results obtained from this exploratory study open the way for a promising research agenda in the field of project management. They emphasise the urgent need to achieve a better understanding of the complex set of soft skills that project managers or project chiefs would benefit from developing, in particular the ability to absorb knowledge and the willingness to share one's knowledge.

Keywords: international assignee, international project cooperation, knowledge transfer, soft skills

Procedia PDF Downloads 121
79 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction

Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal

Abstract:

Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as the lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes, each carrying a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, time, and across variables from different models. This is the basic concept behind the multi-model superensemble, which comprises a training and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least-squares minimization via a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry the highest skill compared with the simple ensemble mean, the bias-corrected ensemble mean and the best of the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, mean sea level pressure, etc., in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Centers for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada) and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), which is one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models are selected from the participating member models at each grid point and for each forecast step in the training period. A multi-model superensemble based on training with similar conditions is also discussed in the present study; it is based on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods available in the literature that incorporate a 'neighborhood' around each grid point to allow for spatial error or uncertainty have also been tested with the above-mentioned approaches. The comparison of these schemes with respect to the observations verifies that the newly developed approaches provide a more unified and skillful prediction of summer monsoon (viz. June to September) rainfall compared with the conventional multi-model approach and the member models.
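
A minimal sketch of the superensemble training and forecast phases described above, assuming synthetic data at a single grid point: member-model anomalies are regressed on observed anomalies by least squares to obtain the weights, which are then applied to new forecasts. This illustrates the general scheme only, not the authors' TIGGE processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the training phase at one grid point: "truth" is the
# observed rainfall, and each member-model forecast is truth plus its own error.
n_train, n_models = 120, 5
truth = rng.gamma(shape=2.0, scale=4.0, size=n_train)
members_train = truth[:, None] + rng.normal(0.0, 3.0, (n_train, n_models))

obs_mean = truth.mean()
model_means = members_train.mean(axis=0)
X_train = members_train - model_means        # member-model anomalies
y_train = truth - obs_mean                   # observed anomalies

# Training phase: least-squares weights from the multiple regression of the
# observed anomalies on the member anomalies.
weights, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Forecast phase: apply the trained weights to new member forecasts.
truth_fcst = rng.gamma(shape=2.0, scale=4.0, size=10)
members_fcst = truth_fcst[:, None] + rng.normal(0.0, 3.0, (10, n_models))
superensemble = obs_mean + (members_fcst - model_means) @ weights
ensemble_mean = members_fcst.mean(axis=1)

rmse = lambda pred: np.sqrt(np.mean((pred - truth_fcst) ** 2))
print("weights:", np.round(weights, 3))
print(f"superensemble RMSE: {rmse(superensemble):.2f}  ensemble-mean RMSE: {rmse(ensemble_mean):.2f}")
```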

Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction

Procedia PDF Downloads 114
78 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College

Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa

Abstract:

This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal-axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adopted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by the work on small wind turbine design by David Wood. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data obtained by correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) for engineering optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best-performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of the relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that would optimize the coefficient-of-power profile of the turbine. Our approach improves upon traditional blade design methods in that it lets us dispense with assumptions necessary to simplify the system of Blade Element Momentum Theory equations, thus resulting in more accurate aerodynamic performance calculations. Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high-glide-ratio airfoil designs, without having to rely upon available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, get them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted in the form of classroom research activities to enhance Calculus sequence instruction at LaGCC.
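
As an illustration of the wind-modeling step described above, the sketch below fits several candidate parametric distributions to synthetic wind-speed data by maximum likelihood and selects the best-fitting one with a simple AIC comparison. The candidate set and data are assumptions for demonstration, not the project's actual measurements or protocol.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for rooftop wind-speed measurements (m/s).
wind_speed = stats.weibull_min.rvs(c=2.1, scale=6.0, size=2000, random_state=42)

# Candidate parametric models, fitted by maximum likelihood with the location fixed at 0.
candidates = {
    "weibull": stats.weibull_min,
    "rayleigh": stats.rayleigh,
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(wind_speed, floc=0)
    loglik = np.sum(dist.logpdf(wind_speed, *params))
    # Rough AIC; the parameter count here includes the fixed location term.
    results[name] = (2 * len(params) - 2 * loglik, params)

best = min(results, key=lambda k: results[k][0])
print({name: round(aic, 1) for name, (aic, _) in results.items()})
print("best-fitting model:", best, "parameters:", np.round(results[best][1], 3))
```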

Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling

Procedia PDF Downloads 202
77 Towards an Effective Approach for Modelling Near-Surface Air Temperature Combining Weather and Satellite Data

Authors: Nicola Colaninno, Eugenio Morello

Abstract:

The urban environment affects local-to-global climate and, in turn, suffers from global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. The physical and morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, the spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions of an interpolated point. Quantifying local UHI for extensive areas based on weather station observations only is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST. Data from Landsat, ASTER, or MODIS have been extensively used. Indeed, LST has an indirect but significant influence on air temperatures. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to enable NSAT estimation by accounting for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations and satellite-derived LST. The approach is structured in two main steps. First, a GWR model has been set up to estimate NSAT at low resolution by combining air temperature from discrete observations retrieved by weather stations (dependent variable) and the LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 kilometer of spatial resolution, have been employed. Two time periods are considered according to the satellite overpass times, i.e., 10:30 am and 9:30 pm. Afterward, the results have been downscaled to 30 meters of spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable), the multispectral information provided by the Landsat mission, in particular the albedo, and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 meters. Albedo and DEM are now the predictors. The area under investigation is the Metropolitan City of Milan, which covers an area of approximately 1,575 km² and encompasses a population of over 3 million inhabitants. Both models, low-resolution (1 km) and high-resolution (30 meters), have been validated according to a cross-validation that relies on indicators such as R², Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). All the employed indicators give evidence of well-performing models. In addition, an alternative network of weather stations, available for the City of Milano only, has been employed for testing the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
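
A minimal sketch of the first modelling step, assuming synthetic station data: at a given location, air temperature is regressed on LST with Gaussian distance-decay weights, which is the core of a GWR calibration. The bandwidth, coordinates and values are invented for illustration and do not reproduce the authors' MODIS/Landsat workflow.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stations: coordinates (m), a MODIS-like LST predictor (deg C) and an
# air temperature whose relation to LST drifts across space (non-stationarity).
n_stations = 60
xy = rng.uniform(0, 50_000, size=(n_stations, 2))
lst = rng.normal(30.0, 4.0, n_stations)
true_slope = 0.6 + 0.2 * (xy[:, 0] / 50_000)
air_t = 5.0 + true_slope * lst + rng.normal(0.0, 0.5, n_stations)

def gwr_local_fit(x0y0: np.ndarray, bandwidth: float = 15_000.0) -> np.ndarray:
    """Locally weighted regression of air temperature on LST at location x0y0."""
    d = np.linalg.norm(xy - x0y0, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)                 # Gaussian kernel weights
    X = np.column_stack([np.ones(n_stations), lst])
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ air_t)    # [local intercept, slope]

point = np.array([25_000.0, 25_000.0])
beta = gwr_local_fit(point)
nearest_lst = lst[np.argmin(np.linalg.norm(xy - point, axis=1))]
print("local intercept/slope:", np.round(beta, 3),
      "predicted NSAT:", round(beta[0] + beta[1] * nearest_lst, 2))
```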

Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing

Procedia PDF Downloads 171
76 A Semi-supervised Classification Approach for Trend Following Investment Strategy

Authors: Rodrigo Arnaldo Scarpel

Abstract:

Trend following is a widely accepted investment strategy that adopts a rule-based trading mechanism rather than striving to predict market direction or relying on information gathering to decide when to buy and when to sell a stock. Thus, in trend following one must respond to market movements that have recently happened and to what is currently happening, rather than to what will happen. The optimum in a trend following strategy is to catch a bull market at its early stage, ride the trend, and liquidate the position at the first evidence of the subsequent bear market. To apply the trend following strategy, one needs to find the trend and identify trade signals. In order to avoid false signals, i.e., to identify short-, mid- and long-term fluctuations and to separate noise from real changes in the trend, most academic works rely on moving averages and other technical analysis indicators, such as the moving average convergence divergence (MACD) and the relative strength index (RSI), to uncover intelligible stock trading rules that follow the trend following philosophy. Recently, some works have applied machine learning techniques for trading rule discovery. In those works, the process of rule construction is based on evolutionary learning, which aims to adapt the rules to the current environment and searches for the globally optimal rules in the search space. In this work, instead of focusing on the use of machine learning techniques for creating trading rules, a time series trend classification employing a semi-supervised approach was used to identify early both the beginning and the end of upward and downward trends. Such a classification model can be employed to identify trade signals, and the decision-making procedure is that if an up-trend (down-trend) is identified, a buy (sell) signal is generated. Semi-supervised learning is used for model training when only part of the data is labeled, and semi-supervised classification aims to train a classifier from both the labeled and unlabeled data such that it is better than the supervised classifier trained only on the labeled data. To illustrate the proposed approach, daily trade information was employed, including the open, high, low and closing values and volume of the São Paulo Exchange Composite index (IBOVESPA) from January 1, 2000 to December 31, 2022. Throughout this time period, consistent changes in price, upwards or downwards, were visually identified for assigning labels, leaving the rest of the days (when there is no consistent change in price) unlabeled. For training the classification model, a pseudo-label semi-supervised learning strategy was used, employing different technical analysis indicators. In this learning strategy, the core is to use unlabeled data to generate pseudo-labels for supervised training. To evaluate the achieved results, the annualized return and excess return and the Sortino and Sharpe ratios were considered. Over the evaluated time period, the obtained results were very consistent and can be considered promising for generating the intended trading signals.
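
A minimal sketch of the pseudo-label strategy described above, assuming a synthetic price series and simple moving-average features in place of the study's MACD/RSI-style indicators: a classifier is trained on the labeled days, its confident predictions on unlabeled days become pseudo-labels, and the model is retrained on the enlarged set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)

# Synthetic daily closing prices standing in for the index series.
n = 1500
close = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.012, n)))

def sma(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

# Simple moving-average features (stand-ins for MACD/RSI-style indicators).
feat = np.column_stack([
    close / sma(close, 20) - 1.0,       # distance to short moving average
    close / sma(close, 100) - 1.0,      # distance to long moving average
    np.gradient(sma(close, 20)),        # short-trend slope
])

# Labels only where a consistent move occurs over the next 30 days (a crude
# stand-in for the visual labeling described above); -1 marks unlabeled days.
horizon, thresh = 30, 0.05
fwd = np.roll(close, -horizon) / close - 1.0
y = np.full(n, -1)
y[fwd > thresh] = 1                     # up-trend
y[fwd < -thresh] = 0                    # down-trend
y[-horizon:] = -1                       # no forward window at the series end

labeled = y != -1
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(feat[labeled], y[labeled])

# Pseudo-labeling: confident predictions on unlabeled days become labels, then retrain.
proba = clf.predict_proba(feat[~labeled])
confident = proba.max(axis=1) > 0.8
pseudo_y = y.copy()
pseudo_y[np.where(~labeled)[0][confident]] = proba[confident].argmax(axis=1)

final_clf = RandomForestClassifier(n_estimators=200, random_state=0)
final_clf.fit(feat[pseudo_y != -1], pseudo_y[pseudo_y != -1])
print(f"labeled days: {labeled.sum()}, pseudo-labeled days added: {int(confident.sum())}")
```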

Keywords: evolutionary learning, semi-supervised classification, time series data, trading signals generation

Procedia PDF Downloads 60
75 Enhancing Police Accountability through the Malawi Independent Police Complaints Commission: Prospects and Challenges That Lie Ahead

Authors: Esther Gumboh

Abstract:

The police play a critical role in society and are an integral aspect of the rule of law. Equally, respect for human rights is an integral part of professional policing. In view of the vast powers that the police enjoy and the attendant risk of abuse and resulting human rights violations, the need for police accountability and civilian police oversight is internationally and regionally recognised. Policing oversight springs from the duty to investigate human rights violations. Those implicated in perpetrating or covering up violations must be disciplined or prosecuted to ensure effective accountability. Police accountability is particularly important in Malawi given the dark history of policing in the country during the 30-year dictatorial era under President Kamuzu Banda. Described as one of the most repressive regimes in Africa, the Banda administration was characterised by gross state-sponsored violence, repressive policing and human rights violations. Indeed, the police were involved in various forms of human rights abuse including arbitrary arrests and unlawful detentions, torture, and excessive use of force in conducting arrests and public order policing. This situation flourished within a culture of police impunity bolstered in part by the absence of clear oversight mechanisms for police accountability. In turn, there was immense public mistrust of the police. Unsurprisingly, the criminal justice system was one of the priority areas for reform when Malawi adopted its first democratic Constitution in 1994. Section 153 of the Constitution envisions a police service that is, for all intents and purposes, there to provide for the protection of public safety and the rights of persons in Malawi according to the prescriptions of the Constitution and any other law. This position reflects the view that the duty to protect and promote human rights is not incompatible with effective policing. Despite this, the police continue to engage in questionable behaviour in public order policing, excessive use of force, deaths in police custody, ill-treatment, torture and other forms of abuse including sexual abuse. Perpetrators of abuses are occasionally punished, but investigations are often delayed, abandoned, or remain inconclusive. Police accountability remains largely elusive. Commendably, the law does subject the police to significant oversight both internally and externally. However, until 2010, Malawi lacked a wholly independent civilian oversight mechanism specifically mandated to monitor the activities of the Malawi Police Service and hold it accountable. This void has since been filled by the Independent Complaints Commission established under the Police Act. This is a positive development that reiterates Malawi's commitment to the investigation of human rights violations by the police and to ending police impunity. This contribution examines the legal framework of the Commission in order to project its effectiveness. While the framework looks promising on various fronts, there are potential challenges that lie ahead. Malawi must pre-emptively deal with these challenges carefully if the Commission is to have any practical significance in transforming police accountability in the country. Drawing on lessons from other jurisdictions like South Africa, the paper makes recommendations for legislative reform to strengthen the Commission's framework.

Keywords: civilian policing oversight, Malawi, police, police accountability, policing, policing oversight

Procedia PDF Downloads 202
74 Owning (up to) the 'Art of the Insane': Re-Claiming Personhood through Copyright Law

Authors: Mathilde Pavis

Abstract:

From Schumann to Van Gogh, Frida Kahlo, and Ray Charles, the stories narrating the careers of artists with physical or mental disabilities are becoming increasingly popular. From the emergence of 'pathography' at the end of the 18th century to cinematographic portrayals, the work and lives of differently-abled creative individuals continue to fascinate readers, spectators and researchers. The achievements of those artists form the tip of an iceberg composed of complex politico-cultural movements which continue to advocate for wider recognition of disabled artists' contribution to western culture. This paper envisages copyright law as a potential tool to that end. It investigates the array of rights available to artists with intellectual disabilities to assert their position as authors of their artwork in the twenty-first century, looking at international and national copyright laws (UK and US). Put simply, this paper questions whether an artist's intellectual disability could be a barrier to asserting their intellectual property rights over their creation. From a legal perspective, basic principles of non-discrimination would contradict the representation of artists' disability as an obstacle to authorship as granted by intellectual property laws. Yet empirical studies reveal that artists with intellectual disabilities are often denied the opportunity to exercise their intellectual property rights or any form of agency over their work. In practice, it appears that, unlike other non-disabled artists, the prospect for differently-abled creators to make use of their rights is contingent on the context in which the creative process takes place. Often, the management of such rights will rest with the institution, art therapist or mediator involved in the artists' work, as the latter will have necessitated greater support than their non-disabled peers for a variety of reasons, either medical or practical. Moreover, the financial setbacks suffered by medical institutions and private therapy practices have renewed administrators' and physicians' interest in monetising the artworks produced under their supervision. Adding to those economic incentives, the rise of criminal and civil litigation in psychiatric cases has also encouraged the retention of patients' work by therapists who feel compelled to keep comprehensive medical records to shield themselves from liability in the event of a lawsuit. Unspoken transactions, contracts, implied agreements and consent forms have thus progressively made their way into the relationship between those artists and their therapists or assistants, disregarding any notion of copyright. The question of artists' authorship finds itself caught in an unusually multi-faceted web of issues formed by tightening purse strings, ethical concerns and the fear of civil or criminal liability. Whilst those issues are playing out behind closed doors, the popularity of what was once called the 'Art of the Insane' continues to grow and open new commercial avenues. This socio-economic context exacerbates the need to devise a legal framework able to help practitioners, artists and their advocates navigate those issues in such a way that neither this minority nor our cultural heritage suffers from the fragmentation of the legal protection available to them.

Keywords: authorship, copyright law, intellectual disabilities, art therapy and mediation

Procedia PDF Downloads 131
73 Video Club as a Pedagogical Tool to Shift Teachers’ Image of the Child

Authors: Allison Tucker, Carolyn Clarke, Erin Keith

Abstract:

Introduction: In education, the determination to uncover privileged practices requires critical reflection to be placed at the center of both pre-service and in-service teacher education. Confronting deficit thinking about children's abilities and shifting to holding an image of the child as capable and competent is necessary for teachers to engage in responsive pedagogy that meets children where they are in their learning and builds on strengths. This paper explores the ways in which early elementary teachers' perceptions of the assets of children might shift through the pedagogical use of video clubs. Video club is a pedagogical practice whereby teachers record and view short videos with the intended purpose of deepening their practices. The use of video club as a learning tool has been extensively documented. In this study, a video club is used to watch short recordings of children at play in order to identify the assets of the students. Methodology: The study on which this paper is based asks the question: What are the ways in which teachers' image of the child and teaching practices evolve through the use of video club focused on the strengths of children demonstrated during play? Using critical reflection, it aims to identify and describe participants' experiences of examining their personally held image of the child through the pedagogical tool of video club, and how that image influences their practices, specifically in implementing play pedagogy. Teachers enrolled in a graduate-level play pedagogy course record and watch videos of their own students as a means to notice and reflect on the learning that happens during play. Using a co-constructed viewing protocol, teachers identify student strengths and consider their pedagogical responses. Video club provides a framework for teachers to critically reflect in action, return to the video to rewatch the children or themselves, and discuss their noticings with colleagues. Critical reflection occurs when there is focused attention on identifying the ways in which actions perpetuate or challenge issues of inherent power in education. When the image of the child held by the teacher is from a deficit position and is influenced by hegemonic dimensions of practice, critical reflection is essential in naming and addressing power imbalances, biases, and practices that are harmful to children and become barriers to their thriving. The data comprise teacher reflections, analyzed using phenomenology. Phenomenology seeks to understand and appreciate how individuals make sense of their experiences. Teacher reflections are individually read, and researchers determine pools of meaning. Categories are identified by each researcher, after which commonalities are named through a recursive process of returning to the data until no more themes emerge or saturation is reached. Findings: The final analysis and interpretation of the data are forthcoming. However, emergent analysis of the data collected through teacher reflections reveals the ways in which the use of video club grew teachers' awareness of their image of the child. It shows video club to be a promising pedagogical tool when used with in-service teachers to prompt opportunities for play and to challenge deficit thinking about children and their abilities to thrive in learning.

Keywords: asset-based teaching, critical reflection, image of the child, video club

Procedia PDF Downloads 76
72 Removal of VOCs from Gas Streams with Double Perovskite-Type Catalyst

Authors: Kuan Lun Pan, Moo Been Chang

Abstract:

Volatile organic compounds (VOCs) are among the major air contaminants, and they can react with nitrogen oxides (NOx) in the atmosphere to form ozone (O3) and peroxyacetyl nitrate (PAN) under solar irradiation, leading to environmental hazards. In addition, some VOCs are toxic at low concentration levels and cause adverse effects on human health. How to effectively reduce VOC emissions has therefore become an important issue. Thermal catalysis is regarded as an effective route for VOC removal because it provides an oxidation pathway to convert VOCs into carbon dioxide (CO2) and water (H2O(g)). Single perovskite-type catalysts are promising for VOC removal and have good potential to replace noble metals owing to their good activity and high thermal stability. Single perovskites can generally be described as ABO3 or A2BO4, where the A-site is often a rare earth element or an alkaline earth metal. Typically, the B-site is a transition metal cation (Fe, Cu, Ni, Co, or Mn). The catalytic properties of perovskites mainly rely on the nature, oxidation states and arrangement of the B-site cation. Interestingly, single perovskites can be further synthesized to form double perovskite-type catalysts, which can simply be represented as A2B'B"O6. Likewise, the A-site stands for an alkaline earth metal or rare earth element, and B' and B" are transition metals. Double perovskites possess unique surface properties. In structure, the B-site forms a three-dimensional ordered arrangement in which B'O6 and B"O6 octahedra alternate and corner-share along the three directions of the crystal lattice, while the A-site cations occupy the voids between the octahedra. This specific alternating B-site arrangement has attracted considerable attention. Therefore, double perovskites may offer more compositional variation than single perovskites, and this greater variation may promote catalytic performance. It is expected that the activity of double perovskites toward VOC removal is higher than that of single perovskites. In this study, a double perovskite-type catalyst (La2CoMnO6) is prepared and evaluated for VOC removal. Single perovskites, including LaCoO3 and LaMnO3, are also tested for comparison. Toluene (C7H8) is one of the important VOCs commonly applied in chemical processes. In addition to its wide application, C7H8 has high toxicity at low concentration. Therefore, C7H8 is selected as the target compound in this study. Experimental results indicate that the double perovskite (La2CoMnO6) has better activity than the single perovskites. Notably, C7H8 can be completely oxidized to CO2 at 300°C when La2CoMnO6 is applied. Characterization of the catalysts indicates that the double perovskite has unique surface properties and a higher amount of lattice oxygen, leading to higher activity. In durability tests, La2CoMnO6 maintains a high C7H8 removal efficiency of 100% at 300°C and 30,000 h⁻¹, and it also shows good resistance to CO2 (5%) and H2O(g) (5%) in the gas streams tested. For the various VOCs tested, including isopropyl alcohol (C3H8O), ethanal (C2H4O), and ethylene (C2H4), efficiencies as high as 100% could be achieved with the double perovskite-type catalyst operated at 300°C, indicating that double perovskites are promising catalysts for VOC removal; possible mechanisms will be elucidated in this paper.
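
As a small numerical aside, the removal efficiency and the nominal residence time implied by the reported space velocity follow from two simple relations; the concentrations in the sketch below are illustrative placeholders, not measured values.

```python
# Two simple relations behind the figures quoted above: removal efficiency from
# inlet/outlet concentrations, and the nominal residence time implied by a gas
# hourly space velocity (GHSV).

def removal_efficiency(c_in: float, c_out: float) -> float:
    """Removal efficiency in percent from inlet and outlet concentrations."""
    return 100.0 * (c_in - c_out) / c_in

def residence_time_s(ghsv_per_hour: float) -> float:
    """Nominal gas residence time (s) for a given GHSV in bed volumes per hour."""
    return 3600.0 / ghsv_per_hour

print(f"Removal efficiency: {removal_efficiency(200.0, 0.0):.1f} %")
print(f"Residence time at 30,000 h^-1: {residence_time_s(30_000):.3f} s")
```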

Keywords: volatile organic compounds, Toluene (C7H8), double perovskite-type catalyst, catalysis

Procedia PDF Downloads 138
71 Valuing Academic Excellence in Higher Education: The Case of Establishing a Human Development Unit in a European Start-up University

Authors: Eleftheria Atta, Yianna Vovides, Marios Katsioloudes

Abstract:

In the fusion of neoliberalism and globalization, Higher Education (HE) is becoming increasingly complex. The changing patterns of the economy worldwide have driven the development of a high value-added economy, and HE has been viewed as a social investment, significant for the development of knowledge-based societies and economies. In order to contribute to economic competitiveness, universities are required to produce local and employable workers who fit into the neoliberal economic environment. The emergence of neoliberal performativity, which measures outcomes, is a key aspect of the neoliberal era. It facilitates the redesign of institutions, making organizations and individuals think about themselves in relation to their performance. Performativity and performance management systems lead academics to become more effective, to advance professionally, to improve and to become better than others, and therefore to act competitively. Besides the aforementioned complexities, universities also encounter the challenge of maintaining a set of values to guide the institution's actions, values which have always been highly respected in developing an HE institution. The formulation of a clear set of values also determines the institutional culture which will be maintained. It is evident that values create a significant framework for the workplace and may determine positive institutional results. Universities are required to engage in capacity-building activities which will improve their students' competence as well as offer opportunities for administrative and academic staff to develop professionally in light of neoliberal performativity. Additionally, the university is now considered an innovation ecosystem, playing a significant role in providing education, research and innovation to help create solutions to meet social, environmental and economic challenges. Thus, universities become central in orchestrating multi-actor innovation networks. This presentation will discuss the establishment of an institutional unit entitled the 'Human Development Unit' (HDU) in a European start-up university. The activities of the HDU are envisioned as drivers for innovation that would enable the university as a whole to maintain its position in a fast-changing world and be ready to face adaptive challenges. In addition, the HDU provides students, staff, and faculty with opportunities to advance their academic and professional development through engagement in programs that align with institutional values. It also serves as a connector with the broader community. The presentation will highlight the functions of the three centers which the unit will coordinate, namely the Student Development Center (SDC), the Faculty & Staff Development Center (FSDC) and the Continuing Education Center (CEC). The presentation aligns with the aim of the conference, which welcomes presentations discussing innovations and challenges encountered in HE. In particular, this presentation seeks to discuss the establishment of an innovative unit at a start-up university which will contribute to creating an institutional culture shaped by the value of academic excellence for students as well as for staff, a value that shapes and defines the functions and activities of the unit. The establishment of the proposed unit is crucial in a start-up university, both to differentiate it from other competitors and to sustain its presence given the pressures of the neoliberal HE context.

Keywords: academic excellence, globalization, human development unit, neoliberalism

Procedia PDF Downloads 113