Search results for: value capture
559 Positive Effect of Manipulated Virtual Kinematic Intervention in Individuals with Traumatic Stiff Shoulder: Pilot Study
Authors: Isabella Schwartz, Ori Safran, Naama Karniel, Michal Abel, Adina Berko, Martin Seyres, Tamir Tsoar, Sigal Portnoy
Abstract:
Virtual Reality makes it possible to manipulate the patient’s perception, thereby providing a motivational addition to real-time biofeedback exercises. We aimed to test the effect of a manipulated virtual kinematic intervention on measures of active and passive Range of Motion (ROM), pain, and disability level in individuals with traumatic stiff shoulder. In a double-blinded study, patients with stiff shoulder following proximal humerus fracture and non-operative treatment were randomly divided into a non-manipulated feedback group (NM-group; N=6) and a manipulated feedback group (M-group; N=7). The shoulder ROM, pain, and the Disabilities of the Arm, Shoulder and Hand (DASH) scores were tested at baseline and after the 6 sessions, during which the subjects performed shoulder flexion and abduction in front of a graphic visualization of the shoulder angle. The biofeedback provided to the NM-group was the actual shoulder angle, while the feedback provided to the M-group was manipulated so that 10° were constantly subtracted from the actual angle detected by the motion capture system. The M-group showed greater improvement in the active flexion ROM, with a median and interquartile range of 197.1 (140.5-425.0) compared to 142.5 (139.1-151.3) for the NM-group (p=.046). The M-group also showed greater improvement in the DASH scores, with a median and interquartile range of 67.7 (52.8-86.2) compared to 89.7 (83.8-98.3) for the NM-group (p=.022). Manipulated intervention is beneficial in individuals with traumatic stiff shoulder and should be further tested in other populations with orthopedic injuries.
Keywords: virtual reality, biofeedback, shoulder pain, range of motion
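The manipulation at the heart of this protocol is simple enough to state in code. Below is a minimal sketch, assuming a motion-capture angle stream; the function name and group labels are chosen for illustration rather than taken from the study's software.

```python
def displayed_angle(actual_angle_deg: float, group: str) -> float:
    """Return the shoulder angle shown to the patient as real-time biofeedback.

    The NM-group sees the true angle; the M-group sees a value with 10 degrees
    constantly subtracted, as described in the abstract.
    """
    return actual_angle_deg - 10.0 if group == "M" else actual_angle_deg

# example: a measured flexion angle of 95 degrees is displayed as 85 to the M-group
print(displayed_angle(95.0, "M"))  # 85.0
```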
Procedia PDF Downloads 125
558 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis
Authors: Meng Su
Abstract:
High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.
Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis
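For the first ingredient, here is a minimal sketch of a diffusion-map embedding computed from the eigenvectors of a normalized kernel (graph Laplacian) operator, as described above; it is a generic illustration rather than the authors' implementation, and it omits the DPM component entirely.

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2, t=1):
    """Minimal diffusion-map embedding of the rows of X (n_samples x n_features)."""
    # Gaussian kernel on pairwise squared distances
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / eps)
    # symmetric normalization of the random-walk operator on the graph
    d = K.sum(axis=1)
    A = K / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(A)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1]             # sort descending
    vals, vecs = vals[idx], vecs[:, idx]
    psi = vecs / np.sqrt(d)[:, None]         # right eigenvectors of the Markov matrix
    # skip the trivial constant eigenvector; scale coordinates by eigenvalues^t
    return (vals[1:n_components + 1] ** t) * psi[:, 1:n_components + 1]

# toy usage: embed 200 noisy points on a circle into 2 diffusion coordinates
theta = np.random.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * np.random.randn(200, 2)
Y = diffusion_map(X, eps=0.5)
print(Y.shape)  # (200, 2)
```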
Procedia PDF Downloads 111
557 Efficient Chess Board Representation: A Space-Efficient Protocol
Authors: Raghava Dhanya, Shashank S.
Abstract:
This paper delves into the intersection of chess and computer science, specifically focusing on the efficient representation of chess game states. We propose two methods: the Static Method and the Dynamic Method, each offering unique advantages in terms of space efficiency and computational complexity. The Static Method aims to represent the game state using a fixed-length encoding, allocating 192 bits to capture the positions of all pieces on the board. This method introduces a protocol for ordering and encoding piece positions, ensuring efficient storage and retrieval. However, it faces challenges in representing pieces no longer in play. In contrast, the Dynamic Method adapts to the evolving game state by dynamically adjusting the encoding length based on the number of pieces in play. By incorporating Alive Bits for each piece kind, this method achieves greater flexibility and space efficiency. Additionally, it includes provisions for encoding additional game state information such as castling rights and en passant squares. Our findings demonstrate that the Dynamic Method offers superior space efficiency compared to traditional Forsyth-Edwards Notation (FEN), particularly as the game progresses and pieces are captured. However, it comes with increased complexity in encoding and decoding processes. In conclusion, this study provides insights into optimizing the representation of chess game states, offering potential applications in chess engines, game databases, and artificial intelligence research. The proposed methods offer a balance between space efficiency and computational overhead, paving the way for further advancements in the field.
Keywords: chess, optimisation, encoding, bit manipulation
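To make the space arithmetic concrete, here is a minimal sketch of the fixed-length idea described above: 32 pieces in a fixed order, 6 bits per piece for its square, 192 bits in total. The piece ordering and the assumption that all 32 pieces remain on the board are illustrative simplifications (the abstract notes that captured pieces are exactly where the Static Method struggles); this is not the paper's exact protocol.

```python
# squares indexed 0..63 (a1=0 ... h8=63); fixed ordering: white pieces then black
START_SQUARES = list(range(0, 16)) + list(range(48, 64))  # the starting position

def encode_static(squares):
    """Pack 32 square indices (one per piece, fixed order) into a 192-bit integer."""
    assert len(squares) == 32
    code = 0
    for sq in squares:
        code = (code << 6) | (sq & 0x3F)   # 6 bits per piece
    return code                             # fits in 32 * 6 = 192 bits

def decode_static(code):
    squares = []
    for _ in range(32):
        squares.append(code & 0x3F)
        code >>= 6
    return squares[::-1]                    # restore the original fixed order

packed = encode_static(START_SQUARES)
print(packed.bit_length() <= 192, decode_static(packed) == START_SQUARES)  # True True
```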
Procedia PDF Downloads 50
556 Planning for Sustainability in the Built Environment
Authors: Adedayo Jeremiah Adeyekun, Samuel Oluwagbemiga Ishola
Abstract:
This paper aimed to identify the significance of sustainability in the built environment and its economic and environmental importance to building and construction projects. Sustainability in the built environment has been a key objective of research over the past several decades. Sustainability in the built environment requires reconciliation between the economic, environmental and social impacts of design and planning decisions made during the life cycle of a project from inception to termination. Planning for sustainability in the built environment requires us to go beyond our individual disciplines to consider the variety of economic, social and environmental impacts of our decisions in the long term. A decision to build a green residential development in an isolated location may pass some of the tests of sustainability through its reduction in stormwater runoff, energy efficiency, and ecological sustainability in the building, but it may fail to be sustainable from a transportation perspective. Sustainability is important to the planning, design, construction, and preservation of the built environment because it helps these activities reflect multiple values and considerations. In fact, the arts and sciences of the built environment have traditionally integrated values and fostered creative expression, capabilities that can and should lead the sustainability movement as society seeks ways to live in dynamic balance with its own diverse needs and the natural world. This research aimed to capture the state of the art in the development of innovative sustainable design and planning strategies for building and construction projects. Therefore, there is a need for a holistic selection and implementation approach for identifying potential sustainable strategies applicable to a particular project and evaluating the overall life cycle impact of each alternative by accounting for different applicable impacts and making the final selection among various viable alternatives.
Keywords: sustainability, built environment, planning, design, construction
Procedia PDF Downloads 177
555 Managerial Encouragement, Organizational Encouragement, and Resource Sufficiency and Its Effect on Creativity as Perceived by Architects in Metro Manila
Authors: Ferdinand de la Paz
Abstract:
In highly creative environments such as the business of architecture, business models exhibit more focus on the traditional practice of mainstream design consultancy services as mandated and constrained by existing legislation. Architectural design firms, as business units belonging to the creative industries, have long been provoked to innovate not only in terms of their creative outputs but, more significantly, in the way they create and capture value from what they do. In the Philippines, there is still a dearth of studies exploring organizational creativity within the context of architectural firm practice, let alone across other creative industries. The study sought to determine the effects, measure the extent, and assess the relationships of managerial encouragement, organizational encouragement, and resource sufficiency on creativity as perceived by architects. A survey questionnaire was used to gather data from 100 respondents. The analysis was done using descriptive statistics and correlational and causal-explanatory methods. The findings reveal that there is a weak positive relationship of Managerial Encouragement (ME), Organizational Encouragement (OE), and Sufficient Resources (SR) with Creativity (C). The study also revealed that while Organizational Encouragement and Sufficient Resources have significant effects on Creativity, Managerial Encouragement does not. It is recommended that future studies with a larger sample size be pursued among architects holding top management positions in architectural design firms to further validate the findings of this research. It is also highly recommended that the other stimulant scales in the KEYS framework be considered in future studies covering other locales to generate a better understanding of the architecture business landscape in the Philippines.
Keywords: managerial encouragement, organizational encouragement, resource sufficiency, organizational creativity, architecture firm practice, creative industries
Procedia PDF Downloads 90
554 A Study on Employer Branding and Its Impact on Employee
Authors: Kvnkc Sharma
Abstract:
Globalization, coupled with an increase in competition, is compelling organizations to adopt innovative strategies and identify core competencies in order to distinguish themselves from the competition. The capability of an organization is no longer determined by their products or services alone. The intellectual assets and quality of the human resource are fast emerging as key differentiators. Corporations are now positioning themselves as ‘brands’ not solely to market their products and services, but also to lure and to retain the best talent in the business. This paper identifies leadership as the ‘key element’ in developing an organization’s brand, which has a significant influence on the employee’s eventual perception of this external brand as portrayed by the organization. External branding incorporates innovation, consumer concern, trust, quality and sustainability. The paper contends that employees are indeed an organization’s ‘brand ambassadors’. Internal branding involves taking care of these ambassadors of the corporate brand, i.e., the human resource. If employees of an organization are not exposed to the organization’s branding (an ongoing process that functionally aligns, motivates and empowers employees at all levels to consistently provide a satisfying customer experience), the external brand could be jeopardized. Internal branding, on the other hand, refers to the employee’s perception of the organization’s brand. The current business environment can, at best, be termed volatile. Employees with the right technical and behavioral skills remain a scarce resource, and employers need to be ready to capture the attention, interest and commitment of the best and brightest candidates. This paper attempts to review and understand the relationship between employer branding and employee retention. The paper also seeks to identify the potential impact of employer branding across all the factors affecting employees.
Keywords: external branding, human resource, internal branding, leadership
Procedia PDF Downloads 251
553 Numerical Simulation of Two-Phase Flows Using a Pressure-Based Solver
Authors: Lei Zhang, Jean-Michel Ghidaglia, Anela Kumbaro
Abstract:
This work focuses on the numerical simulation of two-phase flows based on the bi-fluid six-equation model widely used in many industrial areas, such as nuclear power plant safety analysis. A pressure-based numerical method is adopted in our studies because in two-phase flows it is common to have a large range of Mach numbers owing to the mixture of liquid and gas, and density-based solvers experience stiffness problems as well as a loss of accuracy when approaching the low Mach number limit. This work extends the semi-implicit pressure solver in the nuclear component CUPID code, where the governing equations are solved on unstructured grids with co-located variables to accommodate complicated geometries. A conservative version of the solver is developed in order to exactly capture shocks in one-phase flows, and is extended to two-phase situations. An interfacial pressure term is added to the bi-fluid model to make the system hyperbolic and to establish a well-posed mathematical problem that will allow us to obtain convergent solutions with refined meshes. The ability of the numerical method to treat phase appearance and disappearance, as well as the behavior of the scheme at low Mach numbers, will be demonstrated through several numerical results. Finally, interfacial mass and heat transfer models are included to deal with situations when mass and energy transfer between phases is important, and associated industrial numerical benchmarks with tabulated EOS (equations of state) for fluids are performed.
Keywords: two-phase flows, numerical simulation, bi-fluid model, unstructured grids, phase appearance and disappearance
Procedia PDF Downloads 394
552 Indian Christian View of God: Exploring Its Trajectory in 20th Century
Authors: James Ponniah
Abstract:
Christianity is the largest religious tradition of the world. What makes Christianity a world religion is its characteristics of universality and particularity. Its universality and particularity are closely interrelated. Its universality is realized and embodied in its particularities, and its particularity is recognized and legitimized through its universality. This paper focuses on the dimension of the particularity of Christianity in that it looks at the particularized ideas and discourses of Christian thinking in India in the 20th century and pays attention to the differing shifts and new shades of meaning in the Indian Christian notion of God. Drawing upon the writings of select Indian theologians such as Brahmabandhab Upadhyaya, Sundar Sing, A.J Appasamy, Raymond Panikkar, Amalorpavadass and George Soares Prabhhu, this paper delves into how the contexts, be they personal, political, historical or ecclesial, bear upon the way Indian theologians have conceived and constructed the notion of God in their work. Focusing upon how they responded to the signs of their time through their theological narratives, the paper argues that the religion of Christianity can sustain its universality only when it translates its key notions, such as God, into indigenous categories and local idioms and thus makes itself relevant to the people among whom it is spread. The monotheistic God of Christianity has to accommodate a plurality of expressions if the Christian idea of God is to capture and convey everyone’s experience of God. The case of Indian Christianity then reveals that a monolithic world religion will be experienced and recognised as truly universal only when it sheds its homogeneity and assumes a heterogeneous portrait through the acquisition of local idioms. Allowing culturally diverse idioms to influence theological categories is not inconsequential to ‘accommodating differences and accepting diversities,’ an issue we encounter within and beyond religious domains in our contemporary times.
Keywords: concept of God, heterogeneity, Indian Christianity, indigenous categories
Procedia PDF Downloads 251
551 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies
Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey
Abstract:
Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the waters of the Earth become increasingly difficult to determine because of additional uncertainty related to anthropogenic emissions. According to the sixth Intergovernmental Panel on Climate Change (IPCC) Technical Paper, on Climate Change and Water, changes in the large-scale hydrological cycle have been related to an increase in the observed temperature over several decades. Although much previous research on the effect of climate change on hydrology provides a general picture of possible global hydrological change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Of the downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer resolution output based on atmospheric physics over a region using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is on the need for using statistical downscaling techniques for the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as input to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture and other hydrological variables of interest.
Keywords: climate change, downscaling, GCM, RCM
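As an illustration of the statistical downscaling step described above (an empirical relationship between GCM-simulated predictors and a station-scale predictand), here is a minimal sketch with synthetic data; the predictor choices and the linear model are assumptions for illustration, not the specific methods reviewed in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_months = 360
# predictors: e.g. GCM mean sea-level pressure, 850 hPa humidity, 2 m temperature
X_hist = rng.normal(size=(n_months, 3))
# predictand: observed station precipitation anomaly (synthetic here)
y_obs = 2.0 * X_hist[:, 1] - 0.5 * X_hist[:, 0] + rng.normal(scale=0.3, size=n_months)

model = LinearRegression().fit(X_hist, y_obs)     # calibrate on the historical period
X_scenario = rng.normal(size=(n_months, 3))       # GCM predictors for a future scenario
y_downscaled = model.predict(X_scenario)          # station-scale projection
print(model.coef_, y_downscaled[:5])
```

The downscaled series would then feed a hydrologic model to obtain streamflow, evapotranspiration, soil moisture and other variables of interest.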
Procedia PDF Downloads 408
550 Designing Agricultural Irrigation Systems Using Drone Technology and Geospatial Analysis
Authors: Yongqin Zhang, John Lett
Abstract:
Geospatial technologies have been increasingly used in agriculture for various applications and purposes in recent years. Unmanned aerial vehicles (drones) fit the needs of farmers in farming operations, from field spraying to grow cycles and crop health. In this research, we conducted a practical research project that used drone technology to design and map optimal locations and layouts of irrigation systems for agriculture farms. We flew a DJI Mavic 2 Pro drone to acquire aerial remote sensing images over two agriculture fields in Forest, Mississippi, in 2022. Flight plans were first designed to capture multiple high-resolution images via a 20-megapixel RGB camera mounted on the drone over the agriculture fields. The Drone Deploy web application was then utilized to develop the flight plans and to perform subsequent image processing and measurements. The images were orthorectified and processed to estimate the area of the fields and measure the locations of the water lines and sprinkler heads. Field measurements were conducted to measure the ground targets and validate the aerial measurements. Geospatial analysis and photogrammetric measurements were performed for the study area to determine the optimal layout and quantitative estimates for irrigation systems. We created maps and tabular estimates to demonstrate the locations, spacing, number, and layout of sprinkler heads and water lines to cover the agricultural fields. This research project provides scientific guidance to Mississippi farmers for precision agricultural irrigation practice.
Keywords: drone images, agriculture, irrigation, geospatial analysis, photogrammetric measurements
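As a rough illustration of the kind of layout estimate described above, the sketch below computes a square-grid sprinkler layout for a rectangular field from an assumed throw radius and overlap rule; the dimensions, spacing rule and function names are placeholders, not the project's actual geospatial workflow.

```python
import math

def sprinkler_layout(field_w_m, field_l_m, throw_radius_m, overlap=0.5):
    """Return (spacing, head positions, total lateral line length) for a square grid."""
    spacing = 2 * throw_radius_m * (1 - overlap / 2)   # head-to-head overlap rule (assumed)
    n_cols = math.ceil(field_w_m / spacing) + 1
    n_rows = math.ceil(field_l_m / spacing) + 1
    heads = [(c * spacing, r * spacing) for r in range(n_rows) for c in range(n_cols)]
    lateral_length = n_rows * field_w_m                # one lateral water line per row
    return spacing, heads, lateral_length

spacing, heads, lateral = sprinkler_layout(120.0, 200.0, 12.0)
print(f"spacing={spacing:.1f} m, heads={len(heads)}, lateral line={lateral:.0f} m")
```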
Procedia PDF Downloads 77
549 Geographic Information System (GIS) for Structural Typology of Buildings
Authors: Néstor Iván Rojas, Wilson Medina Sierra
Abstract:
The management of spatial information through a Geographic Information System (GIS) is described for some neighborhoods in the city of Tunja, in relation to the structural typology of the buildings. The use of GIS provides tools that facilitate the capture, processing, analysis and dissemination of cartographic information and the quality evaluation of the classification of buildings, and it allows the development of a method that unifies and standardizes information processes. The project aims to generate a geographic database that is useful to the entities responsible for planning, disaster prevention and the care of vulnerable populations; it also seeks to be a basis for seismic vulnerability studies that can contribute to a study of urban seismic microzonation. The methodology consists of capturing the plat, including road naming, neighborhoods, blocks and buildings, to which the evaluated housing data were added as attributes, such as the number of inhabitants and classification, year of construction, the predominant structural systems, the type of mezzanine board and its state of favorability, the presence of geo-technical problems, the type of cover, the use of each building, and damage to structural and non-structural elements. The above data are tabulated in a spreadsheet that includes the cadastral number, through which they are systematically linked to the respective building, which also has that attribute. A geo-referenced database is obtained, from which graphical outputs are generated, producing thematic maps for each evaluated attribute that clearly show the spatial distribution of the information obtained. Using GIS offers important advantages for spatial information management and facilitates consultation and updating. The usefulness of the project is recognized as a basis for studies on issues of planning and prevention.
Keywords: microzonation, buildings, geo-processing, cadastral number
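A minimal sketch of how such a thematic map can be produced from the geo-referenced database described above, joining surveyed attributes to building footprints via the cadastral number; the file names, column names and chosen attribute are assumptions for illustration, not the project's actual data model.

```python
import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt

buildings = gpd.read_file("buildings.shp")          # footprints with a 'cadastral' column
attributes = pd.read_csv("building_survey.csv")      # survey spreadsheet keyed by 'cadastral'
joined = buildings.merge(attributes, on="cadastral", how="left")

# thematic map colored by one evaluated attribute
ax = joined.plot(column="structural_system", categorical=True, legend=True, figsize=(8, 8))
ax.set_title("Predominant structural system by building")
plt.savefig("structural_typology_map.png", dpi=200)
```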
Procedia PDF Downloads 334
548 Uruguayan vs. British Press Coverage of a Political Kidnapping
Authors: Luisa Peirano
Abstract:
What began as a middle-class insurgent political movement whose slogan was 'Words divide us. Action unites us!' ultimately mutated into an underground terrorist group that staged a series of armed robberies, kidnappings and even executions in the 1960s and early 1970s. One of the most memorable was the kidnapping in January 1971 of the British ambassador, Sir Geoffrey Jackson, who was held captive for eight months. The episode, which triggered a massive government response and resulted in the capture of the Tupamaros leaders, continued to have political repercussions decades later when Tupamaros leaders emerged from prison to re-enter mainstream Uruguayan politics. The kidnapping and its aftermath attracted intense media coverage in Uruguay and Britain, coverage that affected public opinion profoundly. The treatment by the Uruguayan and British media diverged, however. Uruguayan newspapers focused on political issues, mirrored the positions of various political parties, and showed the larger context of social, cultural and political forces that rocked Latin America in the 1960s and early 1970s. By contrast, the British press limited its attention mainly to the human drama. On the 30th anniversary of Sir Geoffrey Jackson's death, this study compares over one hundred major newspaper articles and suggests some reasons for the differences between Uruguayan and British media treatment in terms of volume, content, and perspective, as well as in the effect on readers. The differences have persisted and continue to matter in present-day coverage of terrorism and its victims.
Keywords: British Ambassador, Churchill Archives Centre, Sir Geoffrey Jackson, political kidnapping, Latin America in the 1960's, Tupamaro guerrillas, Uruguay
Procedia PDF Downloads 204
547 [Keynote Talk]: Unlocking Transformational Resilience in the Aftermath of a Flood Disaster: A Case Study from Cumbria
Authors: Kate Crinion, Martin Haran, Stanley McGreal, David McIlhatton
Abstract:
Past research has demonstrated that disasters are continuing to escalate in frequency and magnitude worldwide, representing a key concern for the global community. Understanding and responding to the increasing risk posed by disaster events has become a key concern for disaster managers. An emerging trend within the literature acknowledges the need to move beyond a state of coping and reinstatement of the status quo, towards incremental adaptive change and transformational actions for long-term sustainable development. As such, a growing interest in research concerns the understanding of the change required to address ever-increasing and unpredictable disaster events. Capturing transformational capacity and resilience, however, is not without its difficulties, which explains the dearth of attempts to capture this capacity. Adopting a case study approach, this research seeks to enhance awareness of transformational resilience by identifying key components and indicators that determine the resilience of flood-affected communities within Cumbria. Grounding and testing a theoretical resilience framework within the case studies permits the identification of how perceptions of risk influence community resilience actions. Further, it assesses how levels of social capital and connectedness impact upon the extent of interplay between resources and capacities that drive transformational resilience. Thus, this research seeks to expand the existing body of knowledge by enhancing the awareness of resilience in post-disaster affected communities, by investigating indicators of community capacity building and resilience actions that facilitate transformational resilience during the recovery and reconstruction phase of a flood disaster.
Keywords: capacity building, community, flooding, transformational resilience
Procedia PDF Downloads 289
546 Digital Memory plus City Cultural Heritage: The Peking Memory Project Experience
Authors: Huiling Feng, Xiaoshuang Jia, Jihong Liang, Li Niu
Abstract:
Beijing, formerly romanized as Peking, is the capital of the People's Republic of China and the world's second most populous city proper and most populous capital city. Beijing is a noted historical and cultural city whose history dates back three millennia and which is extremely rich in terms of cultural heritage. In 2012, a digital memory project led by the Humanistic Beijing Studies Center at Renmin University of China started with the goal of building a total digital collection of knowledge assets about Beijing and representing Beijing memories in fresh new ways. The title of the entire project is ‘Peking Memory Project (PMP)’. The main goal is to safeguard the documentary heritage and intellectual memory of Beijing; more specifically, from the perspective of the historical humanities and public participation, PMP comprehensively applies digital technologies like digital capture, digital storage, digital processing, digital presentation and digital communication to transform different kinds of cultural heritage of Beijing into digital formats that can be stored, re-organized and shared. These digital memories can be interpreted with a new perspective, be organized with a new theme, be presented in a new way and be utilized with a new need. Taking social memory as the theoretical basis and digital technologies as tools, PMP is framed with ‘Two Sites and A Repository’. The two sites are the specialized website(s), characterized as ‘professional’, and an interactive website, characterized by ‘crowdsourcing’. The repository is the storage pool used for the safe long-term preservation of the digital memories. The work of PMP has ultimately helped to highlight its important role in safeguarding the documentary heritage and intellectual memory of Beijing.
Keywords: digital memory, cultural heritage, digital technologies, peking memory project
Procedia PDF Downloads 177
545 Economic Assessment of CO2-Based Methane, Methanol and Polyoxymethylene Production
Authors: Wieland Hoppe, Nadine Wachter, Stefan Bringezu
Abstract:
Carbon dioxide (CO2) utilization might be a promising way to substitute fossil raw materials like coal, oil or natural gas as the carbon source of chemical production. While first life cycle assessments indicate a positive environmental performance of CO2-based process routes, the commercialization of CO2 has so far been limited by several economic obstacles. We, therefore, analyzed the economic performance of three CO2-based chemicals, methane and methanol as basic chemicals and polyoxymethylene as a polymer, on a cradle-to-gate basis. Our approach is oriented towards life cycle costing. The focus lies on the cost drivers of CO2-based technologies and options to stimulate a CO2-based economy by changing regulative factors. In this way, we analyze various modes of operation and give an outlook for the potentially cost-effective development in the next decades. Biogas, waste gases of a cement plant, and flue gases of a waste incineration plant are considered as CO2 sources. The energy needed to convert CO2 into hydrocarbons via electrolysis is assumed to be supplied by wind power, which is increasingly available in Germany. Economic data originate from both industrial processes and process simulations. The results indicate that CO2-based production technologies are not competitive with conventional production methods under present conditions. This is mainly due to high electricity generation costs and regulative factors like the German Renewable Energy Act (EEG). While the decrease in production costs of CO2-based chemicals might be limited in the next decades, a modification of relevant regulative factors could potentially promote an earlier commercialization.
Keywords: carbon capture and utilization (CCU), economic assessment, life cycle costing (LCC), power-to-X
Procedia PDF Downloads 292
544 Unveiling Subconscious Autopoietic Reflexive Feedback Mechanisms of Second Order Governance from the Narration of Cognitive Autobiography of an ICT Lab during the Digital Revolution
Authors: Gianni Jacucci
Abstract:
We present a retrospective on the development of a research group over the past 30+ years. We reflect on a change in observing the experience (1990-2024) of a university sociotechnical research group dedicated to instilling change for innovation in client organisations and enterprises. Its cognitive and action trajectory is influenced by subjective factors: intention and interpretation. Continuity and change are both present: the trajectory of the group exhibits the dynamic interplay of two components of subjectivity, a change of focus in persistence of scheme, and a tension between stability and change. The paper illustrates the meanings the group gave to their practice while laying down mission-critical theoretical considerations, notably autopoiesis. The aim of the work is to experience a fragment of phenomenological understanding (PU) of the cognitive dynamics of an STS-aware ICT uptake Laboratory during the digital revolution. PU is an intuitive going along the meaning, while staying close and present to the total situation of the phenomenon. We read the codes that we observers invent in order to codify what nature is about, thus unveiling subconscious, autopoietic, reflexive feedback mechanisms of second-order governance from work published over three decades by the ICT Lab, as if it were the narration of its cognitive autobiography. The paper brings points of discussion and insights of relevance for the STS community. It could be helpful in understanding the history of the community and in providing a platform for discussions on future developments. It can also serve as an inspiration and a historical capture for those entering the field.
Keywords: phenomenology, subjectivity, autopoiesis, interpretation schemes, change for innovation, socio technical research, social study of information systems
Procedia PDF Downloads 34
543 A Study on Employer Branding and Its Impacts on Employee's
Authors: KVNKC Sharma, Soujanya Pasumarthi
Abstract:
Globalization, coupled with an increase in competition, is compelling organizations to adopt innovative strategies and identify core competencies in order to distinguish themselves from the competition. The capability of an organization is no longer determined by their products or services alone. The intellectual assets and quality of the human resource are fast emerging as key differentiators. Corporations are now positioning themselves as ‘brands’ not solely to market their products and services, but also to lure and to retain the best talent in the business. This paper identifies leadership as the ‘key element’ in developing an organization’s brand, which has a significant influence on the employee’s eventual perception of this external brand as portrayed by the organization. External branding incorporates innovation, consumer concern, trust, quality and sustainability. The paper contends that employees are indeed an organization’s ‘brand ambassadors’. Internal branding involves taking care of these ambassadors of the corporate brand, i.e., the human resource. If employees of an organization are not exposed to the organization’s branding (an ongoing process that functionally aligns, motivates and empowers employees at all levels to consistently provide a satisfying customer experience), the external brand could be jeopardized. Internal branding, on the other hand, refers to the employee’s perception of the organization’s brand. The current business environment can, at best, be termed volatile. Employees with the right technical and behavioral skills remain a scarce resource, and employers need to be ready to capture the attention, interest and commitment of the best and brightest candidates. This paper attempts to review and understand the relationship between employer branding and employee retention. The paper also seeks to identify the potential impact of employer branding across all the factors affecting employees.
Keywords: alignment, external branding, internal branding, leadership
Procedia PDF Downloads 304
542 A Review of the Potential Impact of Employer Branding on Employee
Authors: K. V. N. K. C. Sharma
Abstract:
Globalization, coupled with an increase in competition, is compelling organizations to adopt innovative strategies and identify core competencies in order to distinguish themselves from the competition. The capability of an organization is no longer determined by their products or services alone. The intellectual assets and quality of the human resource are fast emerging as key differentiators. Corporations are now positioning themselves as ‘brands’ not solely to market their products and services, but also to lure and to retain the best talent in the business. This paper identifies leadership as the ‘key element’ in developing an organization’s brand, which has a significant influence on the employee’s eventual perception of this external brand as portrayed by the organization. External branding incorporates innovation, consumer concern, trust, quality and sustainability. The paper contends that employees are indeed an organization’s ‘brand ambassadors’. Internal branding involves taking care of these ambassadors of the corporate brand, i.e., the human resource. If employees of an organization are not exposed to the organization’s branding (an ongoing process that functionally aligns, motivates and empowers employees at all levels to consistently provide a satisfying customer experience), the external brand could be jeopardized. Internal branding, on the other hand, refers to the employee’s perception of the organization’s brand. The current business environment can, at best, be termed volatile. Employees with the right technical and behavioral skills remain a scarce resource, and employers need to be ready to capture the attention, interest and commitment of the best and brightest candidates. This paper attempts to review and understand the relationship between employer branding and employee retention. The paper also seeks to identify the potential impact of employer branding across all the factors affecting employees.
Keywords: external branding, organisation personnel, internal branding, leadership
Procedia PDF Downloads 240
541 Preparation of Ternary Metal Oxide Aerogel Catalysts for Carbon Dioxide and Propylene Oxide Cycloaddition Reaction
Abstract:
CO2 is the primary greenhouse gas that has caused global warming in recent years. As carbon capture and storage (CCS) matures, the reuse of the carbon dioxide captured by CCS becomes an important issue. The most common route is the synthesis of cyclic carbonate chemicals from the cycloaddition reaction of carbon dioxide and epoxide. The catalyst plays an important role in the CO2/epoxide cycloaddition reactions. Lewis acid and base sites are both needed on the catalyst surface to assist epoxide ring opening, leading to the synthesis of the cyclic carbonate. Furthermore, a larger specific surface area and more active sites on the catalyst are also needed to enhance the efficiency of the CO2/epoxide cycloaddition reactions. Aerogel is a mesoporous nanomaterial (pore size between 2 and 50 nm) with high specific surface area, high porosity (at least 90%) and low density. In this study, ternary metal oxide aerogels, namely Mg-doped Al2O3 aerogels, with higher specific surface area and Lewis acid and base sites on the aerogel surface are successfully prepared by using a facile sol-gel reaction. The as-prepared Mg-doped Al2O3 aerogels also serve as a heterogeneous catalyst for the CO2/propylene oxide cycloaddition reaction. Compared to the pristine Al2O3 aerogels, the Mg-doped Al2O3 aerogels, which possess both Lewis acid and base sites on the surface, are able to enhance the efficiency of the CO2/propylene oxide cycloaddition reactions. As a result, the as-prepared Mg-doped Al2O3 aerogels are a promising and novel catalyst for the CO2/epoxide cycloaddition reactions.
Keywords: ternary, metal oxide aerogel, CO2 reuse, cycloaddition, propylene oxide
Procedia PDF Downloads 261
540 A Qualitative Inquiry of Institutional Responsiveness in Public Land Development in the Urban Areas in Sri Lanka
Authors: Priyanwada I. Singhapathirana
Abstract:
Public land ownership is a common phenomenon in many countries in the world; however, the development approaches and the institutional structures are greatly diverse. The existing scholarship around public land development has been largely limited to Europe and advanced Asian economies. The inferences of such studies seem to be inadequate and inappropriate for comprehending the peculiarities of public land development in developing Asian economies. The absence of critical inquiry into public land ownership and the long-established institutional structures which govern the development has restrained these countries from institutional innovations. In this context, this research investigates the issues related to public land development and the institutional responses in Sri Lanka. This study introduces the concept of ‘Institutional Responsiveness’ in public land development, which is conceptualized as the ability of the institutions to respond to spatial, market and fiscal stimulus. The inquiry was carried out through in-depth interviews with five key informants from apex public agencies in order to explore the responsiveness of land institutions from decision-makers' perspectives. Further, the analysis of grey literature and recent media reports is used to supplement the analysis. As per the findings, long-term abandonment of public lands and high transaction costs are some of the key issues in relation to public land development. The inability of the institutions to respond to the market and fiscal stimulus has left many potential public lands underutilized. As a result, the public sector itself and urban citizens have not been able to relish the benefits of the public lands in cities. Spatial analysis at the local scale is suggested for future studies in order to capture the multiple dimensions of the responsiveness of institutions to the development stimulus.
Keywords: institutions, public land, responsiveness, under-utilization
Procedia PDF Downloads 128
539 Inter-Annual Variations of Sea Surface Temperature in the Arabian Sea
Authors: K. S. Sreejith, C. Shaji
Abstract:
Though both the Arabian Sea and its counterpart, the Bay of Bengal, are forced primarily by the semi-annually reversing monsoons, the spatio-temporal variations of surface waters are much stronger in the Arabian Sea than in the Bay of Bengal. This study focuses on the inter-annual variability of Sea Surface Temperature (SST) in the Arabian Sea by analysing the ERSST dataset, which covers 152 years of SST (January 1854 to December 2002) based on the ICOADS in situ observations. To capture the dominant SST oscillations and to understand the inter-annual SST variations at various local regions of the Arabian Sea, wavelet analysis was performed on this long time-series SST dataset. This tool is advantageous over other signal analysing tools like Fourier analysis because it unfolds a time series (signal) in both the frequency and time domains. This technique makes it easier to determine dominant modes of variability and explain how those modes vary in time. The analysis revealed that pentadal SST oscillations predominate at most of the analysed local regions in the Arabian Sea. From the time information of the wavelet analysis, it was interpreted that these cold and warm events of large amplitude occurred during the periods 1870-1890, 1890-1910, 1930-1950, 1980-1990 and 1990-2005. SST oscillations with peaks having periods of ~2-4 years were found to be significant in the central and eastern regions of the Arabian Sea. This indicates that the inter-annual SST variation in the Indian Ocean is affected by the El Niño-Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD) events.
Keywords: Arabian Sea, ICOADS, inter-annual variation, pentadal oscillation, SST, wavelet analysis
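For readers who want to reproduce the general technique, the following is a minimal sketch of a continuous wavelet (Morlet) analysis of a monthly SST anomaly series; the series here is synthetic, whereas the study uses ERSST/ICOADS data, so the numbers are purely illustrative.

```python
import numpy as np
import pywt

dt = 1.0 / 12.0                                  # monthly sampling, in years
t = np.arange(0, 150, dt)                        # ~150-year record
sst_anom = (0.4 * np.sin(2 * np.pi * t / 5.0)    # pentadal oscillation
            + 0.2 * np.sin(2 * np.pi * t / 3.0)  # ~3-year (ENSO-band) oscillation
            + 0.1 * np.random.randn(t.size))     # noise

scales = np.arange(1, 256)
coeffs, freqs = pywt.cwt(sst_anom, scales, 'morl', sampling_period=dt)
periods = 1.0 / freqs                            # in years
power = np.abs(coeffs) ** 2
dominant_period = periods[power.mean(axis=1).argmax()]
print(f"dominant period ~ {dominant_period:.1f} years")
```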
Procedia PDF Downloads 276
538 Smart Safari: Safari Guidance Mobile Application
Authors: D. P. Lawrence, T. M. M. D. Ariyarathna, W. N. K. De Silva, M. D. S. C. De Silva, Lasantha Abeysiri, Pradeep Abeygunawardhna
Abstract:
Safari traveling is one of the most popular hobbies all over the world. In Sri Lanka, 'Yala' is the second-largest national park and a good place to go for a safari. Many local and foreign travelers come to go on safari in 'Yala'. But 'Yala' does not have a mobile application designed to provide travelers with the important features they want in the safari experience. To overcome these difficulties, the proposed mobile application adds those identified features to make the work of travelers, guiders, and the administration easier. The proposed safari traveling guidance mobile application, called 'SMART SAFARI', is for the 'Yala' National Park in Sri Lanka. There are four facilities in this mobile application provided for travelers as well as guiders. As the first facility, the guider and traveler can view the created map of the park, and the guider can add temporary locations of animals and special locations on the map. This is a Geographic Information System (GIS) to capture, analyze, and display geographical data. The second facility is to generate optimal paths through the park according to the travelers' requirements by using machine learning techniques. In the third part, the traveler can get information about animals by capturing an image of the animal with an animal identification system. As the other facility, the traveler can add reviews and a rating and view those comments under categorized sections and a pre-defined score range. With those facilities, this user-friendly mobile application gives the user a better experience in safari traveling, and it will probably help to develop the tourism culture in Sri Lanka.
Keywords: animal identification system, geographic information system, machine learning techniques, pre defined score range
Procedia PDF Downloads 134
537 Synthesis of Human Factors Theories and Industry 4.0
Authors: Andrew Couch, Nicholas Loyd, Nathan Tenhundfeld
Abstract:
The rapid emergence of technology observably induces disruptive effects that carry implications for internal organizational dynamics as well as external market opportunities, strategic pressures, and threats. An examination of the historical tendencies of technology innovation shows that the body of managerial knowledge for addressing such disruption is underdeveloped. Fundamentally speaking, the impacts of innovation are unique and situationally oriented. Hence, the appropriate managerial response becomes a complex function that depends on the nature of the emerging technology, the posturing of internal organizational dynamics, the rate of technological growth, and much more. This research considers a particular case of mismanagement, the BP Texas City Refinery explosion of 2005, which carries notable discrepancies on the basis of human factors principles. Moreover, this research considers the modern technological climate (shaped by Industry 4.0 technologies) and seeks to arrive at an appropriate conceptual lens by which human factors principles and Industry 4.0 may be favorably integrated. In this manner, the careful examination of these phenomena helps to better support the sustainment of human factors principles despite the disruptive impacts imparted by technological innovation. In essence, human factors considerations are assessed through the application of principles that stem from usability engineering, the Swiss Cheese Model of accident causation, human-automation interaction, signal detection theory, alarm design, and other factors. Notably, this stream of research supports a broader framework in seeking to guide organizations amid the uncertainties of Industry 4.0 to capture higher levels of adoption, implementation, and transparency.
Keywords: Industry 4.0, human factors engineering, management, case study
Procedia PDF Downloads 69
536 Stereo Motion Tracking
Authors: Yudhajit Datta, Hamsi Iyer, Jonathan Bandi, Ankit Sethia
Abstract:
Motion Tracking and Stereo Vision are complicated, albeit well-understood problems in computer vision. Existing software that combines the two approaches to perform stereo motion tracking typically employs complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches. The study aims to explore a strategy to combine the two techniques: two-dimensional motion tracking using a Kalman Filter, and depth detection of the object using Stereo Vision. In conventional approaches, objects in the scene of interest are observed using a single camera. For Stereo Motion Tracking, however, the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. In discrete intervals, the estimator tracks object motion in the plane parallel to the plane containing the cameras and updates the object's perpendicular distance from that plane as depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios. The approach may find application in settings ranging from high-security surveillance scenes, such as the premises of bank vaults, prisons or other detention facilities, to low-cost applications in supermarkets and car parking lots.
Keywords: kalman filter, stereo vision, motion tracking, matlab, object tracking, camera calibration, computer vision system toolbox
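A minimal sketch of the two ingredients described above: depth of the object from the camera plane via the standard stereo relation Z = f*B/disparity, and a constant-velocity Kalman filter for the object's 2D motion in the parallel plane. The camera parameters and noise levels are placeholder values, and this is not the study's estimator implementation (the keywords suggest a MATLAB toolbox was used there).

```python
import numpy as np

def stereo_depth(x_left_px, x_right_px, focal_px=800.0, baseline_m=0.12):
    """Depth from a calibrated, rectified stereo pair: Z = f * B / disparity."""
    disparity = x_left_px - x_right_px
    return focal_px * baseline_m / disparity          # metres from the camera plane

class Kalman2D:
    """Constant-velocity Kalman filter; state = [x, y, vx, vy]."""
    def __init__(self, dt=1 / 30, q=1e-2, r=1.0):
        self.x = np.zeros(4)
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * q
        self.R = np.eye(2) * r

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the measured in-plane position z = [x, y]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

kf = Kalman2D()
print(stereo_depth(420.0, 396.0))     # a 24 px disparity gives 4.0 m depth
print(kf.step([420.0, 310.0])[:2])    # filtered planar position
```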
Procedia PDF Downloads 327
535 Implementation of Free-Field Boundary Condition for 2D Site Response Analysis in OpenSees
Authors: M. Eskandarighadi, C. R. McGann
Abstract:
It is observed from past experiences of earthquakes that local site conditions can significantly affect the strong ground motion characteristics experienced at the site. One-dimensional seismic site response analysis is the most common approach for investigating site response. This approach assumes that the soil is homogeneous and infinitely extended in the horizontal direction. Therefore, tying the side boundaries together is one way to model this behavior, as the wave passage is assumed to be only vertical. However, 1D analysis cannot capture the 2D nature of wave propagation, soil heterogeneity, and 2D soil profiles with features such as inclined layer boundaries. In contrast, 2D seismic site response modeling can consider all of the mentioned factors to better understand local site effects on strong ground motions. 2D wave propagation, and the fact that the soil profile on the two sides of the model may not be identical, clarify the importance of a boundary condition on each side that can minimize unwanted reflections from the edges of the model and input appropriate loading conditions. Ideally, the model size should be sufficiently large to minimize wave reflection; however, due to computational limitations, increasing the model size is impractical in some cases. Another approach is to employ free-field boundary conditions that take into account the free-field motion that would exist far from the model domain and apply this to the sides of the model. This research focuses on implementing free-field boundary conditions in OpenSees for 2D site response analysis. Comparisons are made between 1D models and 2D models with various boundary conditions, and the details and limitations of the developed free-field boundary modeling approach are discussed.
Keywords: boundary condition, free-field, opensees, site response analysis, wave propagation
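For context, the sketch below shows the conventional tied-boundary idea mentioned above (tying the two side boundaries of a soil column with equalDOF so the response is effectively 1D) in OpenSeesPy; it is not the free-field boundary developed in this research, and the mesh, material values and element choice are placeholders for illustration only.

```python
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 2)

nx, ny, dx, dy = 1, 10, 1.0, 1.0                  # a 1 m wide, 10 m deep soil column
for j in range(ny + 1):
    for i in range(nx + 1):
        ops.node(j * (nx + 1) + i + 1, i * dx, j * dy)

# fix the base; tie the left/right nodes at each elevation so they move together
for i in range(nx + 1):
    ops.fix(i + 1, 1, 1)
for j in range(1, ny + 1):
    left = j * (nx + 1) + 1
    right = j * (nx + 1) + nx + 1
    ops.equalDOF(left, right, 1, 2)

ops.nDMaterial('ElasticIsotropic', 1, 2.0e7, 0.3, 1.8)   # E (kPa), nu, rho (t/m3), assumed
for j in range(ny):
    n1 = j * (nx + 1) + 1
    ops.element('quad', j + 1, n1, n1 + 1, n1 + nx + 2, n1 + nx + 1, 1.0, 'PlaneStrain', 1)
print('soil column built with tied (periodic) lateral boundaries')
```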
Procedia PDF Downloads 161
534 Revisiting Domestication and Foreignisation Methods: Translating the Quran by the Hybrid Approach
Authors: Aladdin Al-Tarawneh
Abstract:
The Quran, as the sacred book of Islam, considered the literal word of God (Allah) in Arabic, is widely translated into many languages; however, the foreignising or literal approach excessively stains the quality and discredits the final product in the eyes of its receptors. Such an approach fails to capture the intended meaning of the Quran and to communicate it in any language. Therefore, this study is conducted to propose a different approach that seeks to involve other approaches according to a hybrid model. Indeed, this study challenges the binary adherence that is widely used in Translation Studies (TS) in general and in the translation of the Quran in particular. Drawing on the genuine fact that the Quran can be communicated in any language in terms of meaning, and that the translation is not sacred, this paper approaches the translation of the Quran by blending different methods like domestication or foreignisation in a systematic way, avoiding the binary choice made by many translators. To reach this aim, the paper has a conceptual part that seeks to elucidate and clarify the main methods employed in TS, and to criticise and modify them to propose the new hybrid approach (the hybrid model) for translating the Quran, that is, the deductive method. To support and validate the outcome of the previous part, a comparative model is employed in order to highlight the differences between the suggested translation and other widely used ones, that is, the inductive method. By applying this methodology, the paper proves that there is a deficiency in communicating the original meaning of the Quran in light of the foreignising approach. In conclusion, the paper suggests that producing a Quran translation has to take into account the adoption of many techniques to express the meaning of the Quran as understood in the original, and to offer this understanding in English in the most native-like manner to serve the intended target readers.
Keywords: Quran translation, hybrid approach, domestication, foreignization, hybrid model
Procedia PDF Downloads 163
533 Comparing Literary Publications about Corruption in South Africa to the Legal Position
Authors: Natasha Venter
Abstract:
Recent publications, including Truth to Power by André de Ruyter, Gangster State by Pieter-Louis Myburgh, and Enemy of the People by Pieter du Toit and Adriaan Basson, expose alleged corrupt acts by high-ranking members of State, as well as those in charge of State-owned entities. These literary contributions have gripped the attention of a nation plagued by corruption scandals and the alleged misappropriation of state funds on an almost daily basis. The books, however, leave the populace with the burning question of why “nothing happens” to these individuals who are so directly implicated in the literature. The process followed by the State in the largest successful prosecution of a corrupt state official, Jackie Selebi, sheds some light on how such high-ranking persons might be brought to book. The Supreme Court of Appeal’s definition of corruption and the court's interpretation of the facts (as presented by the State prosecutors) are also valuable. Furthermore, some insight into the laws that criminalise corruption in South Africa, as well as applicable international instruments, is necessary. South Africa is ranked as the 70th most corrupt country out of 180 countries by Transparency International’s 2021 Corruption Perceptions Index. This is worrisome as South Africa is a signatory of the United Nations Convention Against Corruption (2004) and, as such, has certain international obligations to fulfil. However, if the political will to prosecute corrupt officials in South Africa exists, there are laws and instruments available to punish these individuals. This would not only vindicate the authors of literature about corruption in the country but also restore the hope of South Africans that, ultimately, crime does not pay.
Keywords: corruption, eskom, state capture, government, literature, united nations, law, legal, Jackie selebi, supreme court of appeal
Procedia PDF Downloads 100
532 Investigation of Detectability of Orbital Objects/Debris in Geostationary Earth Orbit by Microwave Kinetic Inductance Detectors
Authors: Saeed Vahedikamal, Ian Hepburn
Abstract:
Microwave Kinetic Inductance Detectors (MKIDs) are considered one of the most promising photon detectors of the future in many astronomical applications, such as exoplanet detection. The MKID advantages stem from their single-photon sensitivity (ranging from UV to optical and near infrared), photon energy resolution and high temporal capability (~microseconds). There has been substantial progress in the development of these detectors, and MKIDs with megapixel arrays are now possible. The unique capability of recording an incident photon and its energy (or wavelength) while also registering its time of arrival to within a microsecond enables an array of MKIDs to produce a four-dimensional data block of x, y, z and t, where x and y are spatial, the z axis per pixel is spectral, and the t axis per pixel is temporal. This offers the possibility that the spectrum and brightness variation for any detected piece of space debris as a function of time might offer a unique identifier or fingerprint. Such a fingerprint signal from any object identified in multiple detections by different observers has the potential to determine the orbital features of the object and be used for its tracking. Modelling performed so far shows that with a 20 cm telescope located at an astronomical observatory (e.g. La Palma, Canary Islands) we could detect sub-cm objects at GEO. By considering a Lambertian sphere with a 10 % reflectivity (albedo of the Moon), we anticipate the following for a GEO object: a 10 cm object imaged in a 1 second image capture; a 1.2 cm object for a 70 second image integration; or a 0.65 cm object for a 4 minute image integration. We present details of our modelling and the potential instrument for a dedicated GEO surveillance system.
Keywords: space debris, orbital debris, detection system, observation, microwave kinetic inductance detectors, MKID
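The quoted integration times are consistent with a simple photon-limited scaling in which the reflected signal grows as the object diameter squared, so the required integration time scales as (d_ref/d)^2. The short check below uses that assumed scaling anchored to the quoted 10 cm / 1 s case; it is not the authors' full radiometric model.

```python
# assumed photon-limited scaling: t(d) = t_ref * (d_ref / d)^2
d_ref_cm, t_ref_s = 10.0, 1.0

def integration_time(d_cm):
    return t_ref_s * (d_ref_cm / d_cm) ** 2

for d in (10.0, 1.2, 0.65):
    print(f"{d:>5.2f} cm -> {integration_time(d):6.0f} s")
# 10.00 cm ->      1 s
#  1.20 cm ->     69 s   (abstract quotes ~70 s)
#  0.65 cm ->    237 s   (~4 minutes, as quoted)
```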
Procedia PDF Downloads 98
531 Manifestations of Tuberculosis in Otorhinolaryngology Practice: A Retrospective Study Conducted in a Coastal City of South India
Authors: Rithika Sriram, Kiran M. Bhojwani
Abstract:
Introduction: Tuberculosis of the head and neck has proved to be a diagnostic challenge for otorhinolaryngologists around the world. These lesions are often misdiagnosed as cancer. In order to contribute to a better understanding of these lesions, we have conducted our study among patients affected by TB in the head and neck region with the objective of assessing the various manifestations, presentations, diagnostic techniques, risk factors such as smoking and alcohol consumption, coexisting illnesses and treatment modalities. Materials and Methods: This was a retrospective study conducted over a three-year period (2012-2014) in 2 hospitals affiliated to Kasturba Medical College in Mangalore, South India. A semi-structured proforma was used to capture information from the medical records pertaining to the various objectives of the study, such as clinical features and history of smoking. Data were analysed using SPSS version 16.0, and the results obtained were depicted as percentages. The chi-square test was used to find associations between the variables, and p<0.05 was considered statistically significant. Results: 104 patients were found to have TB of the head and neck, and among them, the most common manifestation was found to be tubercular lymphadenitis (86.53%), followed by laryngeal TB (4.8%), submandibular gland TB (3.8%), deep neck space abscess (3.8%) and adenotonsillar TB. FNAC was found to be the gold standard for the diagnosis of TB disease of the lymph node. 26% of the patients had coexisting HIV infection, and 16.3% of the patients had associated pulmonary TB. More than 20% of the patients were smokers. Most patients were treated using ATT. Conclusion: Tuberculosis affecting regions of the head and neck is no longer uncommon. Sufficient knowledge and appropriate diagnostic means are required when dealing with these lesions, and TB must be included in the differential diagnosis of pathological lesions of the head and neck.
Keywords: FNAC, Mangalore, smoking, tuberculosis
Procedia PDF Downloads 278
530 2D Convolutional Networks for Automatic Segmentation of Knee Cartilage in 3D MRI
Authors: Ananya Ananya, Karthik Rao
Abstract:
Accurate segmentation of knee cartilage in 3-D magnetic resonance (MR) images for quantitative assessment of volume is crucial for studying and diagnosing osteoarthritis (OA) of the knee, one of the major causes of disability in elderly people. Radiologists generally perform this task in a slice-by-slice manner, taking 15-20 minutes per 3D image, which leads to high inter- and intra-observer variability. Hence, automatic methods for knee cartilage segmentation are desirable and are an active field of research. This paper presents the design and experimental evaluation of fully automated methods for knee cartilage segmentation in 3D MRI based on 2D convolutional neural networks. The architectures are validated based on 40 test images and 60 training images from the SKI10 dataset. The proposed methods segment 2D slices one by one, which are then combined to give the segmentation for whole 3D images. The proposed methods are modified versions of U-net and dilated convolutions, consisting of a single step that segments the given image into 5 labels: background, femoral cartilage, tibia cartilage, femoral bone and tibia bone; the cartilages being the primary components of interest. U-net consists of a contracting path and an expanding path, to capture context and localization respectively. Dilated convolutions lead to an exponential expansion of the receptive field with only a linear increase in the number of parameters. A combination of modified U-net and dilated convolutions has also been explored. These architectures segment one 3D image in 8-10 seconds, giving average volumetric Dice Score Coefficients (DSC) of 0.950-0.962 for femoral cartilage and 0.951-0.966 for tibia cartilage, with the manual segmentation as reference.
Keywords: convolutional neural networks, dilated convolutions, 3 dimensional, fully automated, knee cartilage, MRI, segmentation, U-net
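A minimal PyTorch sketch of the dilated-convolution idea described above: a small stack of 3x3 convolutions with exponentially growing dilation that widens the receptive field without pooling, ending in a per-pixel 5-class head (background, femoral cartilage, tibia cartilage, femoral bone, tibia bone). Channel widths and depth are illustrative and do not reproduce the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DilatedSegNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=5, width=32):
        super().__init__()
        layers, ch = [], in_ch
        for d in (1, 2, 4):                              # dilation doubles each block
            layers += [nn.Conv2d(ch, width, 3, padding=d, dilation=d),
                       nn.BatchNorm2d(width),
                       nn.ReLU(inplace=True)]
            ch = width
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Conv2d(width, n_classes, 1)  # per-pixel 5-class logits

    def forward(self, x):
        return self.classifier(self.features(x))

net = DilatedSegNet()
mr_slice = torch.randn(1, 1, 256, 256)                   # one 2D MR slice (batch, channel, H, W)
logits = net(mr_slice)
print(logits.shape)                                       # torch.Size([1, 5, 256, 256])
```

Per-slice predictions like these would then be stacked to recover the full 3D segmentation, as the abstract describes.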
Procedia PDF Downloads 262