Search results for: direction of trade
342 An Exploratory Study of the Ghanaian Music Industry: Its Impacts on the Economy and Society
Authors: Ralph Nyadu-Addo, Francis Matambalya, Utz Dornberger
Abstract:
The global music industry is a multi-billion-dollar sector. The potential of Africa’s music industry is widely recognised in the socio-economic development milieu. It has had a positive impact on several sectors, most notably tourism, media, and information and communication technology (ICT). It is becoming increasingly clear, even in Africa (as demonstrated in Nigeria), that in addition to its intrinsic value the sector delivers significant economic returns. UNCTAD has observed that the creative industries offer some of the best prospects for high growth in least developed countries. The statistics from Africa may be far lower than those of similar sectors in developed countries, but this lends further credence to several UNCTAD publications which argue that the creative industry is under-researched and its potential under-estimated, yet holds the key to rapid development. The emerging creative economy (music in particular) has become a leading component of economic growth, employment, trade, innovation, and social cohesion in many countries. In line with these developments, the Ghanaian government recognizes the potential of the creative industries to shape and reinforce Ghana’s economic growth. Creative sectors, particularly music, tend to rely less on sophisticated infrastructure or capital-intensive investment. The potential is particularly abundant in Africa, where musical creativity is rich, diverse, well-loved, and constantly evolving while drawing on strong traditions. The development of a popular music industry thus represents, in the words of the World Bank, low-hanging fruit for most African economies. As economies diversify through the creative industry, value is increasingly created at the intersection of arts, business, and technology. Cultural and creative entrepreneurs are leading this trend. It is one of the areas where value is captured within the country, as emerging trends in Nigeria and Ghana, among others, have shown.
Yet, evidence shows that the potential of the cultural and creative sectors remains largely untapped. Furthermore, their socio-economic impact remains under-researched in many developing countries and their dynamics unknown. Despite its huge influence on music repertoire across the globe, most countries in Africa have not historically been significant markets for the international music industry. Today, that is beginning to change. Generally, reliable and adequate literature about music in the sub-region is difficult to obtain. Growing interest in academic and business circles in reliable data on the expanding music industry in developing countries has created an urgent need for this research. Research questions: i. Who are the major stakeholders in the music value chain in Ghana? ii. How much value is captured domestically? iii. What is the economic impact of the Ghanaian music industry? iv. How has the advent of ICT (the internet) affected the music landscape? Research sources will mainly be interviews with major stakeholders, a baseline study of the industry by KPMG, and content analysis of related newspapers and magazines.
Keywords: economic impact, information communications technology (ICT), music industry, value chain
Procedia PDF Downloads 294
341 Searching Knowledge for Engagement in a Worker Cooperative Society: A Proposal for Rethinking Premises
Authors: Soumya Rajan
Abstract:
While delving into the heart of any organization, the structural prerequisites that form the framework of its system allure and sometimes invoke great interest. In an attempt to understand the ecosystem of knowledge that exists in organizations with diverse ownership and legal blueprints, cooperative societies, which form a crucial part of the neo-liberal movement in India, were studied. The exploration surprisingly led the researcher to redesign at least a set of premises on the drivers of engagement in an otherwise structured trade environment. The liberal organizational structure of cooperative societies is anchored in certain terms: voluntary, democratic, equality, and distributive justice. To condense it in Hubert Calvert’s words, ‘Co-operation is a form of organization wherein persons voluntarily associated together as human beings on the basis of equality for the promotion of the economic interest of themselves.’ In India, the institutions which work under this principle are largely registered under the Cooperative Societies Acts of the central or state laws. A worker cooperative society which originated as a movement in the state of Kerala and spread its wings across the country, Indian Coffee House, was chosen as the enterprise for further inquiry, it being a living example and a highly successful working model in the designated space. The exploratory study reached out to employees and key stakeholders of Indian Coffee House to understand the nuances of the structure and the scope it provides for engagement. The key questions which took shape in the mind of the researcher during the inquiry were: How has the organization sustained itself despite its principle of accepting employees with no skills and later training and empowering them?
How can a system with both pre-independence and post-independence existence (independence here meaning colonial independence from Great Britain) seek to engage employees within the premise of equality? How was the value of socialism ingrained in a commercial enterprise with a turnover of several hundred crores each year? How did the vision of a flat structure, way back in the 1940s, find its way into the organizational structure and remain the way of life? These questions were addressed by the case study research that ensued; placing knowledge as the key premise, the possibilities of engagement of the organization man were pictured. Although the macro or holistic unit of analysis is the organization, it is pivotal to understand the structures and processes that best reflect on the actors. The embedded design adopted in this study delivered insights from stakeholder actors across diverse departments. While moving through variables which define and sometimes defy the bounds of rationality, the study brought to light the inherent features of the organizational structure and how it influences the actors who form a crucial part of the scheme of things. The research brought forth the key enablers of engagement and specifically explored the standpoint of knowledge in the larger structure of the cooperative society.
Keywords: knowledge, organizational structure, engagement, worker cooperative
Procedia PDF Downloads 236
340 Hawaii, Colorado, and Netherlands: A Comparative Analysis of the Respective Space Sectors
Authors: Mclee Kerolle
Abstract:
For more than 50 years, the state of Hawaii has had the beginnings of a burgeoning commercial aerospace presence statewide. While Hawaii provides the aerospace industry with unique assets in terms of geographic location, lack of range-safety issues, and other factors critical to aerospace development, Hawaii’s strategy and commitment regarding aerospace have been unclear. For this reason, this paper presents a comparative analysis of Hawaii’s space sector with two of the world’s leading space sectors, Colorado and the Netherlands, in order to propose a strategic plan that establishes a firm position going forward to support Hawaii’s aerospace development statewide. This plan will include financial and other economic incentives, legislatively supported by the state, to help grow and diversify Hawaii’s aerospace sector. The first part of this paper examines the business model adopted by the Colorado Space Coalition (CSC), a group of industry stakeholders working to make Colorado a center of excellence for aerospace, as a blueprint for growth in Hawaii’s space sector. The second section examines the business model adopted by the Netherlands Space Business Incubation Centre (NSBIC), a European Space Agency (ESA) affiliated program that offers business support for entrepreneurs to turn space-connected business ideas into commercial companies. This serves as a blueprint for incentivizing space businesses to launch and develop in Hawaii. The third section analyzes the current policies both CSC and NSBIC employ to promote industry expansion and legislative advocacy. The final section takes the findings from both space sectors and applies their most adaptable features to a Hawaii-specific space business model that takes into consideration the unique advantages and disadvantages of developing Hawaii’s space sector.
The findings of this analysis will show that the development of a strategic plan based on a comparative analysis, one that creates high-technology jobs and new pathways for a trained workforce in the space sector and elicits state support and direction, will achieve the goal of establishing Hawaii as a center of space excellence. This analysis will also signal to federal, private-sector, and international audiences that Hawaii is indeed serious about developing its aerospace industry. Ultimately, this analysis and the subsequent aerospace development plan will serve as a blueprint for the benefit of all space-faring nations seeking to develop their space sectors.
Keywords: Colorado, Hawaii, Netherlands, space policy
Procedia PDF Downloads 169
339 Steps of the Pancreatic Differentiation in the Grass Snake (Natrix natrix) Embryos
Authors: Magdalena Kowalska, Weronika Rupik
Abstract:
The pancreas is an important organ present in all vertebrate species. It contains two different tissues, exocrine and endocrine, that act as two glands in one. The development and differentiation of the pancreas in reptiles is poorly known in comparison to other vertebrates. Therefore, the aim of this study was to investigate the particular steps of pancreatic differentiation in grass snake (Natrix natrix) embryos. For this, histological methods (including hematoxylin and eosin, and Heidenhain's AZAN staining), transmission electron microscopy, and three-dimensional (3D) reconstructions from serial paraffin sections were used. The results of this study indicated that the first step of pancreas development in Natrix was the connection of the two pancreatic buds, dorsal and ventral. Then, the duct walls in both buds began to be remodeled from a multilayered to a single-layered epithelium. This remodeling started in the dorsal bud and was simultaneous with the differentiation of the duct lumens, which occurred by cavitation. During this process, the cells that had no contact with the mesenchyme underwent a form of cell death named anoikis. These findings indicated that the walls of the ducts in the embryonic pancreas of the grass snake were initially formed by abundant principal cells and single endocrine cells. Later, the basal and goblet cells differentiated. Among the endocrine cells, the B and A cells differentiated first, then the D and PP cells. The next step of pancreatic development was the withdrawal of the endocrine cells from the duct walls to form the pancreatic islets. The endocrine cells and islets were found only in the dorsal part of the pancreas in Natrix embryos, which differs from other vertebrate species. The islets were formed mainly by the A cells. Simultaneously with the differentiation of the endocrine pancreas, the acinar tissue began to differentiate.
The source of the acinar cells was the pancreatic ducts, similar to other vertebrates. Acinus formation began at the proximal part of the pancreas and proceeded in the caudal direction. The differentiating pancreatic ducts developed into a branched system that can be divided into extralobular, intralobular, and intercalated ducts, as in other vertebrate species; however, the pattern of branching was different. In conclusion, particular steps of pancreatic differentiation in the grass snake differed from those in other vertebrates. It can be supposed that these differences are related to the specific topography of the snake’s internal organs and its taxonomic position. All specimens used in the study were captured in accordance with the Polish regulations concerning the protection of wild species. Permission was granted by the Local Ethics Commission in Katowice (41/2010; 87/2015) and the Regional Directorate for Environmental Protection in Katowice (WPN.6401.257.2015.DC).
Keywords: embryogenesis, organogenesis, pancreas, Squamata
Procedia PDF Downloads 171
338 Intellectual Capital as Resource Based Business Strategy
Authors: Vidya Nimkar Tayade
Abstract:
Introduction: The intellectual capital of an organization is a key factor in its success. Many companies invest huge amounts in their research and development activities. Any innovation is helpful not only to that particular company but also to many other companies, the industry, and mankind as a whole. Companies undertake innovative changes to increase their capital profitability and, indirectly, the pay packages of their employees. The quality of human capital can also improve due to such positive changes: employees become more skilled and experienced through such innovations and inventions. For the study of increasing intangible capital, the author has referred to books and case studies in order to reach a conclusion; different charts and tables are also referred to. Case studies are especially important because they are proven and established techniques: they enable students to apply theoretical concepts in real-world situations and give solutions to open-ended problems with multiple potential answers. There are three different strategies for increasing intellectual capital: the research push (technology push) strategy, the market pull strategy, and the open innovation strategy. Research push strategy: in this strategy, research is undertaken and innovation is achieved on its own. After the invention, the inventor company protects it and finds buyers for it; in this way, the invention is pushed into the market. Research and development are undertaken first, and the outcome of this research is commercialized. Market pull strategy: in this strategy, commercial opportunities are identified first, and research is concentrated in that particular area, undertaken to solve a particular problem. It becomes easier to commercialize this type of invention, because the problem is identified first and research and development activities are carried out in that direction.
Open innovation strategy: in this type of research, more than one company enters into a research agreement, and the benefits of the outcome are shared by the participating companies. Internal and external ideas and technologies are involved; these ideas are coordinated and then commercialized. Due to globalization, people from outside the company are also invited to undertake research and development activities. The remuneration of employees of the participating companies can increase, and the benefit of commercializing such inventions is also shared. Conclusion: In modern times, not only tangible assets but also intangible assets can be commercialized. The benefits of an invention can be shared by more than one company, competition can become more meaningful, and the pay packages of employees can improve. It is a need of the time to adopt such strategies to benefit employees, competitors, and stakeholders.
Keywords: innovation, protection, management, commercialization
Procedia PDF Downloads 168
337 A Prediction Method of Pollutants Distribution Pattern: Flare Motion Using Computational Fluid Dynamics (CFD) Fluent Model with Weather Research Forecast Input Model during Transition Season
Authors: Benedictus Asriparusa, Lathifah Al Hakimi, Aulia Husada
Abstract:
A large amount of energy is wasted through the release of natural gas associated with the oil industry. This release disturbs the environment, particularly the condition of the atmospheric layer globally, and contributes to global warming. This research presents an overview of the methods employed by researchers at PT. Chevron Pacific Indonesia in the Minas area to develop a new prediction method for measuring and reducing gas flaring and its emissions. The method emphasizes advanced research involving analytical studies, numerical studies, modeling, and computer simulations, among other techniques. A flaring system is the controlled burning of natural gas in the course of routine oil and gas production operations; this burning occurs at the end of a flare stack or boom. The combustion process releases emissions of greenhouse gases such as NO2, CO2, and SO2. This condition affects the chemical composition of the air and the environment around the boundary layer, mainly during the transition season. The transition season in Indonesia is very difficult to predict because of the interplay of two air mass conditions. This research focuses on the transition season in 2013, for which a simulation producing the new pattern of pollutant distribution is needed. This paper outlines trends in gas-flaring modeling and current developments in predicting the dominant variables of the pollutant distribution. A Fluent model is used to simulate the distribution of pollutant gases leaving the stack, whereas WRF model output is used to overcome the limitations of the meteorological data and atmospheric conditions available for the study area. Based on the model runs, the most influential factor was wind speed. The goal of the simulation is to predict the new distribution pattern at the times when the fastest and slowest winds occur.
According to the simulation results, the fastest wind (end of March) moves pollutants in a horizontal direction, while the slowest wind (middle of May) moves pollutants vertically. Moreover, with the flare stack designed in compliance with the EPA Oil and Gas Facility Stack Parameters, pollutant concentrations are likely to remain under the NAAQS (National Ambient Air Quality Standards) threshold.
Keywords: flare motion, new prediction, pollutants distribution, transition season, WRF model
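The dominance of wind speed can be illustrated with a far simpler dispersion model than the Fluent/WRF chain used in the study. The sketch below is a hypothetical Gaussian-plume estimate, not the authors' CFD setup; the emission rate Q, stack height H, and dispersion coefficients are made-up values. The point is only the 1/u factor: halving the wind speed doubles the steady-state centreline concentration.

```python
import math

def gaussian_plume_ground(Q, u, sigma_y, sigma_z, y, H):
    """Ground-level concentration of a steady Gaussian plume (reflective ground).

    Q: emission rate, u: wind speed, sigma_y/sigma_z: dispersion coefficients,
    y: crosswind offset, H: effective stack height. Units are left generic;
    only the scaling with u matters here.
    """
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y**2 / (2 * sigma_y**2))
            * math.exp(-H**2 / (2 * sigma_z**2)))

# Same source, slow vs. fast wind (illustrative numbers)
slow = gaussian_plume_ground(Q=100.0, u=1.0, sigma_y=50.0, sigma_z=30.0, y=0.0, H=40.0)
fast = gaussian_plume_ground(Q=100.0, u=5.0, sigma_y=50.0, sigma_z=30.0, y=0.0, H=40.0)
print(slow / fast)  # concentration scales inversely with wind speed
```

In a real assessment the dispersion coefficients themselves also depend on stability class and downwind distance, which is part of what the CFD/WRF coupling captures.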
Procedia PDF Downloads 556
336 Inputs and Outputs of Innovation Processes in the Colombian Services Sector
Authors: Álvaro Turriago-Hoyos
Abstract:
Most research tends to see innovation as an explanatory factor in achieving high levels of competitiveness and productivity. More recent studies have begun to analyze the determinants of innovation in the services sector, as opposed to the much-discussed industrial sector of a country’s economy. This research paper focuses on the services sector in Colombia, one of Latin America’s fastest-growing and biggest economies. Over the past decade, much of Colombia’s economic expansion has relied on commodity exports (mainly oil and coffee), whilst the industrial sector has performed relatively poorly. Such developments highlight the potential innovative role of the services sector in the Colombian economy and its future growth prospects. This paper analyzes the relationship between inputs, which are at once internal sources of innovation (such as R&D activities) and external sources augmented by technology acquisition, and outputs, namely the four kinds of innovation that the OECD Oslo Manual recognizes: product, process, marketing, and organizational innovations. The instrument used to measure this input-output relationship is based on knowledge production function approaches. We run Probit models in order to identify the relationships between the above inputs and outputs, but also to identify spillovers derived from interactions among the components of the value chain of the services firms analyzed: customers, suppliers, competitors, and complementary firms. Data are obtained from the Colombian National Administrative Department of Statistics for the period 2008 to 2013, published in the II and III Colombian National Innovation Surveys. In brief, the results lead to the conclusion that firm size and a firm’s level of technological development turn out to be important discriminating factors in describing the innovative process at the firm level.
The model’s outcomes show that both R&D and technology-acquisition investment have a positive impact on the probability of introducing any kind of innovation. Cooperation agreements with customers, research institutes, competitors, and suppliers are also significant. Belonging to a particular industrial group is an important determinant, but only for product and organizational innovation. Health services, education, computing, wholesale trade, and financial intermediation are the ISIC sectors reporting the highest frequencies among the considered set of firms; these five of the sixteen sectors considered explained, in all cases, more than half of the total of all kinds of innovations. Product innovation obtains the highest results, followed by marketing innovation. Distinguishing the same set of firms by size and by membership in the high- and low-tech services sectors shows that larger firms produce a larger number of innovations, and that high-tech firms consistently show better innovation performance.
Keywords: Colombia, determinants of innovation, innovation, services sector
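The knowledge-production-function Probit setup described above can be sketched on synthetic data. The variable names and coefficients below are illustrative assumptions, not the survey's actual variables; the point is only the structure: a binary innovation outcome regressed on R&D and technology-acquisition inputs via a Probit likelihood.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 800
# Hypothetical firm-level inputs (stand-ins for the survey variables)
rd   = rng.binomial(1, 0.4, n).astype(float)   # firm invests in internal R&D
tech = rng.binomial(1, 0.3, n).astype(float)   # firm acquires external technology
X = np.column_stack([np.ones(n), rd, tech])

# Simulate the binary outcome "introduced any innovation" from a latent index
beta_true = np.array([-0.5, 0.8, 0.6])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(float)

def neg_loglik(beta):
    """Negative Probit log-likelihood: P(y=1|x) = Phi(x'beta)."""
    p = norm.cdf(X @ beta).clip(1e-10, 1 - 1e-10)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

beta_hat = minimize(neg_loglik, np.zeros(3)).x
print(beta_hat)  # both input coefficients estimated positive, as in the paper
```

The actual study additionally includes cooperation-agreement dummies, firm size, and sector controls, which would enter as further columns of X.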
Procedia PDF Downloads 267
335 Investigations at the Settlement of Oglankala
Authors: Ayten Tahirli
Abstract:
Settlements and grave monuments discovered in archeological excavations conducted in the Nakhchivan Autonomous Republic have a special place in the study of the ancient history of Azerbaijan between the 4th century B.C. and the 3rd century A.D. From this point of view, the archeological excavations and investigations conducted at Oglankala, Goshatapa, Babatapa, Pusyan, Agvantapa, Meydantapa, and other monuments in Nakhchivan occupy a specific place, and the conclusions of the archeological research conducted at the Oglankala settlement enable a broad study of Nakhchivan's history, economic life, and trade relationships. Oglankala, located on Garatapa Mountain over an area of 50 ha, was the largest fortress in Nakhchivan and one of the largest in the South Caucasus during the Middle Iron Age. The territory where the monument is located is very important for controlling the Sharur Lowland, the most productive territory in Nakhchivan and of great importance for agriculture, through which the Arpachay passes down from the Lesser Caucasus. During the excavations of 1988 and 1989 at Oglankala, covering the fortress's history in the Early and Middle Iron Ages, indisputable proof that the territory was an important political center was discovered. Oglankala was the capital of an independent polity during the Middle Iron Age. It maintained economic and cultural relationships with the neighboring state of Urartu and was the capital of a city government enclosed by a strong protective system in the centuries after the collapse of the Achaemenid Empire. It should be noted that broader archeological excavations of the city of Oglankala were first started by Vali Bakhshaliyev, department head at the Institute of History, Ethnography and Archeology of the ANAS Nakhchivan Branch. Between 1988 and 1989, V. B. Bakhshaliyev conducted an excavation within an area of 320 square meters at Oglankala.
Since 2006, Oglankala has been a research object of an international Azerbaijan-USA archeological expedition. In 2006, Lauren Ristvet from Pennsylvania State University, Veli Bakhshaliyev from the Nakhchivan Branch of the Azerbaijan National Academy of Sciences, and Safar Ashurov from the Baku office of the Azerbaijan National Academy of Sciences, together with their colleagues and students, began to study the ancient history of this remarkable area. During the archeological research conducted by the international expedition between 2008 and 2011 under the supervision of Vali Bakhshaliyev, the remnants of a palace and the protective walls of a citadel constructed between the late 9th and early 8th centuries B.C. were discovered in the residential area. It was found that Oglankala was the capital of a small state established in the Sharur Lowland during the Middle Iron Age, which struggled against Urartu by forming a union with the local tribes; that state had its own cuneiform script. Between the 4th and 2nd centuries B.C., Oglankala and the territory it covered was one of the major political centers of the Atropatena state.
Keywords: Nakhchivan, Oglankala, settlement, ceramic, archaeological excavation
Procedia PDF Downloads 78
334 Investigation of Mechanical and Tribological Property of Graphene Reinforced SS-316L Matrix Composite Prepared by Selective Laser Melting
Authors: Ajay Mandal, Jitendar Kumar Tiwari, N. Sathish, A. K. Srivastava
Abstract:
A fundamental investigation is performed on the development of a graphene (Gr) reinforced stainless steel 316L (SS 316L) metal matrix composite via selective laser melting (SLM), in order to improve the specific strength and wear resistance of SS 316L. First, SS 316L powder and graphene were mixed in a fixed ratio using low-energy planetary ball milling. The milled powder was then subjected to the SLM process to fabricate composite samples at a laser power of 320 W and an exposure time of 100 µs. The prepared composite was mechanically tested (hardness and tensile tests) at ambient temperature, and the results indicate that the properties of the composite increased significantly with the addition of 0.2 wt.% Gr: increments of about 25% (from 194 to 242 HV) in hardness and about 70% (from 502 to 850 MPa) in yield strength were obtained. Raman mapping and XRD were performed to examine the distribution of Gr in the matrix and its effect on carbide formation, respectively. The Raman maps show a uniform distribution of graphene inside the matrix. An electron backscatter diffraction (EBSD) map of the prepared composite was analyzed under FESEM in order to understand the microstructure and grain orientation. Due to the thermal gradient, elongated grains were observed along the building direction, and the grains became finer with the addition of Gr. Most mechanical components are subjected to several types of wear conditions, so it is necessary to improve the wear properties of the component; hence, apart from strength and hardness, the tribological properties of the composite were also measured under dry sliding conditions. The solid-lubrication effect of Gr plays an important role during sliding, reducing the wear rate of the composite by up to 58%. The surface roughness of the worn surface is also reduced by up to 70%, as measured by 3D surface profilometry.
Finally, it can be concluded that SLM is an efficient method for fabricating cutting-edge metal matrix nanocomposites with reinforcements such as Gr, which are very difficult to fabricate through conventional manufacturing techniques. The prepared composite has superior mechanical and tribological properties and can be used for a wide variety of engineering applications. However, due to the scarcity of literature in this domain, more experimental work, such as thermal property analysis, needs to be performed and is part of an ongoing study.
Keywords: selective laser melting, graphene, composite, mechanical property, tribological property
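The quoted property gains can be checked with a line of arithmetic; a minimal sketch using the figures quoted in the abstract:

```python
def pct_change(before, after):
    """Percentage increase relative to the baseline value."""
    return 100.0 * (after - before) / before

# Baseline SS 316L vs. SS 316L + 0.2 wt.% Gr (values from the abstract)
hardness_gain = pct_change(194, 242)   # HV: ~25%
yield_gain = pct_change(502, 850)      # MPa: ~69%, quoted as ~70%
print(round(hardness_gain), round(yield_gain))
```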
Procedia PDF Downloads 136
333 Data Clustering Algorithm Based on Multi-Objective Periodic Bacterial Foraging Optimization with Two Learning Archives
Authors: Chen Guo, Heng Tang, Ben Niu
Abstract:
Clustering splits objects into different groups based on similarity, so that objects in the same group have higher similarity and objects in different groups have lower similarity. Thus, clustering can be treated as an optimization problem that maximizes intra-cluster similarity or inter-cluster dissimilarity. In real-world applications, datasets often have complex characteristics: sparsity, overlap, high dimensionality, etc. When facing such datasets, simultaneously optimizing two or more objectives can obtain better clustering results than optimizing a single objective. However, apart from objective-weighting methods, traditional clustering approaches have difficulty solving multi-objective data clustering problems. For this reason, researchers have investigated evolutionary multi-objective optimization algorithms for optimizing multiple clustering objectives. In this paper, the Data Clustering algorithm based on Multi-objective Periodic Bacterial Foraging Optimization with two Learning Archives (DC-MPBFOLA) is proposed. Specifically, first, to reduce the high computational complexity of the original BFO, periodic BFO is employed as the basic algorithmic framework and then extended to a multi-objective form. Second, two learning strategies are proposed, based on the two learning archives, to guide the bacterial swarm in a better direction: on the one hand, the global best is selected from the global learning archive according to a convergence index and a diversity index; on the other hand, the personal best is selected from the personal learning archive according to the sum of weighted objectives. A chemotaxis operation is designed according to these learning strategies. Third, an elite learning strategy is designed to provide fresh impetus to the objects in the two learning archives.
When the objects in these two archives do not change for two consecutive iterations, randomly reinitializing one dimension of the objects prevents the proposed algorithm from falling into local optima. Fourth, to validate its performance, DC-MPBFOLA is compared with four state-of-the-art evolutionary multi-objective optimization algorithms and one classical clustering algorithm on the evaluation indexes of several datasets. To further verify the effectiveness and feasibility of the designed strategies in DC-MPBFOLA, variants of DC-MPBFOLA are also proposed. Experimental results demonstrate that DC-MPBFOLA outperforms its competitors on all evaluation indexes and clustering partitions. These results also indicate that the designed strategies positively influence the performance improvement of the original BFO.
Keywords: data clustering, multi-objective optimization, bacterial foraging optimization, learning archives
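The archive-based selection and chemotaxis step described above can be sketched in a few lines. This is an illustrative toy, not the authors' DC-MPBFOLA implementation: the convergence/diversity index for the global best is stubbed out, and the personal-best selection by weighted objective sum uses made-up archive entries.

```python
import numpy as np

rng = np.random.default_rng(1)

def weighted_sum(objs, w=(0.5, 0.5)):
    """Personal-best selection rule: sum of weighted objectives (minimization)."""
    return w[0] * objs[0] + w[1] * objs[1]

def chemotaxis_step(pos, pbest, gbest, step=0.1):
    """One chemotaxis move pulled toward both archive leaders, plus tumble noise."""
    direction = (pbest - pos) + (gbest - pos) + rng.normal(0, 0.05, pos.shape)
    return pos + step * direction / (np.linalg.norm(direction) + 1e-12)

# Toy archives: each entry is (candidate cluster-centre encoding, objective pair)
personal_archive = [(np.array([0.2, 0.8]), (0.30, 0.40)),
                    (np.array([0.3, 0.7]), (0.25, 0.50))]
global_archive = [np.array([0.25, 0.75]), np.array([0.4, 0.6])]

pbest = min(personal_archive, key=lambda e: weighted_sum(e[1]))[0]
gbest = global_archive[0]   # stand-in for the convergence/diversity-index pick
new_pos = chemotaxis_step(np.array([0.5, 0.5]), pbest, gbest)
print(new_pos)
```

In the real algorithm the position encodes a full set of cluster centres, both objectives are computed from the data partition, and the periodic framework decides when chemotaxis, reproduction, and elimination-dispersal are applied.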
Procedia PDF Downloads 139
332 Coulomb-Explosion Driven Proton Focusing in an Arched CH Target
Authors: W. Q. Wang, Y. Yin, D. B. Zou, T. P. Yu, J. M. Ouyang, F. Q. Shao
Abstract:
The high-energy-density state, i.e., matter and radiation at energy densities in excess of 10^11 J/m^3, is relevant to materials science, nuclear physics, astrophysics, and geophysics. Laser-driven particle beams are well suited to heating matter as a trigger, owing to their unique properties of ultrashort duration and low emittance. Compared to X-ray and electron sources, proton and ion beams more easily generate uniformly heated large-volume material because of their highly localized energy deposition. With the construction of state-of-the-art high-power laser facilities, creating extreme conditions of high temperature and high density in laboratories has become possible. It has been demonstrated that, on a picosecond time scale, solid-density material can be isochorically heated to over 20 eV by an ultrafast proton beam generated from spherically shaped targets. For this technique, the proton energy density plays a crucial role in the formation of warm dense matter states. Recently, several methods have been devoted to realizing the focusing of the accelerated protons, involving externally applied static fields or specially designed targets interacting with single or multiple laser pulses. In previous works, two co-propagating or counter-propagating laser pulses have been employed to strike a submicron plasma shell. However, ultra-high pulse intensities, accurate temporal synchronization, and long-lived undesirable transverse instabilities remain intractable for current experimental implementations. Here, a mechanism for focusing laser-driven proton beams from two-ion-species arched targets is investigated by multi-dimensional particle-in-cell simulations. When an intense linearly polarized laser pulse impinges on the thin arched target, all electrons are completely evacuated, leading to a Coulomb-explosive electric field originating mostly from the heavier carbon ions.
In a reference frame moving at the ion sound speed, the lighter protons are accelerated and effectively focused by this radially isotropic field. At a laser intensity of 2.42×10^21 W/cm^2, a ballistic proton bunch with an energy density as high as 2.15×10^17 J/m^3 is produced, and the highest proton energy and the focusing position agree well with theoretical predictions.
Keywords: Coulomb explosion, focusing, high-energy-density, ion acceleration
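The Coulomb-explosive field described above can be sketched from Gauss's law. The following relations are an illustrative aside, not notation taken from the paper; the shell geometry and symbols are our own shorthand:

```latex
% Radial field of a fully electron-evacuated ion distribution (illustrative):
E_r(r) = \frac{1}{4\pi\varepsilon_0}\,\frac{Q_{\mathrm{enc}}(r)}{r^{2}},
\qquad
Q_{\mathrm{enc}}(r) = \int_{0}^{r} \rho_{\mathrm{ion}}(r')\, 4\pi r'^{2}\, dr'
```

With the electrons evacuated, the enclosed charge is dominated by the heavier carbon ions, so the lighter protons ride a radially directed field whose gradient both accelerates and focuses them.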
Procedia PDF Downloads 344
331 Winkler Springs for Embedded Beams Subjected to S-Waves
Authors: Franco Primo Soffietti, Diego Fernando Turello, Federico Pinto
Abstract:
Shear waves that propagate through the ground impose deformations that must be taken into account in the design and assessment of buried longitudinal structures such as tunnels, pipelines, and piles. Conventional engineering approaches for seismic evaluation often rely on an Euler-Bernoulli beam model supported by a Winkler foundation. This approach, however, falls short in capturing the distortions induced when the structure is subjected to shear waves. To overcome these limitations, the present work proposes an analytical solution based on a Timoshenko beam and including transverse and rotational springs. The ground springs are derived as closed-form analytical solutions of the equations of elasticity, including the effect of the seismic wavelength; the proposed springs thereby extend the applicability of previous plane-strain models. By considering variations in displacements along the longitudinal direction, the presented approach ensures that the springs do not approach zero at low frequencies. This characteristic makes them suitable for assessing pseudo-static cases, which typically govern structural forces in kinematic interaction analyses. 
The results obtained, validated against existing literature and a 3D Finite Element model, reveal several key insights: i) the cutoff frequency significantly influences transverse and rotational springs; ii) neglecting displacement variations along the structure axis (i.e., assuming plane-strain deformation) results in unrealistically low transverse springs, particularly for wavelengths shorter than the structure length; iii) disregarding lateral displacement components in rotational springs and neglecting variations along the structure axis leads to inaccurately low spring values, misrepresenting interaction phenomena; iv) transverse springs exhibit a notable drop at the resonance frequency, followed by increasing damping as frequency rises; v) rotational springs show minor frequency-dependent variations, with radiation damping occurring beyond resonance frequencies, starting from negative values. This comprehensive analysis sheds light on the complex behavior of embedded longitudinal structures subjected to shear waves and provides valuable insights for seismic assessment.
Keywords: shear waves, Timoshenko beams, Winkler springs, soil-structure interaction
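For readers unfamiliar with the underlying beam model, a hedged sketch of the governing equations of a Timoshenko beam resting on transverse and rotational Winkler springs, driven by a free-field S-wave displacement, follows. The notation is ours and not necessarily the authors':

```latex
% Timoshenko beam on a two-parameter Winkler foundation (illustrative):
\kappa G A \left( \frac{d^{2}w}{dx^{2}} - \frac{d\varphi}{dx} \right)
  - k_{t}\,\bigl(w - w_{g}(x)\bigr) = 0, \qquad
E I \frac{d^{2}\varphi}{dx^{2}}
  + \kappa G A \left( \frac{dw}{dx} - \varphi \right)
  - k_{r}\,\varphi = 0
```

Here $w$ is the transverse displacement, $\varphi$ the cross-section rotation, $\kappa G A$ the shear rigidity, $EI$ the bending rigidity, $w_g(x)$ the imposed ground displacement, and $k_t$, $k_r$ the transverse and rotational spring constants whose frequency dependence the paper derives.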
Procedia PDF Downloads 61
330 A Theoretical Approach of Tesla Pump
Authors: Cristian Sirbu-Dragomir, Stefan-Mihai Sofian, Adrian Predescu
Abstract:
This paper aims to study Tesla pumps for circulating biofluids; the goal is a small pump for biofluid circulation. This type of pump is studied because it has the following characteristics: it has no blades, which results in very low friction; reduced friction forces; low production cost; increased adaptability to different types of fluids; low cavitation (close to zero); low shock loads due to the lack of blades; rare maintenance due to low cavitation; very small turbulence in the fluid; few changes in the direction of the fluid (compared to bladed rotors); increased efficiency at low powers; fast acceleration; a low torque requirement; and no blade shocks at sudden starts and stops. All these elements are necessary to make a pump small enough to be inserted into the thoracic cavity. The pump will be designed to combat myocardial infarction. Because the pump must be inserted in the thoracic cavity, elements such as low friction forces, minimal shocks, low cavitation, and minimal maintenance are very important. The operation should be performed once, without having to replace the rotor after a certain time. Given the very small size of the pump, the blades of a classic rotor would be very thin, and sudden starts and stops could cause considerable damage or require a very expensive material. At the same time, being a medical procedure, low cost is important so that it is easily accessible to the population. The absence of the turbulence and vortices caused by a classic rotor is again a key element: when it comes to blood circulation, the flow must be laminar, not turbulent, as turbulent flow can even cause a heart attack. Due to these aspects, Tesla's model could be ideal for this work. Usually, the pump is considered to reach an efficiency of 40%, being used for very high powers. 
However, the inventor of this type of pump claimed that the maximum efficiency it can achieve is 98%. The key element that could help approach this efficiency is the fact that the pump will be used for low volumes and pressures. The parameters that most affect efficiency in this model are the number of rotor discs placed in parallel and the distance between them. The inter-disc distance must be small, which also helps keep the pump as small as possible. The principle of operation of such a rotor is to stack several parallel discs with central cut-outs; the space between the discs creates a suction effect, pulling the liquid through the holes in the rotor and throwing it outwards. A very important parameter is the viscosity of the liquid: it dictates the distance between the discs required to transfer power with minimal losses.
Keywords: lubrication, temperature, tesla-pump, viscosity
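The link between viscosity and disc spacing can be illustrated with a commonly cited heuristic for boundary-layer (Tesla-type) rotors: the optimal gap is on the order of the laminar boundary-layer thickness, roughly sqrt(ν/ω). The following sketch uses assumed, illustrative values for blood properties and rotor speed; neither the heuristic's exact prefactor nor the numbers come from this abstract:

```python
import math

def optimal_disc_gap(nu, rpm):
    """Heuristic inter-disc gap for a Tesla (boundary-layer) rotor:
    on the order of the laminar boundary-layer thickness sqrt(nu/omega).
    nu: kinematic viscosity [m^2/s]; rpm: rotor speed [rev/min]."""
    omega = 2.0 * math.pi * rpm / 60.0  # angular speed [rad/s]
    return math.sqrt(nu / omega)         # gap estimate [m]

# Illustrative values (assumptions, not from the abstract):
# blood: dynamic viscosity ~3.5 mPa*s, density ~1060 kg/m^3
nu_blood = 3.5e-3 / 1060.0
gap = optimal_disc_gap(nu_blood, rpm=5000)  # order of tens of microns
```

The micron-scale result is consistent with the abstract's point that the gap must be small, which in turn keeps the overall rotor compact.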
Procedia PDF Downloads 179
329 High Strength, High Toughness Polyhydroxybutyrate-Co-Valerate Based Biocomposites
Authors: S. Z. A. Zaidi, A. Crosky
Abstract:
Biocomposites have gained much scientific attention due to the current substantial consumption of non-renewable resources and the environmentally harmful disposal methods required for traditional polymer composites. Research on natural fiber reinforced polyhydroxyalkanoates (PHAs) has gained considerable momentum over the past decade, yet there is little work on PHAs reinforced with unidirectional (UD) natural fibers and little work on using epoxidized natural rubber (ENR) as a toughening agent for PHA-based biocomposites. In this work, we prepared polyhydroxybutyrate-co-valerate (PHBV) biocomposites reinforced with UD 30 wt.% flax fibers and evaluated the use of ENR with 50% epoxidation (ENR50) as a toughening agent for PHBV biocomposites. Quasi-unidirectional flax/PHBV composites were prepared by hand layup and powder impregnation, followed by compression molding. The toughening agents, polybutylene adipate-co-terephthalate (PBAT) and ENR50, were cryogenically ground into powder and mechanically mixed with the main PHBV matrix to suit the powder impregnation process. The tensile, flexural, and impact properties of the biocomposites were measured, and the morphology of the composites was examined using optical microscopy (OM) and scanning electron microscopy (SEM). The UD biocomposites showed exceptionally high mechanical properties compared with results obtained previously where only short fibers were used. The improved tensile and flexural properties were attributed to the continuous nature of the fiber reinforcement and the increased proportion of fibers in the loading direction. The improved impact properties were attributed to a larger surface area for fiber-matrix debonding and for subsequent sliding and fiber pull-out mechanisms to act on, allowing more energy to be absorbed. 
Coating cryogenically ground ENR50 particles with PHBV powder successfully inhibits the self-healing nature of ENR50, preventing the particles from coalescing and overcoming problems in mechanical mixing, compounding, and molding. Cryogenic grinding, followed by powder impregnation and subsequent compression molding, is an effective route to the production of high-mechanical-property biocomposites based on renewable resources for high-obsolescence applications such as plastic casings for consumer electronics.
Keywords: natural fibers, natural rubber, polyhydroxyalkanoates, unidirectional
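Why continuous, aligned fibers raise stiffness so sharply can be seen from the longitudinal rule of mixtures (Voigt bound) for a UD composite. The sketch below is purely illustrative: the moduli are typical literature figures for flax and PHBV, not measurements from this study, and the 30 wt.% loading is treated as a volume fraction for simplicity:

```python
def rule_of_mixtures(Ef, Em, Vf):
    """Longitudinal modulus of a unidirectional composite (Voigt bound):
    E1 = Vf*Ef + (1 - Vf)*Em.
    Ef, Em: fiber and matrix moduli [GPa]; Vf: fiber volume fraction."""
    return Vf * Ef + (1.0 - Vf) * Em

# Assumed typical values (not from this study):
E_flax, E_phbv = 50.0, 3.5          # GPa
E1 = rule_of_mixtures(E_flax, E_phbv, Vf=0.30)  # ~17.45 GPa
```

Because the fibers carry load directly along their axis, the composite modulus scales linearly with fiber fraction, whereas short, randomly oriented fibers contribute only a fraction of their axial stiffness.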
Procedia PDF Downloads 290
328 Site Suitability of Offshore Wind Energy: A Combination of Geographic Referenced Information and Analytic Hierarchy Process
Authors: Ayat-Allah Bouramdane
Abstract:
Power generation from offshore wind does not emit carbon dioxide or other air pollutants and therefore plays a role in reducing greenhouse gas emissions from the energy sector. In addition, these systems are considered more efficient than onshore wind farms, as they generate electricity from the wind blowing across the sea, where wind speeds are higher and the direction more consistent owing to the absence of physical interference from land or human-made objects. This means offshore installations require fewer turbines to produce the same amount of energy as onshore wind farms. However, offshore wind farms require more complex supporting infrastructure and, as a result, are more expensive to construct. In addition, higher wind speeds, strong seas, and accessibility issues make offshore wind farms more challenging to maintain. This study uses a combination of Geographic Referenced Information (GRI) and the Analytic Hierarchy Process (AHP) to identify the most suitable sites for offshore wind farm development in Morocco, with a particular focus on the city of Dakhla. A range of environmental, socio-economic, and technical criteria is taken into account to solve this complex Multi-Criteria Decision-Making (MCDM) problem. Based on experts' knowledge, a pairwise comparison matrix is constructed at each level of the hierarchy, and fourteen sub-criteria belonging to the main criteria have been weighted to generate the site suitability of offshore wind plants and to distinguish unsuitable areas from areas of low, moderate, high, and very high suitability. We find that wind speed is the most decisive criterion in offshore wind farm development, followed by bathymetry, while proximity to facilities, sediment thickness, and the remaining parameters show much lower weightings, rendering technical parameters the most decisive in offshore wind farm development projects. 
We also discuss the potential of other marine renewable energy sources in Morocco, such as wave and tidal energy. The proposed approach and analysis can help decision-makers and can be applied to other countries to support the site selection process for offshore wind farms.
Keywords: analytic hierarchy process, dakhla, geographic referenced information, morocco, multi-criteria decision-making, offshore wind, site suitability
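The AHP weighting step described above can be sketched as follows: priority weights are extracted from a pairwise comparison matrix via its principal eigenvector, and Saaty's consistency ratio checks the experts' judgments. The 3x3 matrix below (wind speed vs. bathymetry vs. proximity to facilities) is a made-up illustration, not the study's actual fourteen-sub-criteria hierarchy:

```python
import numpy as np

def ahp_weights(A, iters=1000, tol=1e-10):
    """Priority weights from a pairwise comparison matrix via the
    principal eigenvector (power iteration), plus the consistency
    ratio CR = CI / RI with Saaty's random index RI."""
    n = A.shape[0]
    w = np.ones(n) / n
    for _ in range(iters):
        w_new = A @ w
        w_new /= w_new.sum()
        done = np.abs(w_new - w).max() < tol
        w = w_new
        if done:
            break
    lam = (A @ w / w).mean()              # principal eigenvalue estimate
    ci = (lam - n) / (n - 1)              # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
    return w, ci / ri

# Illustrative expert judgments (wind speed, bathymetry, proximity):
A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 3.0],
              [1/5., 1/3., 1.0]])
weights, cr = ahp_weights(A)   # weights sum to 1; CR < 0.1 is acceptable
```

With this matrix, wind speed receives the largest weight, mirroring the study's finding, and the consistency ratio falls below the conventional 0.1 acceptance threshold.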
Procedia PDF Downloads 157
327 Uniform and Controlled Cooling of a Steel Block by Multiple Jet Impingement and Airflow
Authors: E. K. K. Agyeman, P. Mousseau, A. Sarda, D. Edelin
Abstract:
During the cooling of hot metals by the circulation of water in canals formed by boring holes in the metal, the rapid phase change of the water due to the high initial temperature of the metal leads to a non-homogeneous distribution of the phases within the canals. The liquid phase dominates towards the entrance of the canal, while the gaseous phase dominates towards the exit. As a result of the different thermal properties of the two phases, the metal is not uniformly cooled. This poses a problem during the cooling of moulds, where a uniform temperature distribution is needed to ensure the integrity of the part being formed. In this study, the simultaneous use of multiple water jets and an airflow for the uniform and controlled cooling of a steel block is investigated. A circular hole is bored at the centre of the steel block along its length, and a perforated steel pipe is inserted along the central axis of the hole. Water jets that impact the internal surface of the steel block are generated from the perforations in the steel pipe when the water within it is put under pressure. These jets are oriented in the direction opposite to gravity. An intermittent airflow is imposed in the annular space between the steel pipe and the surface of the hole bored in the steel block. The temporal evolution of the temperature of the external surface of the block is measured with thermocouples and an infrared camera. Due to the high initial temperature of the steel block (350 °C), the water changes phase when it impacts the internal surface of the block, leading to high heat fluxes. The strategy used to control the cooling speed of the block is intermittent impingement of its internal surface by the jets; the impingement and non-impingement intervals are varied to achieve the desired result. 
An airflow is used during the non-impingement periods as an additional regulator of the cooling speed and to improve the temperature homogeneity of the impinged surface. After testing different jet positions, jet speeds, and impingement intervals, it is observed that the external surface of the steel block has a uniform temperature distribution along its length. However, the temperature distribution along its width is not uniform, with the maximum temperature difference occurring between the centre of the block and its edge. Changing the positions of the jets has no significant effect on the temperature distribution on the external surface of the steel block. It is also observed that reducing the jet impingement interval and increasing the non-impingement interval slows down the cooling of the block and improves the temperature homogeneity of its external surface, while increasing the duration of jet impingement speeds up the cooling process.
Keywords: cooling speed, homogeneous cooling, jet impingement, phase change
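The duty-cycle idea behind intermittent impingement can be illustrated with a lumped-capacitance sketch: the block sees a high heat-transfer coefficient while the jets are on and a low one while only air flows. Every parameter value below is an assumption chosen for illustration (heat-transfer coefficients, block mass, areas), not data from the study, and a lumped model ignores the spatial gradients the study measures:

```python
def block_temperature(t, T0=350.0, T_water=20.0, T_air=25.0,
                      h_jet=5000.0, h_air=50.0, period=10.0, duty=0.3,
                      m=50.0, c=490.0, area=0.5, dt=0.01):
    """Lumped-capacitance sketch of intermittent jet cooling.
    Jets impinge for a fraction `duty` of each `period` [s]; airflow
    cooling acts the rest of the time. Explicit Euler integration of
    m*c*dT/dt = -h*A*(T - T_inf). Returns temperature [deg C] at t [s].
    All parameter values are illustrative assumptions."""
    T, elapsed = T0, 0.0
    while elapsed < t:
        jets_on = (elapsed % period) < duty * period
        h, T_inf = (h_jet, T_water) if jets_on else (h_air, T_air)
        T += -h * area * (T - T_inf) / (m * c) * dt
        elapsed += dt
    return T
```

Lowering `duty` (shorter impingement, longer airflow intervals) slows the cooling, which is the control lever the abstract describes.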
Procedia PDF Downloads 125
326 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data
Authors: Huinan Zhang, Wenjie Jiang
Abstract:
Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery, and the research and utilization of the inner thermal core structure of tropical cyclones still pose challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. Firstly, the thermal core information along the pressure direction is comprehensively expressed through the maximum intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone. These images provide a coarse-grained, first-stage wind speed estimate. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network and fused with the coarse-grained features from the first stage to obtain the final two-stage wind speed estimate. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and ordinal regression loss is adopted in the second stage to replace traditional single-value regression. The selected tropical cyclones span 2012 to 2021 and are distributed in the North Atlantic (NA) region. The training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021. 
Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclones into three major classes: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes with this data.
Keywords: artificial intelligence, deep learning, data mining, remote sensing
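The maximum intensity projection step is straightforward to sketch: a 3-D brightness-temperature volume is collapsed along one axis by keeping the maximum value at each position. The toy volume and the assumption that the pressure levels form the first axis are ours, not details from the paper:

```python
import numpy as np

def max_intensity_projection(volume, axis=0):
    """Maximum intensity projection (MIP): collapse a 3-D
    brightness-temperature volume along `axis` by keeping the maximum.
    The (pressure, lat, lon) layout is an illustrative assumption."""
    return np.max(volume, axis=axis)

# Toy volume: 2 pressure levels x 3 latitudes x 4 longitudes
vol = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)
mip = max_intensity_projection(vol, axis=0)  # 2-D image, shape (3, 4)
```

The resulting 2-D image compresses the vertical thermal-core structure into a single coarse-grained view, which the first-stage network then consumes.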
Procedia PDF Downloads 63
325 Quantification of the Non-Registered Electrical and Electronic Equipment for Domestic Consumption and Enhancing E-Waste Estimation: A Case Study on TVs in Vietnam
Authors: Ha Phuong Tran, Feng Wang, Jo Dewulf, Hai Trung Huynh, Thomas Schaubroeck
Abstract:
The fast increase in volume and the complexity of its components have made waste electrical and electronic equipment (e-waste) one of the most problematic waste streams worldwide. Precise information on its size at the national, regional, and global levels has therefore been highlighted as a prerequisite for a proper management system. However, this is a very challenging task, especially in developing countries, where both a formal e-waste management system and the statistical data necessary for e-waste estimation, i.e., data on the production, sale, and trade of electrical and electronic equipment (EEE), are often lacking. Moreover, there is an inflow of non-registered EEE, which 'invisibly' enters the domestic EEE market and is then used for domestic consumption. The non-registered and (in most cases) illicit nature of this flow makes it difficult or even impossible to capture in any statistical system, so the e-waste generated from it is often uncounted in current e-waste estimations based on statistical market data. Therefore, this study focuses on enhancing e-waste estimation in developing countries and proposes a calculation pathway to quantify the magnitude of the non-registered EEE inflow. An advanced Input-Output Analysis model, the Sale-Stock-Lifespan model, has been integrated into the calculation procedure. In general, the Sale-Stock-Lifespan model helps improve the quality of the input data for modeling (i.e., it consolidates data to create a more accurate lifespan profile and models a dynamic lifespan to account for changes over time), thereby improving the quality of the e-waste estimation. To demonstrate the above objectives, a case study on televisions (TVs) in Vietnam has been employed. The results show that the amount of waste TVs in Vietnam has increased fourfold since 2000. This upward trend is expected to continue: in 2035, a total of 9.51 million TVs are predicted to be discarded. 
Moreover, estimation of the non-registered TV inflow shows that it contributed on average about 15% of the total TVs sold on the Vietnamese market over the period 2002 to 2013. To tackle potential uncertainties associated with the estimation models and input data, sensitivity analysis has been applied. The results show that both the waste estimate and the non-registered inflow estimate depend on two parameters: the number of TVs in use per household and the lifespan. In particular, with a 1% increase in the TV in-use rate, the average market share of the non-registered inflow in the period 2002-2013 increases by 0.95%. However, it decreases from 27% to 15% when the constant, unadjusted lifespan is replaced by the dynamic, adjusted lifespan. The effect of these two parameters on the amount of waste TVs generated each year is more complex and non-linear over time. To conclude, despite remaining uncertainty, this study is the first attempt to apply the Sale-Stock-Lifespan model to improve e-waste estimation in developing countries and to quantify the non-registered EEE inflow to domestic consumption. It can therefore be further improved in the future with more knowledge and data.
Keywords: e-waste, non-registered electrical and electronic equipment, TVs, Vietnam
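The sales-and-lifespan logic underlying such models can be sketched as a convolution: waste generated in a year is the sum of past sales weighted by the probability that a unit is discarded at the corresponding age. The Weibull shape/scale values and the toy sales series below are illustrative assumptions, not the study's fitted lifespan profile or Vietnamese market data:

```python
import math

def weibull_discard_prob(age, shape=2.0, scale=9.0):
    """P(unit is discarded during its `age`-th year of life), from a
    Weibull lifespan CDF. Shape/scale are illustrative, not fitted."""
    F = lambda t: 1.0 - math.exp(-((t / scale) ** shape))
    return F(age + 1) - F(age)

def waste_generated(sales, year_index, shape=2.0, scale=9.0):
    """Sales-lifespan sketch of the Sale-Stock-Lifespan idea:
    waste in `year_index` = sum over past years of sales weighted by
    the discard probability at the corresponding age."""
    return sum(sales[y] * weibull_discard_prob(year_index - y, shape, scale)
               for y in range(year_index + 1))

sales = [100, 120, 150, 180, 200, 230]  # toy units sold per year
waste_y5 = waste_generated(sales, 5)     # waste arising in year 5
```

Replacing the constant Weibull parameters with age- and year-dependent ones is, in spirit, the "dynamic lifespan" refinement the abstract credits with shrinking the non-registered share from 27% to 15%.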
Procedia PDF Downloads 246
324 Cross-Country Mitigation Policies and Cross Border Emission Taxes
Authors: Massimo Ferrari, Maria Sole Pagliari
Abstract:
Pollution is a classic example of an economic externality: the agents who produce it do not face direct costs from their emissions, so there are no direct economic incentives to reduce pollution. One way to address this market failure would be to tax emissions directly. However, because emissions are global, governments might instead find it optimal to wait and let foreign countries tax emissions, so that they can enjoy the benefits of lower pollution without facing the direct costs. In this paper, we first document the empirical relation between pollution and economic output with static and dynamic regression methods. We show that there is a negative relation between aggregate output and the stock of pollution (measured as the stock of CO₂ emissions). This relationship is also highly non-linear, increasing at an exponential rate. In the second part of the paper, we develop and estimate a two-country, two-sector model for the US and the euro area. With this model, we analyze how the public sector should respond to higher emissions and what direct costs these policies might entail. In the model, there are two types of firms: brown firms, which operate a polluting technology, and green firms. Brown firms also produce an externality, CO₂ emissions, which has detrimental effects on aggregate output. As brown firms do not face direct costs from polluting, they have no incentive to reduce emissions. Notably, emissions in our model are global: the stock of CO₂ in the economy affects all countries, independently of where it is produced. This simplified economy captures the main trade-off between emissions and production, generating a classic market failure. According to our results, the current level of emissions reduces output by between 0.4 and 0.75%. Notably, these estimates lie at the upper end of the distribution of those delivered by studies in the early 2000s. 
To address the market failure, governments should step in by introducing taxes on emissions. With the tax, brown firms pay a cost for polluting and thus face an incentive to move to green technologies. Governments, however, might also adopt a beggar-thy-neighbour strategy. Reducing emissions is costly, as it moves production away from the 'optimal' mix of brown and green technology. Because emissions are global, a government could simply wait for the other country to tackle climate change, reaping the benefits without facing any costs. We study how this strategic game unfolds and show three important results: first, cooperation is first-best optimal from a global perspective; second, countries face incentives to deviate from the cooperative equilibrium; third, tariffs on imported brown goods (the only retaliation policy in case of deviation from the cooperative equilibrium) are ineffective because the exchange rate moves to compensate. We finally study monetary policy when the costs of climate change rise and show that the monetary authority should react more strongly to deviations of inflation from its target.
Keywords: climate change, general equilibrium, optimal taxation, monetary policy
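The free-riding incentive at the heart of this strategic game can be shown with a stylised two-country abatement game rather than the paper's full DSGE model. The payoff numbers are arbitrary illustrative choices satisfying benefit < cost < 2*benefit, which is what creates the prisoner's-dilemma structure:

```python
def payoffs(a1, a2, benefit=3.0, cost=4.0):
    """Stylised two-country abatement game (not the paper's model):
    each country's abatement benefits BOTH countries by `benefit`,
    but its `cost` is private. a_i = 1 if country i abates, else 0."""
    total_abatement = a1 + a2
    return (benefit * total_abatement - cost * a1,
            benefit * total_abatement - cost * a2)

coop = payoffs(1, 1)        # both abate: globally first-best
defect = payoffs(0, 0)      # neither abates: the Nash outcome
free_ride = payoffs(0, 1)   # country 1 free-rides on country 2
```

Mutual abatement beats mutual inaction, yet each country earns strictly more by free-riding on the other's abatement, so cooperation is not self-enforcing, mirroring the paper's second result.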
Procedia PDF Downloads 160
323 Online Think–Pair–Share in a Third-Age Information and Communication Technology Course
Authors: Daniele Traversaro
Abstract:
Problem: Senior citizens have faced a challenging reality as a result of the strict public health measures designed to protect people from the COVID-19 outbreak, including the risk of social isolation due to difficulty integrating with technology. Never before have information and communication technology (ICT) skills been so essential for their everyday life. Although third-age ICT education and lifelong learning are widely supported by universities and governments, there is a lack of literature on which teaching strategy/methodology to adopt in an entirely online ICT course aimed at third-age learners. This contribution presents an application of the Think-Pair-Share (TPS) learning method in a third-age ICT virtual classroom, with an intergenerational approach to conducting online group labs and review activities. This collaborative strategy can help increase student engagement and promote active learning and online social interaction. Research Question: Is collaborative learning applicable and effective, in terms of student engagement and learning outcomes, for an entirely online third-age introductory ICT course? Methods: In the TPS strategy, a problem is posed by the teacher, students think about it individually, and then they work in pairs (or small groups) to solve the problem and share their ideas with the entire class. We performed four experiments in the ICT course of the University of the Third Age of Genova (University of Genova, Italy) on the Microsoft Teams platform. The study cohort consisted of 26 students over the age of 45. Data were collected through two online questionnaires, one at the end of the first activity and another at the end of the course, consisting of five and three closed-ended questions, respectively. 
The answers were on a Likert scale (from 1 to 4), except for two questions (which asked for the number of correct answers given individually and in groups) and a field for free comments/suggestions. Results: Groups perform better than individual students (with scores higher by an order of magnitude), and most students found it helpful to work in groups and interact with their peers. Insights: From these early results, it appears that TPS is applicable to an online third-age ICT classroom and useful for promoting discussion and active learning. Nevertheless, this experimentation has a number of limitations. First of all, the results highlight the need for more data to perform a statistical analysis that can determine the effectiveness of this methodology in terms of student engagement and learning outcomes, which is a future direction.
Keywords: collaborative learning, information technology education, lifelong learning, older adult education, think-pair-share
Procedia PDF Downloads 188
322 Analysis of Waterjet Propulsion System for an Amphibious Vehicle
Authors: Nafsi K. Ashraf, C. V. Vipin, V. Anantha Subramanian
Abstract:
This paper reports the design of a waterjet propulsion system for an amphibious vehicle based on the circulation distribution over the camber line for the sections of the impeller and stator. In contrast with conventional waterjet designs, the inlet duct is straight, so water enters parallel to and in line with the nozzle exit. The extended nozzle after the stator bowl makes the flow more axial, further improving thrust delivery. A waterjet works on the principle of volume flow rate through the system and, unlike the propeller, is an internal flow system. The major difference between the propeller and the waterjet occurs in the flow passing the actuator. Though a ducted propeller could constitute the equivalent of waterjet propulsion, in a realistic situation the nozzle area of the waterjet would be large in proportion to the inlet area and propeller disc area. Moreover, the flow rate through the impeller disc is controlled by the nozzle area. For these reasons, the waterjet design is based on pump systems rather than propellers, and it is therefore important to bring out the characteristics of the flow from this point of view. The analysis is carried out using computational fluid dynamics. The waterjet propulsion design adapts axial-flow pump design practice, and the performance analysis was done with a three-dimensional computational fluid dynamics (CFD) code. Given the varying environmental conditions, the need for high discharge and low head, and the space confinement of the given amphibious vehicle, an axial pump design is suitable. A major problem is the inlet velocity distribution: the large variation of velocity in the circumferential direction gives rise to heavy blade loading that varies with time. The cavitation criteria have also been taken into account as per hydrodynamic pump design practice. Generally, a waterjet propulsion system can be divided into the inlet, the pump, the nozzle, and the steering device. 
The pump further comprises an impeller and a stator. Analytical and numerical approaches, such as a RANSE solver, have been undertaken to understand the performance of the designed waterjet propulsion system. Unlike in the case of propellers, the analysis was based on head-flow curves together with efficiency and power curves. The modeling of the impeller is performed using a rigid body motion approach. The realizable k-ϵ model has been used for turbulence modeling. Appropriate boundary conditions are applied to the domain, and domain size and grid dependence studies are carried out.
Keywords: amphibious vehicle, CFD, impeller design, waterjet propulsion
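The choice of an axial-flow pump for a high-discharge, low-head duty can be motivated with the dimensionless pump specific speed. The duty-point numbers below are illustrative assumptions, not the vehicle's actual operating point; the rule of thumb that high specific speed (roughly above 2) favours axial machines is standard pump-selection practice:

```python
import math

def specific_speed(rpm, Q, H, g=9.81):
    """Dimensionless pump specific speed:
    N_s = omega * sqrt(Q) / (g*H)^0.75.
    rpm: shaft speed [rev/min]; Q: flow rate [m^3/s]; H: head [m].
    High N_s indicates an axial-flow design is appropriate."""
    omega = 2.0 * math.pi * rpm / 60.0
    return omega * math.sqrt(Q) / (g * H) ** 0.75

# Illustrative high-discharge, low-head duty point (assumed values):
Ns = specific_speed(rpm=1500, Q=0.5, H=8.0)  # well into the axial range
```

A radial (centrifugal) duty point with low flow and high head would, by the same formula, give a specific speed an order of magnitude lower.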
Procedia PDF Downloads 228
321 Bridging the Gap and Widening the Divide
Authors: Lerato Dixon, Thorsten Chmura
Abstract:
This paper explores whether ethnic identity in Zimbabwe leads to discriminatory behaviour and the degree to which a norm-based intervention can shift this behaviour. Social Identity Theory suggests that group identity can lead to favouritism towards the in-group and discrimination towards the out-group: agents derive higher utility from maintaining positive self-esteem by conforming with group behaviour. This paper focuses on the two majority ethnic groups in Zimbabwe, the Ndebele and the Shona, whose identities are synonymous with the languages spoken. Zimbabwe's history highlights how this identity formation took place: following independence, political parties became recognised as either Ndebele- or Shona-speaking. It is against this backdrop that this study investigates the degree to which a norm-based nudge can alter behaviour. The paper uses experimental methods to analyse discriminatory behaviour between two naturally occurring ethnic groups in Zimbabwe. In addition, we investigate whether social norm-based interventions can shift discriminatory behaviour, to understand if the divide between these two identity groups can be further widened or healed. Participants are randomly assigned into three groups to receive information regarding a social norm. We compare the effect of a proscriptive social norm-based intervention, stating what should not be done, and a prescriptive social norm, stating what should be done. Specifically, participants are shown either the socially appropriate (Heal) norm or the socially inappropriate (Divide) norm regarding interethnic marriages, or no norm-based intervention. Following the random assignment into intervention groups, participants take part in the Trust Game. We conjecture that discrimination will shift in accordance with the prevailing social norm. Instead, we find evidence of interethnic discriminatory behaviour. 
We also find that trust increases when interacting with Ndebele, Shona, and Zimbabwean participants following the Heal intervention. However, if the participant is Shona, the Heal intervention decreases trust toward in-group and Zimbabwean co-players. On the other hand, if the participant is Shona, the Divide treatment significantly increases trust toward Ndebele participants. In summary, we find evidence that norm-based interventions significantly change behaviour. However, the prescriptive norm-based intervention (Heal) decreases trust toward the in-group, out-group, and national identity group if the participant is Shona, thus having an adverse effect. In contrast, the proscriptive Divide treatment increases trust of Shona participants towards Ndebele co-players. We conclude that norm-based interventions can have a 'rebound' effect, altering behaviour in the opposite direction.
Keywords: discrimination, social identity, social norm-based intervention, zimbabwe
Procedia PDF Downloads 250
320 Combination of Plantar Pressure and Star Excursion Balance Test for Evaluation of Dynamic Posture Control on High-Heeled Shoes
Authors: Yan Zhang, Jan Awrejcewicz, Lin Fu
Abstract:
High-heeled shoes force the foot into a plantar flexion position, resulting in foot arch rising and disturbance of the articular congruence between the talus and the tibiofibular mortice, all of which may increase the challenge of balance maintenance. The plantar pressure distribution of the stance limb during the star excursion balance test (SEBT) contributes to the understanding of potential sources of reaching excursions in the SEBT. The purpose of this study is to evaluate dynamic posture control while wearing high-heeled shoes, using the SEBT in combination with plantar pressure measurement. Twenty healthy young females were recruited. Shoes of three heel heights were used: flat (0.8 cm), low (4.0 cm), and high (6.6 cm). The testing grid of the SEBT consists of three lines extending out at 120° from each other, defined as the anterior, posteromedial, and posterolateral directions. Participants were instructed to stand on their dominant limb with the heel in the middle of the testing grid and hands on hips, and to reach the non-stance limb as far as possible towards each direction. The distal portion of the reaching limb lightly touched the ground without shifting weight and was then returned to the starting position. The excursion distances were normalized to leg length. An insole plantar measurement system was used to record the peak pressure, contact area, and pressure-time integral of the stance limb. Results showed that normalized excursion distance decreased significantly as heel height increased. The changes in plantar pressure in the SEBT as heel height increased were more obvious in the medial forefoot (MF), medial midfoot (MM), and rearfoot areas. At the MF, the peak pressure and pressure-time integral of low and high shoes increased significantly compared with those of flat shoes, while the contact area decreased significantly as heel height increased.
At the MM, the peak pressure, contact area, and pressure-time integral of high and low shoes were significantly lower than those of flat shoes. To reduce posture instability, the stance limb plantar loading shifted to the medial forefoot. The findings of this study identify dynamic posture control deficits while wearing high-heeled shoes and the critical role of the medial forefoot in dynamic balance maintenance.
Keywords: dynamic posture control, high-heeled shoes, plantar pressure, star excursion balance test
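The two outcome measures above can be sketched numerically. The following is a minimal illustration; the function names, the 100 Hz sampling rate, and all values are hypothetical stand-ins, not the study's instrumentation or data:

```python
import numpy as np

def normalized_excursion(reach_cm, leg_length_cm):
    """Express an SEBT reach distance as a percentage of leg length."""
    return 100.0 * reach_cm / leg_length_cm

def pressure_time_integral(pressure_kpa, sample_rate_hz):
    """Trapezoidal integral of a sampled plantar pressure trace (kPa*s)."""
    dt = 1.0 / sample_rate_hz
    p = np.asarray(pressure_kpa, dtype=float)
    return float(np.sum((p[:-1] + p[1:]) * dt / 2.0))

# Example: a 72 cm anterior reach for an 85 cm leg, and a short pressure trace
reach_pct = normalized_excursion(72.0, 85.0)
pti = pressure_time_integral([0.0, 50.0, 120.0, 90.0, 10.0], 100.0)
```

A higher heel would be expected to lower `reach_pct` and shift the largest `pti` values toward the medial forefoot sensors, mirroring the trends reported above.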
Procedia PDF Downloads 134
319 Building Exoskeletons for Seismic Retrofitting
Authors: Giuliana Scuderi, Patrick Teuffel
Abstract:
The proven vulnerability of the existing social housing building heritage to natural or induced earthquakes requires the development of new design concepts and conceptual methods that preserve materials and objects while providing new performances. An integrated intervention between civil engineering, building physics and architecture can convert social housing districts from a critical part of the city into a strategic resource of revitalization. Referring to bio-mimicry principles, the present research proposes an analogy with the exoskeleton of the insect: an external, light and resistant armour whose role is to protect the internal organs from potentially dangerous external inputs. In the same way, a “building exoskeleton”, acting from the outside of the building as an enclosing cage, can restore, protect and support the existing building, assuming a complex set of roles, from the structural to the thermal, from the aesthetic to the functional. This study evaluates the structural efficiency of shape memory alloy devices (SMADs) connecting the “building exoskeleton” with the existing structure to be rehabilitated, in order to prevent the out-of-plane collapse of walls and to passively dissipate seismic energy, with an operability calibrated to the intensity of the horizontal loads. Two case studies are considered, a masonry structure and a masonry structure with a concrete frame; for each case, a theoretical social housing building is exposed to earthquake forces to evaluate its structural response with or without SMADs. The two typologies are modelled with the finite element program SAP2000, respectively through a “frame model” and a “diagonal strut model”.
In the same software, two types of SMADs, called the 00-10 SMAD and the 05-10 SMAD, are defined, and non-linear static and dynamic analyses, namely pushover analysis and time history analysis, are performed to evaluate the seismic response of the building. The effectiveness of the devices in limiting the control joint displacements proved higher in one direction, suggesting a possible calibrated use of the devices in the different walls of the building. The results also show a higher efficiency of the 00-10 SMADs in controlling the interstory drift, but at the same time the necessity to improve the hysteretic behaviour in order to maximise the passive dissipation of seismic energy.
Keywords: adaptive structure, biomimetic design, building exoskeleton, social housing, structural envelope, structural retrofitting
Procedia PDF Downloads 420
318 Ammonia Cracking: Catalysts and Process Configurations for Enhanced Performance
Authors: Frea Van Steenweghen, Lander Hollevoet, Johan A. Martens
Abstract:
Compared to other hydrogen (H₂) carriers, ammonia (NH₃) is one of the most promising, as it contains 17.6 wt% hydrogen. It is easily liquefied at ≈ 9–10 bar pressure at ambient temperature. More importantly, NH₃ is a carbon-free hydrogen carrier, with no CO₂ emission at final decomposition. Ammonia has a well-defined regulatory framework and a good track record regarding safety. Furthermore, industry already has an existing transport infrastructure of pipelines, tank trucks and shipping technology, as ammonia has been manufactured and distributed around the world for over a century. While NH₃ synthesis and transportation solutions are at hand, a missing link in the hydrogen delivery scheme from ammonia is an energy-lean and efficient technology for cracking ammonia into H₂ and N₂. The most explored option for ammonia decomposition is thermo-catalytic cracking, which is the most energy-lean and robust option compared to alternatives such as plasma- and electrochemistry-based decomposition. The decomposition reaction is favoured only at high temperatures (> 300°C) and low pressures (1 bar), as the thermocatalytic ammonia cracking process faces thermodynamic limitations. At 350°C, the thermodynamic equilibrium at 1 bar pressure limits the conversion to 99%. Gaining additional conversion up to e.g. 99.9% necessitates heating to ca. 530°C. However, fully reaching thermodynamic equilibrium is infeasible, as a sufficient driving force is needed, requiring even higher temperatures; limiting the conversion below the equilibrium composition is the more economical option. Thermocatalytic ammonia cracking is documented in the scientific literature. Among the investigated metal catalysts (Ru, Co, Ni, Fe, …), ruthenium is known to be the most active for ammonia decomposition, with an onset of cracking activity around 350°C. For establishing > 99% conversion, temperatures close to 600°C are required.
Such high temperatures are likely to reduce not only the round-trip efficiency but also the catalyst lifetime, because of sintering of the supported metal phase. In this research, the first focus was on catalyst bed design, avoiding diffusion limitation. Experiments in our packed bed tubular reactor set-up showed that extragranular diffusion limitations occur at low concentrations of NH₃ when reaching high conversion, a phenomenon often overlooked in experimental work. A second focus was thermocatalyst development for ammonia cracking, avoiding the use of noble metals. To this aim, candidate metals and mixtures were deposited on a range of supports. Sintering resistance at high temperatures and the basicity of the support were found to be crucial catalyst properties. The catalytic activity was promoted by adding alkali and alkaline earth metals. A third focus was studying the optimum process configuration by process simulation. A trade-off between conversion and favourable operational conditions (i.e. low pressure and high temperature) may lead to different process configurations, each with its own pros and cons. For example, high-pressure cracking would eliminate the need for post-compression but is detrimental to the thermodynamic equilibrium, leading to an optimum cracking pressure in terms of energy cost.
Keywords: ammonia cracking, catalyst research, kinetics, process simulation, thermodynamic equilibrium
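The thermodynamic limitation discussed above follows directly from the cracking stoichiometry NH₃ → ½N₂ + 3⁄2H₂. As an illustrative sketch (not the authors' process simulation), the code below solves for the equilibrium conversion x at a given total pressure from an assumed equilibrium constant Kp; the value Kp ≈ 64 is a made-up choice that happens to reproduce the ≈ 99% conversion at 1 bar cited above, not a tabulated constant:

```python
def kp_of_conversion(x, p_bar):
    """Kp for NH3 -> 1/2 N2 + 3/2 H2, starting from 1 mol NH3.

    At conversion x the mixture holds (1-x) NH3, x/2 N2, 3x/2 H2,
    i.e. (1+x) mol total; Kp = y_N2^0.5 * y_H2^1.5 / y_NH3 * P (Delta n = 1).
    """
    total = 1.0 + x
    y_nh3 = (1.0 - x) / total
    y_n2 = (x / 2.0) / total
    y_h2 = (3.0 * x / 2.0) / total
    return (y_n2 ** 0.5) * (y_h2 ** 1.5) / y_nh3 * p_bar

def equilibrium_conversion(kp, p_bar, tol=1e-10):
    """Bisection on the monotonically increasing kp_of_conversion."""
    lo, hi = 1e-9, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kp_of_conversion(mid, p_bar) < kp:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x_1bar = equilibrium_conversion(64.0, 1.0)    # ~0.99 at 1 bar
x_10bar = equilibrium_conversion(64.0, 10.0)  # lower: pressure disfavours cracking
```

Evaluating the same Kp at 10 bar gives a markedly lower conversion, which is exactly the pressure penalty driving the post-compression trade-off mentioned above.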
Procedia PDF Downloads 66
317 The Relationship between the Skill Mix Model and Patient Mortality: A Systematic Review
Authors: Yi-Fung Lin, Shiow-Ching Shun, Wen-Yu Hu
Abstract:
Background: A skill mix model is regarded as one of the most effective methods of reducing nursing shortages, as well as easing nursing staff workloads and labor costs. Although this model shows several benefits for the health workforce, the relationship between the optimal skill mix model and the patient mortality rate remains to be discovered. Objectives: This review aimed to explore the relationship between the skill mix model and the patient mortality rate in acute care hospitals. Data Sources: The researchers systematically searched the PubMed, Web of Science, Embase, and Cochrane Library databases and retrieved studies published between January 1986 and March 2022. Review methods: Two independent reviewers screened the titles and abstracts based on selection criteria, extracted the data, and performed critical appraisals of each included study using the STROBE checklist. The studies included in the analysis focused on adult patients in acute care hospitals and reported both the skill mix model and the patient mortality rate. Results: The six included studies were conducted in the USA, Canada, Italy, Taiwan, and European countries (Belgium, England, Finland, Ireland, Spain, and Switzerland), covering patients in medical, surgical, and intensive care units. Their skill mix teams comprised both nurses and nursing assistants. The main finding is that three studies (324,592 participants) show evidence of lower mortality rates in hospitals with a higher percentage of registered nurse staff (range 36.1%-100%), whereas three articles (1,122,270 participants) did not find the same result (range 46%-96%). However, based on the appraisal findings, all of the studies showing a significant association met good quality standards, but only one-third of their counterparts did. Conclusions: In light of the limited amount and quality of published research in this review, it is prudent to treat the findings with caution.
Although the evidence is of insufficient certainty to draw conclusions about the relationship between nurse staffing level and patient mortality, this review points out the direction for relevant studies in the future. The limitation of this article is the variation in skill mix models among countries and institutions, which made it impossible to perform a meta-analysis to compare them further.
Keywords: nurse staffing level, nursing assistants, mortality, skill mix
Procedia PDF Downloads 116
316 Imaging of Underground Targets with an Improved Back-Projection Algorithm
Authors: Alireza Akbari, Gelareh Babaee Khou
Abstract:
Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of shallow, small subsurface targets such as landmines and unexploded ordnance, and also for through-wall imaging in security applications. For a monostatic arrangement in the space-time GPR image, a single point target appears as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects reflectivity of the subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of the buried objects is essential in most GPR applications. Therefore, the hyperbolic behaviour in the space-time GPR image usually needs to be transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal in a typical GPR image is to display the information of the spatial location and the reflectivity of an underground object. Therefore, the main challenge of the GPR imaging technique is to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is first used for image reconstruction. The standard BP algorithm is limited against strong noise and artifacts, which have adverse effects on subsequent tasks such as target detection. Thus, an improved BP based on cross-correlation between the received signals is proposed for decreasing noise and suppressing artifacts.
To improve the quality of the results of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaging region. Compared to the standard BP algorithm, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to simulated and real GPR data, and the results showed that it offers superior artifact suppression and produces images with high quality and resolution. In order to quantitatively describe the effect of artifact suppression on the imaging results, a focusing parameter was evaluated.
Keywords: algorithm, back-projection, GPR, remote sensing
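A minimal sketch of the standard BP step described above (delay-and-sum over the hyperbolic two-way travel-time curve) might look as follows; the monostatic geometry, wave speed, and point-scatterer data model are illustrative assumptions, not the authors' setup or their cross-correlation refinement:

```python
import numpy as np

def back_project(bscan, antenna_x, t_axis, pixels_x, pixels_z, v):
    """Delay-and-sum BP: for each image pixel, accumulate the A-scan sample
    taken at the two-way travel time from each antenna position.

    bscan: (n_antennas, n_samples) space-time data; v: wave speed.
    """
    image = np.zeros((len(pixels_z), len(pixels_x)))
    dt = t_axis[1] - t_axis[0]
    for ia, xa in enumerate(antenna_x):
        for iz, z in enumerate(pixels_z):
            for ix, x in enumerate(pixels_x):
                t = 2.0 * np.hypot(x - xa, z) / v   # two-way travel time
                k = int(round((t - t_axis[0]) / dt))
                if 0 <= k < len(t_axis):
                    image[iz, ix] += bscan[ia, k]
    return image
```

Fed a synthetic B-scan containing the hyperbola of a single point scatterer, the summed image peaks at the scatterer's true position, while the hyperbola tails smear into the low-amplitude artifacts that the paper's weighted, cross-correlation-based variant is designed to suppress.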
Procedia PDF Downloads 452
315 Multi-Objectives Genetic Algorithm for Optimizing Machining Process Parameters
Authors: Dylan Santos De Pinho, Nabil Ouerhani
Abstract:
Energy consumption of machine-tools is becoming critical for machine-tool builders and end-users for economic, ecological and legislative reasons. Many machine-tool builders are seeking solutions that reduce the energy consumption of machine-tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-type lathe. We employ genetic algorithms to find optimal machining parameters – the set of parameters that leads to the best trade-off between energy consumption, part quality and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed and material feed rate. These parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions – objective functions that evaluate a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the fitness function related to energy consumption. Four different energy-consumption-related fitness functions have been investigated and compared. The first fitness function refers to the Kienzle cutting force model. The second uses the Material Removal Rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions: one uses a simple neural network to learn the relation between the process parameters and the energy consumption from experimental data, while the other uses Lasso regression to determine the same relation.
The goal is, then, to find out which fitness function best predicts the energy consumption of a Swiss-type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes – determining the optimal machining process parameters leading to minimum energy consumption. The performance of the four fitness functions has been evaluated. The Tornos DT13 Swiss-type lathe was used to carry out the experiments, and a mechanical part including various Swiss-type machining operations was selected for them. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand, each considering a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured, and all collected data are assigned to the appropriate CNC program and thus to the corresponding set of machining process parameters. The evaluation consists of calculating the correlation between the normalized measured power consumption and the normalized power consumption predicted by each of the four fitness functions. The evaluation shows that the Lasso and neural network fitness functions have the highest correlation coefficient, at 97%. The “Material Removal Rate” (MRR) fitness function has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%.
Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization
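The MRR-style fitness and the correlation check described above can be sketched as follows; the turning-style MRR formula (depth of cut × feed × cutting speed) and the example arrays are illustrative assumptions, not the project's actual models or measurements:

```python
import numpy as np

def mrr(depth_of_cut_mm, feed_mm_per_rev, cutting_speed_m_min):
    """Material removal rate (mm^3/min) for turning: a_p * f * v_c * 1000."""
    return depth_of_cut_mm * feed_mm_per_rev * cutting_speed_m_min * 1000.0

def normalized_correlation(predicted, measured):
    """Pearson correlation between the normalized prediction and measurement,
    as used to rank the candidate fitness functions."""
    p = (np.asarray(predicted) - np.mean(predicted)) / np.std(predicted)
    m = (np.asarray(measured) - np.mean(measured)) / np.std(measured)
    return float(np.corrcoef(p, m)[0, 1])

# Fabricated example: MRR predictions vs. (made-up) measured spindle power
params = [(1.0, 0.10, 100.0), (1.5, 0.12, 120.0), (2.0, 0.15, 140.0)]
predicted = [mrr(*p) for p in params]
measured = [1.1, 2.2, 4.5]          # stand-in power readings, kW
score = normalized_correlation(predicted, measured)
```

Ranking the four candidate fitness functions by `score` against the same measured series is, in essence, the evaluation the paper reports (97% for Lasso and the neural network, 90% for MRR, 80% for Kienzle).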
Procedia PDF Downloads 147
314 Governance Models of Higher Education Institutions
Authors: Zoran Barac, Maja Martinovic
Abstract:
Higher Education Institutions (HEIs) are a special kind of organization, with a unique purpose and combination of actors. From the societal point of view, they are central institutions involved in the activities of education, research, and innovation. At the same time, their societal function gives rise to complex relationships between the involved actors, ranging from students, faculty and administration, the business community and corporate partners, and government agencies, to the general public. HEIs are also particularly interesting as objects of governance research because of their unique public purpose and combination of stakeholders. Furthermore, they are a special type of institution from an organizational viewpoint: HEIs are often described as “loosely coupled systems” or “organized anarchies”, which implies the challenging nature of their governance models. Governance models of HEIs describe the roles, constellations, and modes of interaction of the involved actors in the process of strategic direction and holistic control of institutions, taking each particular context into account. Many HEI governance models are primarily based on the balance of power among the involved actors; besides the actors’ power and influence, leadership style and environmental contingency can also shape the governance model of an HEI. Analyzed through the frameworks of institutional and contingency theories, HEI governance models originate as outcomes of institutional and contingency adaptation. HEIs tend to fit an institutional context comprised of formal and informal institutional rules; by fitting the institutional context, HEIs converge toward each other in terms of their structures, policies, and practices. The contingency framework, on the other hand, implies that there is no governance model suitable for all situations.
Consequently, the contingency approach begins with identifying the contingency variables that might impact a particular governance model; in order to be effective, the governance model should fit those variables. While the institutional context creates converging forces on HEI governance actors and approaches, contingency variables cause divergence of actors’ behavior and governance models. Finally, an HEI governance model is a balanced adaptation of the HEI to the institutional context and contingency variables. It also encompasses the roles, constellations, and modes of interaction of involved actors influenced by institutional and contingency pressures. The actors’ adaptation to the institutional context brings benefits of legitimacy and resources, while their adaptation to contingency variables brings high performance and effectiveness. The HEI governance models outlined and analyzed in this paper are the collegial, bureaucratic, entrepreneurial, network, professional, political, anarchical, cybernetic, trustee, stakeholder, and amalgam models.
Keywords: governance, governance models, higher education institutions, institutional context, situational context
Procedia PDF Downloads 336
313 Festival Gamification: Conceptualization and Scale Development
Authors: Liu Chyong-Ru, Wang Yao-Chin, Huang Wen-Shiung, Tang Wan-Ching
Abstract:
Although gamification has attracted attention and been applied in the tourism industry, limited literature can be found in tourism academia. Therefore, to contribute knowledge on festival gamification, it is essential to start by establishing a Festival Gamification Scale (FGS). This study defines festival gamification as the extent to which a festival involves game elements and game mechanisms. Based on self-determination theory, this study developed an FGS through a multi-study method. In study one, five FGS dimensions were derived from a literature review, followed by twelve in-depth interviews. A total of 296 statements were extracted from the interviews and later narrowed down to 33 items under six dimensions. In study two, 226 survey responses were collected from a cycling festival for exploratory factor analysis, resulting in twenty items under five dimensions. In study three, 253 survey responses were obtained from a marathon festival for confirmatory factor analysis, resulting in the final sixteen items under five dimensions. Results of criterion-related validity then confirmed the positive effects of these five dimensions on flow experience. In study four, to examine the model extension of the developed five-dimensional, 16-item FGS – which comprises the dimensions of relatedness, mastery, competence, fun, and narratives – cross-validation analysis was performed using 219 survey responses from a religious festival. For the tourism academy, the FGS could further be applied in other sub-fields such as destinations, theme parks, cruise trips, or resorts. The FGS serves as a starting point for examining the mechanism of festival gamification in changing tourists’ attitudes and behaviors. Future studies could follow up on the FGS by testing outcomes of festival gamification or examining moderating effects that enhance those outcomes.
On the other hand, although the FGS has been tested in cycling, marathon, and religious festivals, the research settings are all in Taiwan; cultural differences in the FGS are another direction for contributing knowledge on festival gamification. This study also offers several valuable practical implications. First, the FGS could be utilized in tourist surveys to evaluate the extent of gamification of a festival. Based on the results of a performance assessment with the FGS, festival management organizations and festival planners could learn the relative scores among FGS dimensions and plan future improvements in gamifying the festival. Second, the FGS could be applied in positioning a gamified festival: festival management organizations and planners could first consider the features and type of their festival, and then gamify it by investing resources in the key FGS dimensions.
Keywords: festival gamification, festival tourism, scale development, self-determination theory
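As an illustration of the kind of internal-consistency check that typically accompanies the EFA/CFA steps described above, the sketch below computes Cronbach's alpha for an item battery; the function and the fabricated Likert-response matrix are illustrative, not the FGS data or the authors' exact procedure:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

# Fabricated 5-point responses from 4 respondents to 3 items of one dimension
responses = np.array([[4, 5, 4],
                      [3, 3, 4],
                      [5, 5, 5],
                      [2, 3, 2]])
alpha = cronbach_alpha(responses)  # values near 1 indicate consistent items
```

Items that load together in the EFA step would be expected to yield a high alpha; a low value flags an item for removal, mirroring the winnowing from 33 to 16 items reported above.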
Procedia PDF Downloads 147