Search results for: body-fitted coordinate
56 Dynamic Response around Inclusions in Infinitely Inhomogeneous Media
Authors: Jinlai Bian, Zailin Yang, Guanxixi Jiang, Xinzhu Li
Abstract:
Elastic wave propagation in inhomogeneous media is a classic problem. Because earthquakes occur frequently and cause heavy economic losses and casualties, this paper studies the dynamic response around a circular inclusion in a whole space with an inhomogeneous modulus: the shear modulus of the medium varies with spatial position while the density remains constant. The method can be applied to problems such as buried underground pipelines. Stress concentration phenomena are common in aerospace and earthquake engineering, and the dynamic stress concentration factor (DSCF) is one of the main factors leading to material damage; one important application of elastodynamic theory is determining the stress concentration in bodies with discontinuities such as cracks, holes, and inclusions. Available methods include the wave function expansion method, the integral transformation method, and the integral equation method. Based on the complex function method, the Helmholtz equation with variable coefficients is standardized using the conformal transformation and wave function expansion methods; the displacement and stress fields in a whole space with a circular inclusion are solved in the complex coordinate system, and the unknown coefficients are determined from the boundary conditions. The correctness of the method is verified by comparison with existing results. Owing to the power of complex variable theory for conformal transformation, the method can be extended to study inclusions of arbitrary shape. By solving for the dynamic stress concentration factor around the inclusion, the influence of the inhomogeneity parameters of the medium and of the wavenumber ratio between inclusion and matrix on the dynamic stress concentration factor is analyzed.
The research results can provide a reference for nondestructive testing (NDT), oil exploration, seismic monitoring, and soil-structure interaction.
Keywords: circular inclusions, complex variable function, dynamic stress concentration factor (DSCF), inhomogeneous medium
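As a sketch of the governing equation being standardized (the anti-plane SH-wave setting and the notation below are assumptions for illustration; the abstract only states that the shear modulus varies with position while the density is constant):

```latex
% Anti-plane (SH) motion w(x, y) with position-dependent shear modulus
% \mu(x, y), constant density \rho, and time-harmonic factor e^{-i\omega t}:
\nabla \cdot \left( \mu(x, y)\, \nabla w \right) + \rho\, \omega^{2} w = 0
% For constant \mu this reduces to the standard Helmholtz equation
%   \nabla^{2} w + k^{2} w = 0, \qquad k^{2} = \rho\, \omega^{2} / \mu ,
% while a variable \mu leaves variable coefficients that the conformal map
% z = f(\zeta) and the wave function expansion are used to standardize.
```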
Procedia PDF Downloads 136
55 Walk the Line: Public Space and the Essence of Perception, a Case Study of a Beachfront Promenade, Durban, South Africa
Authors: C. Greenstone, R. Hansmann, L. Mbandla, J. Houghton, G. Lincoln
Abstract:
All beach areas in South Africa are constituted as public land, open to all to walk on or to swim in. With the completion of the Durban promenade in 2023, from the Umgeni estuary to the harbour entrance, Durban’s beachfront promenade is a notable example of innovative urban design that has transformed formerly segregated spaces into a paved public walkway linking over 15 beaches. Public spaces, however, are not all created equal, and ideas about how individuals and groups become the producers of space can be a useful tool for designing future public spaces. It is the role of planners, architects, and other built environment specialists to ensure inclusivity and symbiosis between how spaces are separated and then created. Lefebvre’s ideas about space and social practice provide a foundation for this research, specifically on how spaces like the Durban promenade function as democratic spaces; questions about what draws individuals and groups to certain areas of the promenade are therefore important. Some public spaces are well designed, accessible, and inclusive, while others create more contestation amongst users. In this research article, Durban’s beachfront promenade is used as a case study to better understand the creation of public spaces, more specifically who uses the beachfront promenade and for what reasons. The research adopts a phenomenological and descriptive approach to understanding place by exploring the human experiences, emotions, and meanings individuals attach to the beachfront promenade, and how social processes and interactions shape our understanding, experience, and meaning of places. Both qualitative and quantitative data will be collected at specific nodes along the beachfront promenade to determine the number of visitors by demographics such as age, race, and gender, and the perceptions visitors hold of these nodes.
The aim is to use surveys and visitor observations to better understand the perceptions people have of these spaces, for example, the rationale for utilising a space that already encompasses activities ranging from the cultural, social, economic, environmental, spiritual, and physical to numerous others.
Keywords: perceptions of space, social practice, identity, urban planning, public space
Procedia PDF Downloads 14
54 Quantification of the Erosion Effect on Small Caliber Guns: Experimental and Numerical Analysis
Authors: Dhouibi Mohamed, Stirbu Bogdan, Chabotier André, Pirlot Marc
Abstract:
Effects of erosion and wear on the performance of small caliber guns have been analyzed through numerical and experimental studies, but mainly qualitative observations have been made; correlations between the volume change of the chamber and the maximum pressure remain limited. This paper focuses on the development of a numerical model to predict the evolution of the maximum pressure as the interior shape of the chamber changes over the weapon’s life. To this end, an experimental campaign followed by a numerical simulation study is carried out. Two test barrels, 5.56x45mm NATO and 7.62x51mm NATO, are considered. First, a Coordinate Measuring Machine (CMM) with a contact scanning probe is used to measure the interior profile of the barrels after each 300-shot cycle until they are worn out. Simultaneously, the EPVAT (Electronic Pressure Velocity and Action Time) method with a WEIBEL radar is used to measure (i) the chamber pressure, (ii) the action time, and (iii) the bullet velocity for each barrel. Second, a numerical simulation study is carried out: a coupled interior ballistic model is developed using the dynamic finite element program LS-DYNA. Two different models are elaborated: (i) a coupled Eulerian-Lagrangian model using fluid-structure interaction (FSI) techniques, and (ii) a coupled thermo-mechanical finite element model using a lumped parameter model (LPM) as a subroutine. These numerical models are validated against three experimental results: (i) the muzzle velocity, (ii) the chamber pressure, and (iii) the surface morphology of fired projectiles. Results show good agreement between experiments and numerical simulations. Next, the two models are compared: the projectile motions, the dynamic engraving resistances, and the maximum pressures are analyzed.
Finally, using the database thus obtained, a statistical correlation between the muzzle velocity, the maximum pressure, and the chamber volume is established.
Keywords: engraving process, finite element analysis, gun barrel erosion, interior ballistics, statistical correlation
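The kind of statistical correlation the study establishes between chamber volume, maximum pressure, and muzzle velocity can be sketched with a plain Pearson coefficient; all numbers below are hypothetical wear data, not the paper's measurements:

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical wear data: as the chamber volume grows with erosion,
# maximum pressure and muzzle velocity are expected to drop.
chamber_volume_mm3 = [1640, 1652, 1668, 1685, 1703]   # after each 300-shot cycle
max_pressure_mpa   = [372, 365, 355, 344, 331]
muzzle_velocity_ms = [921, 917, 910, 902, 893]

print(round(pearson(chamber_volume_mm3, max_pressure_mpa), 3))
print(round(pearson(chamber_volume_mm3, muzzle_velocity_ms), 3))
```

With monotone data like this, both coefficients come out strongly negative, which is the shape of relationship the abstract's correlation study looks for.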
Procedia PDF Downloads 217
53 Wireless Gyroscopes for Highly Dynamic Objects
Authors: Dmitry Lukyanov, Sergey Shevchenko, Alexander Kukaev
Abstract:
Modern MEMS gyroscopes have strengthened their position in motion control systems and have led to the creation of tactical-grade sensors (better than 15 deg/h). This was achieved by virtue of the success of micro- and nanotechnology development, cooperation among international experts, and the experience gained in the mass production of MEMS gyros. This production is knowledge-intensive, often unique, and therefore difficult to develop, especially because of the use of 3D technology. The latter is usually associated with the manufacturing of inertial masses and their elastic suspension, which determines the vibration and shock resistance of the gyros. Today, consumers developing highly dynamic objects, or objects working under extreme conditions, require gyro shock resistance of up to 65,000 g and a measurement range of more than 10,000 deg/s. Such characteristics can be achieved by solid-state gyroscopes (SSG) without inertial masses or elastic suspensions, which, for example, can be constructed around the kinetics of bulk or surface acoustic waves (SAW). The excellent manufacturability of these sensors and their high level of structural integration provide a basis for increased accuracy, size reduction, and a significant drop in total production cost. Existing principles of SAW-based sensors rest on the theory of SAW propagation in rotating coordinate systems; a short introduction to the theory of the gyroscopic (Coriolis) effect in SAW is provided in the report. Nowadays, more and more applications require passive and wireless sensors, and SAW-based gyros provide an opportunity to create one. Several design concepts incorporating reflective delay lines were proposed in recent years but faced some criticism. Still, the concept is promising and is of interest at St. Petersburg Electrotechnical University, where several experimental models were developed and tested to find the minimal configuration of a passive, wireless SAW-based gyro.
Structural schemes, potential characteristics, and known limitations are stated in the report. Special attention is dedicated to a novel method of FEM modeling in which the piezoelectric and gyroscopic effects are taken into account simultaneously.
Keywords: FEM simulation, gyroscope, OOFELIE, surface acoustic wave, wireless sensing
Procedia PDF Downloads 367
52 Raising the Property Provisions of the Topographic Located near the Locality of Gircov, Romania
Authors: Carmen Georgeta Dumitrache
Abstract:
Terrestrial measurement science studies the set of field operations and computations carried out to represent the land surface on a plan or map in a specific cartographic projection and at a topographic scale. Measurement techniques have evolved with society; they serve both a utilitarian goal tied to economic activity and a scientific goal of determining the form and dimensions of the Earth. Field measurement, data processing, and the proper representation of the planimetry and landforms of the land on drawings and maps rely on topographic and geodetic instruments, calculation, and graphical reporting, which require theoretical and practical knowledge from several areas of science and technology. Using topographic and geodetic instruments correctly in practice, to measure angles and distances precisely, requires knowledge of geometric optics, precision mechanics, strength of materials, and more. Processing the results of the field measurements requires calculation methods based on geometry, trigonometry, algebra, mathematical analysis, and computer science. To illustrate topographic measurement, a survey was carried out for a property located near the locality of Gircov, Romania. We determined the total surface of the plan (T30) and of each parcel/plot, and also traced the coordinates of a parcel in the field.
The purposes of the planimetric survey were: the exact determination of the bounding surface; the analytical calculation of the surface; comparison of the surface thus determined with the one registered in the property documents; drawing up a location and delineation plan, with adjacencies and contour distances, highlighting the parcels comprising this property; drawing up a location and delineation plan, with adjacencies and contour distances, for a parcel from Dave; and tracing the outline points of that parcel in the field. The ultimate goal of this work was to determine and represent the surface, and also to detach a parcel from the total surface, while respecting the surface condition imposed by the beneficiary's deed of property.
Keywords: topography, surface, coordinate, modeling
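The analytical surface calculation mentioned above is conventionally done with Gauss's (shoelace) formula over the traverse coordinates; the corner coordinates below are hypothetical, not the Gircov parcel:

```python
def parcel_area(points):
    """Analytical (shoelace / Gauss) area of a closed traverse.

    `points` is a list of (x, y) station coordinates in metres, listed in
    order around the boundary; the polygon is closed automatically.
    """
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical four-corner parcel, coordinates in metres:
corners = [(0.0, 0.0), (40.0, 0.0), (42.0, 25.0), (-2.0, 27.0)]
print(parcel_area(corners))  # -> 1092.0 square metres
```

The same routine serves the comparison step: compute the area from the measured coordinates and compare it against the surface registered in the property documents.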
Procedia PDF Downloads 258
51 Optimal Placement of the Unified Power Controller to Improve the Power System Restoration
Authors: Mohammad Reza Esmaili
Abstract:
One of the most important parts of the restoration process of a power network is the synchronization of its subsystems. In this situation, the biggest concern of the system operators is the reduction of the standing phase angle (SPA) between the endpoints of the two islands. To this end, the system operators perform various actions and maneuvers so that the synchronization of the subsystems is successfully carried out and the system finally reaches acceptable stability. The most common of these actions include load control, generation control, and, in some cases, changing the network topology. Although these maneuvers are simple and common, in a weak network with extreme load changes the restoration proceeds slowly. One of the best ways to control the SPA is to use FACTS devices. By applying a soft control signal, these devices can reduce the SPA between two subsystems with greater speed and accuracy, and the synchronization process can be completed in less time. The unified power flow controller (UPFC), a series-parallel compensator that changes the transmission line power and properly adjusts the phase angle, is the option proposed in this research. With optimal placement of the UPFC in a power system, in addition to improving the normal conditions of the system, it is expected to be effective in reducing the SPA during power system restoration. This paper therefore provides an optimal structure that coordinates three problems: improving the division into subsystems, reducing the SPA, and optimal power flow, with the aim of determining the optimal location of the UPFC and the optimal subsystems. The objective functions proposed in this paper include maximizing the quality of the subsystems, reducing the SPA at the endpoints of the subsystems, and reducing the losses of the power system.
Since the simultaneous optimization of the proposed objective functions may create contradictions, the optimization problem is formulated as a non-linear multi-objective problem, and the Pareto optimization method is used to solve it. The technique proposed to carry out the optimization is the water cycle algorithm (WCA). To evaluate the proposed method, the IEEE 39-bus power system is used.
Keywords: UPFC, SPA, water cycle algorithm, multi-objective problem, Pareto
Procedia PDF Downloads 67
50 Administrative Traits and Capabilities of Mindanao State University Heads of Office as Perceived by Their Subordinates
Authors: Johanida L. Etado
Abstract:
The study determined the administrative traits and capabilities of Mindanao State University (MSU) heads of office as perceived by their subordinates. To obtain the primary data, a self-constructed survey questionnaire was used, validated by a panel of experts including the adviser. Most of the MSU heads of office were aware of their duties and responsibilities as managers. Considering their vast knowledge of and expertise in the technical or task aspects of the job, it is not surprising that respondents perceived them, to a high degree, as work- or task-oriented. MSU heads of office were knowledgeable and capable in performing field-specific, specialized tasks, enabling them to coordinate work, solve problems, communicate effectively, and understand the big picture in light of the front-line work that must be performed. The significance of coaching or mentoring in this instance may be explained by the small number of Master's or Doctorate degree holders among the employees, resulting in close supervision and mentorship by the heads of office. Interpersonal or human relations capability is a very effective way of dealing with people, as it gives heads the opportunity to influence their employees. In the case of MSU heads of office, the best way of dealing with problematic employees is by establishing trust and allowing them to take part in decision making, even in setting organizational goals, as this makes them feel part of the organization. It is therefore recognized that the success of an organization depends largely on the effectiveness of the head of each unit.
In this case, being development-oriented would mean encouraging both head officers and employees to know not only the technical know-how of the organisation but also its visions, missions, and goals, and the employees' aspirations, so as to establish cooperation and a harmonious working environment; hence, orientation and reorientation from time to time would enable them to be more development-oriented. With respect to human relations, an effective interpersonal relationship between the head of a unit and the employees is of paramount importance. In order to strengthen the relationship between the two, the management should establish upward and downward communication, where the two parties maintain open and transparent communication, both verbal and non-verbal.
Keywords: administrator, administrative traits, leadership traits, work orientation
Procedia PDF Downloads 72
49 Examining the Changes in Complexity, Accuracy, and Fluency in Japanese L2 Writing Over an Academic Semester
Authors: Robert Long
Abstract:
The results of a study on the evolution of complexity, accuracy, and fluency (CAF) in the compositions of Japanese L2 university students over an academic semester are presented here. One goal was to determine whether writing abilities improved over the term; another was to examine methods of editing. Participants had 30 minutes to write each essay, with an additional 10 minutes allotted for editing. For the editing phase, participants were divided into two groups: one utilized an online grammar checker, while the other self-edited their initial manuscripts. There was a total of 159 students from three different institutions. The research questions focused on determining whether CAF had evolved over the semester, identifying potential variations between editing techniques, and describing the connections between the CAF dimensions. According to the findings, there was some improvement in accuracy (fewer errors) on all three measures, whereas there was a marked decline in complexity and fluency. As for the interaction among the three CAF dimensions, and the possibility that increases in fluency are offset by decreases in grammatical accuracy, the results showed a high correlation between clause counts and word counts, between mean length of T-unit (MLT) and coordinate phrases per T-unit (CP/T), and between MLT and clauses per T-unit (C/T); furthermore, word counts and the errors-per-100-words ratio correlated highly with error-free clause totals (EFCT). Syntactic complexity correlated negatively with EFCT, indicating that greater syntactic complexity is associated with decreased accuracy.
Concerning differences in error correction between those who self-edited and those who used an online grammar correction tool, the results indicated that the error-free clause ratio (EFCR) showed the greatest difference, with fewer errors noted for writers using the online grammar checker. As for differences between the first and second (edited) drafts regarding CAF, the results indicated positive changes in accuracy, with the most significant change seen in complexity (CP/T and MLT), while changes in fluency were relatively insignificant. The results also indicated significant differences among the three institutions, with Fujian University of Technology showing the greatest fluency and accuracy. These findings suggest that, to raise students' awareness of their overall writing development, teachers should support them in developing more complex syntactic structures, improving their fluency, and making more effective use of online grammar checkers.
Keywords: complexity, accuracy, fluency, writing
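The CAF ratios named above (MLT, C/T, CP/T, and the error-free clause measures) are simple quotients of hand-coded counts; a minimal sketch, with made-up counts for one essay:

```python
def caf_indices(words, t_units, clauses, coord_phrases, error_free_clauses):
    """Common CAF ratios used in L2 writing research."""
    return {
        "MLT": words / t_units,                 # mean length of T-unit
        "C/T": clauses / t_units,               # clauses per T-unit (complexity)
        "CP/T": coord_phrases / t_units,        # coordinate phrases per T-unit
        "EFCR": error_free_clauses / clauses,   # error-free clause ratio (accuracy)
    }

# Hypothetical essay: 240 words, 18 T-units, 27 clauses,
# 9 coordinate phrases, 21 error-free clauses.
print(caf_indices(240, 18, 27, 9, 21))
```

Computed per draft, these ratios let first and second (edited) drafts, or self-editing and grammar-checker groups, be compared directly.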
Procedia PDF Downloads 42
48 Study of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans Dispersion in the Environment of a Municipal Solid Waste Incinerator
Authors: Gómez R. Marta, Martín M. Jesús María
Abstract:
The general aim of this paper is to identify the areas of highest concentration of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) around an incinerator through the use of dispersion models. Atmospheric dispersion models are useful tools for estimating and preventing the impact of emissions from a particular source on air quality. These models allow different factors that influence air pollution to be considered: source characteristics, the topography of the receiving environment, and the weather conditions, in order to predict pollutant concentrations. After their emission into the atmosphere, PCDD/Fs are deposited on water or land, near to or far from the emission source depending on the size of the associated particles and on the climatology; in this way, they are transferred and mobilized through environmental compartments. The modelling of PCDD/Fs was carried out with the following tools: the Atmospheric Dispersion Modelling System (ADMS) and Surfer. ADMS is a Gaussian plume dispersion model used to model the air quality impact of industrial facilities, and Surfer is a surface-mapping program used to represent the dispersion of pollutants on a map. For the modelling of emissions, the ADMS software requires the following main input parameters: characterization of the emission sources (source type, height, diameter, release temperature, flow rate, etc.) and meteorological and topographical data (coordinate system). The study area was set at 5 km around the incinerator, and the first population center nearest to the PCDD/F emission focus is about 2.5 km away. Data were collected during one year (2013), both for the PCDD/F emissions of the incinerator and for the meteorology in the study area. The study was carried out over the averaging periods that the legislation establishes; that is to say, the output parameters take the current legislation into account.
Once all the data required by the ADMS software, described previously, were entered, the modelling proceeded in order to represent the spatial distribution of the PCDD/F concentration and the areas affected. In general, the dispersion plume follows the direction of the predominant winds (southwest and northeast). Total levels of PCDD/Fs usually found in air samples range from <2 pg/m3 for remote rural areas, through 2-15 pg/m3 in urban areas, to 15-200 pg/m3 for areas near important sources such as an incinerator. The dispersion maps show that the maximum concentrations are of the order of 10^-8 ng/m3, well below the values considered typical for areas close to an incinerator, as in this case.
Keywords: atmospheric dispersion, dioxin, furan, incinerator
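ADMS implements a full Gaussian plume model with stability-dependent dispersion curves; a heavily simplified sketch of the underlying ground-reflected plume equation (the linear sigma parameterization and all release figures below are assumptions for illustration, not the study's inputs):

```python
import math

def gaussian_plume(q_g_s, u_m_s, x_m, y_m, z_m, stack_h_m, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration in g/m^3.

    Very simplified sketch: sigma_y = a*x and sigma_z = b*x stand in for the
    stability-class dispersion curves a full model such as ADMS uses.
    """
    sy, sz = a * x_m, b * x_m
    lateral = math.exp(-y_m**2 / (2 * sy**2))
    vertical = (math.exp(-(z_m - stack_h_m)**2 / (2 * sz**2))
                + math.exp(-(z_m + stack_h_m)**2 / (2 * sz**2)))  # ground image term
    return q_g_s / (2 * math.pi * u_m_s * sy * sz) * lateral * vertical

# Hypothetical PCDD/F release: 1e-9 g/s from a 40 m stack, 4 m/s wind,
# receptor on the plume axis 2.5 km downwind at ground level.
c = gaussian_plume(1e-9, 4.0, 2500.0, 0.0, 0.0, 40.0)
print(f"{c * 1e12:.4f} pg/m^3")
```

Evaluating this over a receptor grid and contouring the result is, in essence, what the ADMS-plus-Surfer workflow in the study produces.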
Procedia PDF Downloads 217
47 Diplomacy in Times of Disaster: Management through Reputational Capital
Authors: Liza Ireni-Saban
Abstract:
The magnitude 6.6 earthquake that occurred in Bam, Iran, in 2003 made it impossible for the Iranian government to handle disaster relief efforts domestically. In this extreme event, the Iranian government reached out to the international community, and this created a momentum that had to be carried forward by trust-building efforts on all sides, often termed ‘disaster diplomacy’. Indeed, the circumstances were even more critical considering the increasing political and economic isolation of Iran within the international community. The potential for disaster to open a transformative political space has been recognized by dominant international political actors. Although the post-disaster relief efforts after Bam 2003 did not catalyze diplomatic activities on any side, a few international aid agencies successfully used disaster recovery to enhance their popular legitimacy and reputation in the international community. In terms of disaster diplomacy, an actor's reputational capital may affect its ability to build coalitions and alliances to achieve international political ends, and to negotiate and build understanding and trust with foreign publics. This study suggests that the post-disaster setting may benefit from the ecology-of-games framework to evaluate the role of bridging actors and mediators in facilitating collaborative governance networks. Recent developments in network theory and analysis provide means of exploring, through structural embeddedness, how reputational capital can be built through the brokerage roles of actors engaged in a disaster management network. This paper therefore structures the relations among the actors that participated in the post-disaster relief efforts after the 2003 Bam earthquake in order to assess under which conditions actors may be strategically utilized to serve as mediating organizations for future disaster events experienced by isolated nations or nations in conflict.
The results indicate the strategic use of reputational capital by the Iranian Ministry of Foreign Affairs, as the key broker, to build a successful coordinative system for reducing disaster vulnerabilities. International aid agencies rarely played brokerage roles in coordinating peripheral actors, and U.S. foreign assistance (USAID), despite its coordination capacities, was prevented from serving brokerage roles in the system.
Keywords: coordination, disaster diplomacy, international aid organizations, Iran
Procedia PDF Downloads 156
46 Changes in Skin Microbiome Diversity According to the Age of Xian Women
Authors: Hanbyul Kim, Hye-Jin Kin, Taehun Park, Woo Jun Sul, Susun An
Abstract:
Skin is the largest organ of the human body and provides diverse habitats for various microorganisms. The ecology of the skin surface selects distinctive sets of microorganisms and is influenced by both endogenous intrinsic factors and exogenous environmental factors. The diversity of the bacterial community in the skin also depends on multiple host factors: gender, age, health status, and location. Among them, age-related changes in skin structure and function are attributable to combinations of intrinsic and environmental factors. Skin aging is characterized by a decrease in sweat, sebum, and immune function, resulting in significant alterations in skin surface physiology, including pH, lipid composition, and sebum secretion. The present study provides a comprehensive view of the variation of the skin microbiota and its correlation with age by analyzing and comparing the skin metagenome using next-generation sequencing. Skin bacterial diversity and composition were characterized and compared between two age groups of Xian, Chinese women: younger (20-30 y) and older (60-70 y). A total of 73 healthy women met two conditions: (i) living in Xian, China, and (ii) maintaining healthy skin status during the period of this study. Based on the Ribosomal Database Project (RDP) database, the skin samples of the 73 participants were dominated by ten genera, including Chryseobacterium, Propionibacterium, Enhydrobacter, and Staphylococcus. Although these genera were the most abundant overall, their proportions differed between the groups: the most dominant genus, Chryseobacterium, was relatively more abundant in the young group than in the old group, and, similarly, Propionibacterium and Enhydrobacter occupied a higher proportion of the skin bacterial composition of the young group, whereas Staphylococcus was more abundant in the old group.
Beta diversity, which represents the ratio between regional and local species diversity, differed significantly between the two age groups. Likewise, the Principal Coordinate Analysis (PCoA) values, which place each sample's phylogenetic distances in a two-dimensional framework using the OTU (operational taxonomic unit) counts, also showed differences between the two groups. Our data thus suggest that the composition and diversification of the skin microbiome in adult women are largely affected by chronological and physiological skin aging.
Keywords: next generation sequencing, age, Xian, skin microbiome
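The Principal Coordinate Analysis mentioned above is classical multidimensional scaling of a between-sample distance matrix; a minimal sketch, with a toy distance matrix standing in for real Bray-Curtis or UniFrac distances:

```python
import numpy as np

def pcoa(dist, n_axes=2):
    """Principal Coordinate Analysis (classical MDS) of a distance matrix.

    Double-centres the squared distances (Gower centring), eigendecomposes,
    and returns sample coordinates on the first `n_axes` principal axes.
    """
    d2 = np.asarray(dist, dtype=float) ** 2
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    b = -0.5 * j @ d2 @ j                         # Gower's double centring
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1]                # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    vals = np.clip(vals, 0.0, None)               # drop tiny negative noise
    return vecs[:, :n_axes] * np.sqrt(vals[:n_axes])

# Toy example: two tight "age groups" of samples, far apart from each other.
d = np.array([[0.0, 0.1, 0.9, 0.9],
              [0.1, 0.0, 0.9, 0.9],
              [0.9, 0.9, 0.0, 0.1],
              [0.9, 0.9, 0.1, 0.0]])
coords = pcoa(d)
print(coords.round(3))
```

In an ordination plot of the first two axes, the two groups separate along the first coordinate, which is the kind of between-group difference the study reports.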
Procedia PDF Downloads 156
45 Performance Analysis of the Precise Point Positioning Data Online Processing Service and Using for Monitoring Plate Tectonic of Thailand
Authors: Nateepat Srivarom, Weng Jingnong, Serm Chinnarat
Abstract:
The Precise Point Positioning (PPP) technique improves accuracy by using precise satellite orbit and clock correction data, but it involves complicated methods and high costs. Currently, several online processing service providers offer simplified calculation. In the first part of this research, we compare the efficiency and precision of four software packages: three popular online processing services, the Australian Online GPS Processing Service (AUSPOS), CSRS Precise Point Positioning, and Trimble CenterPoint RTX post-processing, and one offline software package, RTKLIB, using data collected from 10 International GNSS Service (IGS) stations over 10 days. The results indicated that AUSPOS has the smallest distance root mean square (DRMS) value, 0.0029, which is good enough for calculating the movement of tectonic plates. In the second part, we use AUSPOS to process the data of the geodetic network of Thailand. On December 26, 2004, a magnitude 9.3 Mw earthquake occurred north of Sumatra that strongly affected all nearby countries, including Thailand, and its effects have led to errors in the coordinate system of Thailand. The Royal Thai Survey Department (RTSD) is primarily responsible for monitoring the crustal movement of the country. The movement differs across the geodetic network and is relatively large, so surveys must continue in order to improve the GPS coordinate system every year. In this research, we therefore chose AUSPOS to calculate the magnitude and direction of movement and to improve the coordinate adjustment of the geodetic network, consisting of 19 pins in Thailand, from October 2013 to November 2017. Finally, the results are displayed on a simulated map using the ArcMap program with the Inverse Distance Weighting (IDW) method. The pin with the maximum movement is pin no. 3239 (Tak), in the northern part of Thailand, which moved 11.04 cm in the south-western direction.
Meanwhile, the directional movement of the other pins in the south gradually changed from south-west to south-east, i.e., toward the direction observed before the earthquake. The magnitude of this movement is in the range of 4-7 cm, implying a small impact of the earthquake. However, the GPS network should be continuously surveyed in order to secure the accuracy of the geodetic network of Thailand.
Keywords: precise point positioning, online processing service, geodetic network, inverse distance weighting
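The Inverse Distance Weighting interpolation used for the displacement map can be sketched as follows; the pin coordinates and movement values below are hypothetical, not the RTSD network values:

```python
def idw(sample_points, query, power=2.0):
    """Inverse Distance Weighting: estimate a value at `query` from
    (x, y, value) samples, as ArcMap's IDW surface does per grid cell."""
    num = den = 0.0
    for x, y, v in sample_points:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return v                     # query coincides with a sample
        w = 1.0 / d2 ** (power / 2.0)    # weight = 1 / distance^power
        num += w * v
        den += w
    return num / den

# Hypothetical pin displacements (easting km, northing km, movement cm):
pins = [(0.0, 0.0, 11.0), (100.0, 0.0, 6.0), (0.0, 120.0, 5.0), (90.0, 110.0, 4.0)]
print(round(idw(pins, (50.0, 60.0)), 2))
```

Evaluating this at every cell of a grid over Thailand and colouring by value reproduces, in miniature, the IDW displacement surface described in the abstract.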
Procedia PDF Downloads 189
44 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning
Authors: Yangzhi Li
Abstract:
Network transfer of information and performance customization are now viable methods of digital industrial production in the era of Industry 4.0, and robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by the growing labor scarcity problem and increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technological theories related to autonomous robot recognition, including high-performance computing, physical system modeling, extensive sensor coordination, and deep learning on large datasets, have not yet been fully explored in intelligent construction, and relevant transdisciplinary theory and practice research still has specific gaps. Optimizing high-performance computing and autonomous visual guidance technologies improves the robot's grasp of the scene and its capacity for autonomous operation. Intelligent vision guidance for industrial robots faces a serious issue with camera calibration, and its use in industrial production carries strict accuracy requirements; visual recognition systems therefore face real precision challenges, which directly impact the effectiveness and standard of industrial production and necessitate strengthening the study of positioning precision in visual guidance and recognition technology. To best facilitate the handling of complicated components, an approach for the visual recognition of parts utilizing machine learning algorithms is proposed. This study identifies the position of target components by detecting the information at the boundary and corners of a dense point cloud and determining the aspect ratio in accordance with the guidelines for the modularization of building components.
To collect and use components, the operational processing system assigns them to a common coordinate system based on their locations and postures. Inclination detection on the RGB image and verification against the depth image are used to determine a component's current posture. Finally, a virtual environment model for the robot's obstacle-avoidance route is constructed from the point cloud information.
Keywords: robotic construction, robotic assembly, visual guidance, machine learning
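The boundary-and-aspect-ratio step described in this abstract could be sketched, under heavy simplification, as an axis-aligned bounding-box computation over the point cloud. The function name, the toy point cloud, and the use of the longest-to-shortest edge ratio below are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def part_aspect_ratio(points):
    """Axis-aligned bounding box of a point cloud and its aspect ratio.

    points: (N, 3) array of x, y, z coordinates (a stand-in for the
    dense point cloud described in the abstract).
    """
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    extents = maxs - mins                     # box size along x, y, z
    # Aspect ratio: longest box edge over the shortest one, used here
    # as a simple signature for matching modularized components.
    ratio = extents.max() / extents.min()
    return extents, ratio

# A toy 'component': a 4 x 1 x 1 box sampled at its eight corners.
cloud = np.array([[x, y, z] for x in (0, 4) for y in (0, 1) for z in (0, 1)],
                 dtype=float)
extents, ratio = part_aspect_ratio(cloud)
```

In a real pipeline the bounding box would be fitted after segmenting the component from the scene, and the ratio compared against the modularization guidelines mentioned above.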
Procedia PDF Downloads 87
43 Implementation of Correlation-Based Data Analysis as a Preliminary Stage for the Prediction of Geometric Dimensions Using Machine Learning in the Forming of Car Seat Rails
Authors: Housein Deli, Loui Al-Shrouf, Hammoud Al Joumaa, Mohieddine Jelali
Abstract:
When forming metallic materials, fluctuations in material properties, process conditions, and wear lead to deviations in the component geometry. Several hundred features sometimes need to be measured, especially in the case of functional and safety-relevant components. These can only be measured offline due to the large number of features and the accuracy requirements. The risk of producing components outside the tolerances is minimized but not eliminated by the statistical evaluation of process capability and control measurements. The inspection intervals are based on the acceptable risk and are at the expense of productivity but remain reactive and, in some cases, considerably delayed. Due to the considerable progress made in the field of condition monitoring and measurement technology, permanently installed sensor systems in combination with machine learning and artificial intelligence, in particular, offer the potential to independently derive forecasts for component geometry and thus eliminate the risk of defective products - actively and preventively. The reliability of forecasts depends on the quality, completeness, and timeliness of the data. Measuring all geometric characteristics is neither sensible nor technically possible. This paper, therefore, uses the example of car seat rail production to discuss the necessary first step of feature selection and reduction by correlation analysis, as otherwise, it would not be possible to forecast components in real-time and inline. Four different car seat rails with an average of 130 features were selected and measured using a coordinate measuring machine (CMM). The run of such measuring programs alone takes up to 20 minutes. In practice, this results in the risk of faulty production of at least 2000 components that have to be sorted or scrapped if the measurement results are negative. 
Over a period of two months, all measurement data (>200 measurements per variant) were collected and evaluated using correlation analysis. As part of this study, the number of characteristics to be measured across all six car seat rail variants was reduced by over 80%. Specifically, direct correlations were proven for almost 100 of an average of 125 characteristics across four different products. A further 10 features correlate via indirect relationships, so that the number of features required for a prediction could be reduced to fewer than 20. A correlation factor of >0.8 was required for all correlations.
Keywords: long-term SHM, condition monitoring, machine learning, correlation analysis, component prediction, wear prediction, regression analysis
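The correlation-based reduction described above can be illustrated with a minimal, hypothetical sketch: compute the pairwise Pearson correlation matrix of the measured features and keep only one representative per group of features whose absolute correlation exceeds the 0.8 threshold. The greedy selection rule and the toy data are our assumptions; the study's actual CMM data and procedure are not reproduced here:

```python
import numpy as np

def reduce_features(X, names, threshold=0.8):
    """Greedy correlation-based feature reduction.

    X: (n_samples, n_features) measurement matrix; names: feature labels.
    A feature is dropped if its absolute Pearson correlation with an
    already-kept feature exceeds `threshold`, so it can be predicted
    from that feature instead of being measured.
    """
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return [names[j] for j in keep]

rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = 2.0 * a + 0.01 * rng.normal(size=200)   # near-duplicate of feature a
c = rng.normal(size=200)                    # independent feature
X = np.column_stack([a, b, c])
kept = reduce_features(X, ["a", "b", "c"])  # b is dropped, a and c remain
```

A production version would also track the indirect correlation chains mentioned in the abstract and record the regression used to predict each dropped feature.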
Procedia PDF Downloads 50
42 The Foundation Binary-Signals Mechanics and Actual-Information Model of Universe
Authors: Elsadig Naseraddeen Ahmed Mohamed
Abstract:
In contrast to the uncertainty and complementarity principles, this paper shows that the probability of the simultaneous occupation of any definite values of coordinates by any definite values of momentum and energy at any definite instant of time can be described by a binary definite function. This function equals the difference between the numbers of occupation and evacuation epochs up to that time, and also equals the number of exchanges between those occupation and evacuation epochs up to that time, modulo two. These binary definite quantities can be defined at every point on the real line of time, so they form a binary signal that represents a complete mechanical description of physical reality. The times of these exchanges mark the boundaries of the occupation and evacuation epochs, from which the binary signals can be calculated using the fact that the universe's events actually extend along the positive and negative real line of time in a single direction of extension as the number of exchanges increases. There therefore exists a noninvertible transformation matrix, defined as the product of an invertible rotation matrix and a noninvertible scaling matrix, which changes the direction and magnitude of the exchange event vector, respectively. These noninvertible transformations will be called actual transformations, in contrast to information transformations, by which we can navigate the universe's events (transformed by actual transformations) backward and forward along the real line of time; these information transformations are derived as elements of a group associated with their corresponding actual transformations.
The actual and information model of the universe is derived by assuming the existence of a time instant zero, before and at which no coordinate is occupied by any definite values of momentum and energy; after that time, the universe begins expanding in spacetime. This assumption makes superfluous the existence of Laplace's demon, who at one moment could measure the positions and momenta of all constituent particles of the universe and then use the laws of classical mechanics to predict the entire future and past of the universe's events. We only need to establish analog-to-digital converters to sense the binary signals that determine the boundaries of the occupation and evacuation epochs of the definite values of coordinates, relative to their origin, by the definite values of momentum and energy; from these present events of the universe we can predict its past and future events approximately and with high precision.
Keywords: binary-signal mechanics, actual-information model of the universe, actual-transformation, information-transformation, uncertainty principle, Laplace's demon
Procedia PDF Downloads 177
41 A Conceptual Model of the 'Driver – Highly Automated Vehicle' System
Authors: V. A. Dubovsky, V. V. Savchenko, A. A. Baryskevich
Abstract:
The current trend in the automotive industry towards automated vehicles is creating new challenges related to human factors. This occurs because the driver is increasingly relieved of the need to be constantly involved in driving the vehicle, which can negatively impact his/her situation awareness when manual control is required and can degrade driving skills and abilities. These new problems need to be studied in order to ensure road safety during the transition towards self-driving vehicles. For this purpose, it is important to develop an appropriate conceptual model of the interaction between the driver and the automated vehicle, which could serve as a theoretical basis for the development of mathematical and simulation models to explore different aspects of driver behaviour in different road situations. Well-known driver behaviour models describe the impact of different stages of the driver's cognitive process on driving performance but do not describe how the driver controls and adjusts his actions. A more complete description of the driver's cognitive process, including the evaluation of the results of his/her actions, will make it possible to model various aspects of the human factor in different road situations more accurately. This paper presents a conceptual model of the 'driver – highly automated vehicle' system based on P. K. Anokhin's theory of functional systems, a theoretical framework for describing internal processes in purposeful living systems based on such notions as the goal and the desired and actual results of purposeful activity. A central feature of the proposed model is a dynamic coupling mechanism between the driver's decision to perform a particular action and changes in road conditions due to the driver's actions. This mechanism is based on the stage-by-stage evaluation of the deviations of the actual values of the driver's action result parameters from the expected values.
The overall functional structure of the highly automated vehicle in the proposed model includes a driver/vehicle/environment state analyzer to coordinate the interaction between driver and vehicle. The proposed conceptual model can be used as a framework to investigate different aspects of human factors in transitions between automated and manual driving, both for future improvements in driving safety and for understanding how the driver-vehicle interface must be designed for comfort and safety. A major finding of this study is the demonstration that the theory of functional systems is promising and has the potential to describe the interaction of the driver with the vehicle and the environment.
Keywords: automated vehicle, driver behavior, human factors, human-machine system
Procedia PDF Downloads 147
40 Virtual Team Performance: A Transactive Memory System Perspective
Authors: Belbaly Nassim
Abstract:
Virtual team (VT) initiatives, in which teams are geographically dispersed and communicate via modern computer-driven technologies, have attracted increasing attention from researchers and professionals. The need to examine how to balance and optimize VTs is particularly pressing given the globalization and decentralization pressures companies face when monitoring VT performance. Organizational effectiveness is regularly limited by misalignment between the team's dispersed competences and knowledge capabilities, and by the way trust issues interact with and influence these VT dimensions and the effects of such exchanges. In fact, the future success of a business depends on the extent to which its VTs efficiently manage their dispersed expertise, skills and knowledge to stimulate VT creativity. A transactive memory system (TMS) may enhance VT creativity through its three dimensions: knowledge specialization, credibility and knowledge coordination. A TMS can be understood as a combination of a structural component, the knowledge residing in individuals, and a set of communication processes among individuals. Individual knowledge is shared as it is retrieved and applied, and the learning is coordinated. TMS is driven by the central concept that the system is built on the distinction between internal and external memory encoding. A VT learns something new and catalogs it in memory for future retrieval and use. TMS uses the role of information technology to explain VT behaviors by offering VT members the possibility to encode, store, and retrieve information. TMS considers the members of a team as a processing system in which the location of expertise both enhances knowledge coordination and builds trust among members over time. We build on the TMS dimensions to hypothesize the effects of specialization, coordination, and credibility on VT creativity.
In fact, VTs consist of dispersed expertise, skills and knowledge that can positively enhance coordination and collaboration. Ultimately, this team composition may lead to recognition of both who has expertise and where that expertise is located; over time, it may also build trust among VT members, developing the ability to coordinate their knowledge, which can stimulate creativity. We also assess the reciprocal relationship between TMS dimensions and VT creativity. We use TMS to provide researchers with a theoretically driven model that is empirically validated through survey evidence. We propose that TMS provides a new way to enhance and balance VT creativity. This study also gives researchers insight into the use of TMS to positively influence VT creativity. In addition to our research contributions, we provide several managerial insights into how TMS components can be used to increase performance within dispersed VTs.
Keywords: virtual team creativity, transactive memory systems, specialization, credibility, coordination
Procedia PDF Downloads 174
39 Exploring the Contribution of Dynamic Capabilities to a Firm's Value Creation: The Role of Competitive Strategy
Authors: Mona Rashidirad, Hamid Salimian
Abstract:
Dynamic capabilities, the most considerable capabilities of firms in the current fast-moving economy, may not be sufficient for performance improvement, but their contribution to performance is undeniable. While much of the extant literature investigates the impact of dynamic capabilities on organisational performance, little attention has been devoted to understanding whether and how dynamic capabilities create value. Dynamic capabilities, as the mirror of competitive strategies, should enable firms to search for and seize new ideas and to integrate and coordinate the firm's resources and capabilities in order to create value. A careful review of the existing knowledge base leaves us puzzled about the relationship among competitive strategies, dynamic capabilities and value creation. This study thus attempts to fill this gap by empirically investigating the impact of dynamic capabilities on value creation and the mediating impact of competitive strategy on this relationship. We aim to contribute to the dynamic capability view (DCV), in both theoretical and empirical senses, by exploring the impact of dynamic capabilities on firms' value creation and whether competitive strategy can play any role in strengthening or weakening this relationship. Using a sample of 491 firms in the UK telecommunications market, the results demonstrate that dynamic sensing, learning, integrating and coordinating capabilities play a significant role in a firm's value creation, and that competitive strategy mediates the impact of dynamic capabilities on value creation. Adopting the DCV, this study investigates whether the value generated from dynamic capabilities depends on firms' competitive strategy. It argues that a firm's competitive strategy can mediate its ability to derive value from its dynamic capabilities, and it explains the extent to which a firm's competitive strategy may influence its value generation.
The results of the dynamic capabilities-value relationships support our expectations and justify the non-financial added value of the four dynamic capability processes in a highly turbulent market such as UK telecommunications. Our findings on the relationship among dynamic capabilities, competitive strategy and value creation provide further evidence of the undeniable role of competitive strategy in deriving value from dynamic capabilities. The results reinforce the argument for the need to consider the mediating impact of organisational contextual factors, such as a firm's competitive strategy, to examine how they interact with dynamic capabilities to deliver value. The findings of this study provide significant contributions to theory. Unlike some previous studies, which conceptualise dynamic capabilities as a unidimensional construct, this study demonstrates the benefits of understanding the details of the link among the four types of dynamic capabilities, competitive strategy and value creation. In terms of contributions to managerial practice, this research draws attention to the importance of competitive strategy in conjunction with the development and deployment of dynamic capabilities to create value. Managers are now equipped with solid empirical evidence that explains why the DCV has become essential to firms in today's business world.
Keywords: dynamic capabilities, resource-based theory, value creation, competitive strategy
Procedia PDF Downloads 241
38 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicles Imageries
Authors: Ahmed Elaksher
Abstract:
Over the last few years, significant progress has been made and new approaches have been proposed for the efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs), at reduced cost compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters using point correspondences across images without a rigorous laboratory or field calibration process, and it is more flexible in that it does not require consistent image overlap or the same rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison between SfM digital elevation models (DEMs) and those generated through traditional photogrammetric techniques was performed. Data were collected by a 3DR IRIS+ quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. Data were processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts by performing a laboratory camera calibration; the acquired imagery then undergoes an orientation procedure to determine the cameras' positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refining step using sub-pixel image information for high matching accuracy.
Tests with different numbers and configurations of the control points were conducted. Camera calibration parameters estimated from commercial software and those obtained with laboratory procedures were comparable. Exposure station positions agreed to within a few centimeters, and differences among orientation angles were insignificant, within less than three seconds of arc. DEM differencing was performed between the generated DEMs, and vertical shifts of a few centimeters were found.
Keywords: UAV, photogrammetry, SfM, DEM
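The DEM differencing step reported above amounts to a cell-wise subtraction of two co-registered elevation grids followed by summary statistics. The sketch below is illustrative only; the toy grids and the 3 cm shift are our assumptions, not the study's data, and real DEMs would first be aligned to the ground-control coordinate system:

```python
import numpy as np

def dem_difference_stats(dem_a, dem_b):
    """Cell-wise difference between two co-registered DEM rasters.

    dem_a, dem_b: 2-D elevation grids (meters) on the same footprint,
    e.g. an SfM-derived DEM versus a photogrammetric one.
    """
    diff = dem_a - dem_b
    return {
        "mean_shift_m": float(diff.mean()),           # systematic vertical shift
        "rmse_m": float(np.sqrt((diff ** 2).mean())), # overall disagreement
    }

# Toy example: one DEM sits a uniform 3 cm below the other.
sfm_dem = np.full((4, 4), 100.00)
photo_dem = np.full((4, 4), 100.03)
stats = dem_difference_stats(sfm_dem, photo_dem)
```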
Procedia PDF Downloads 295
37 The Markers -mm and dämmo in Amharic: Developmental Approach
Authors: Hayat Omar
Abstract:
Languages provide speakers with a wide range of linguistic units to organize and deliver information. There are several ways to verbally express the mental representations of events; according to the linguistic tools they have acquired, speakers select the one that produces the greatest communicative effect to convey their message. Our study focuses on two markers, -mm and dämmo, in Amharic (an Ethiopian Semitic language). Our aim is to examine, from a developmental perspective, how they are used by speakers, and to distinguish the communicative and pragmatic functions indicated by means of these markers. To do so, we created a corpus of sixty narrative productions by children aged 5-6, 7-8 and 10-12 years and by adult Amharic speakers. The experimental material we used to collect our data is a series of pictures without text, 'Frog, Where Are You?'. Although -mm and dämmo are each used in specific contexts, they are sometimes analyzed as being interchangeable. The suffix -mm is complex and multifunctional: it marks the end of the negative verbal structure, it appears in the relative structure of the imperfect, it creates new words such as adverbials or pronouns, and it also serves to coordinate words and sentences and to mark the link between macro-propositions within a larger textual unit. -mm has been analyzed as a marker of insistence, a topic shift marker, an element of concatenation, a contrastive focus marker, and a 'bisyndetic' coordinator. dämmo, on the other hand, has a more limited function and has attracted less attention; the only approach we could find analyzes it as a 'monosyndetic' coordinator. Setting these two elements side by side made it possible to understand their distinctive functions and refine their description. When it comes to marking a referent, the choice of -mm or dämmo is not neutral, depending on whether the tagged argument is newly introduced, maintained, promoted or reintroduced. The presence of these morphemes signals the inter-phrastic link.
The information is seized by anaphora or presupposition: -mm points upstream while dämmo points downstream, the latter requiring new information. The speaker uses -mm or dämmo according to what he assumes to be known to his interlocutors. The results show that -mm and dämmo, although all speakers use both, do not always have the same scope and vary with the speaker's age. dämmo is mainly used to mark a contrastive topic to signal the concomitance of events, and it is more common in young children's narratives (F(3,56) = 3.82, p < .01). Some values of -mm (additive) are acquired very early, while others appear rather late and increase with age (F(3,56) = 3.2, p < .03). The difficulty is due not only to its synthetic structure but primarily to the fact that it is multi-purpose and requires memory work. It highlights the constituent on which it operates to clarify how the message should be interpreted.
Keywords: acquisition, cohesion, connection, contrastive topic, contrastive focus, discourse marker, pragmatics
Procedia PDF Downloads 134
36 Adjustment of the Whole-Body Center of Mass during Trunk-Flexed Walking across Uneven Ground
Authors: Soran Aminiaghdam, Christian Rode, Reinhard Blickhan, Astrid Zech
Abstract:
Despite considerable study of the impact of imposed trunk posture on human walking, less is known about such locomotion while negotiating changes in ground level. The aim of this study was to investigate the behavior of the vertical position of the body center of mass (VBCOM) in response to a two-fold expected perturbation, namely alterations in body posture and in ground level. To this end, the kinematic data and ground reaction forces of twelve able-bodied participants were collected. We analyzed the VBCOM from the ground, determined by the body segmental analysis method relative to the laboratory coordinate system, at touchdown and toe-off instants during walking across uneven ground (characterized by a perturbation contact, a 10-cm visible drop, and by pre- and post-perturbation contacts) in comparison to unperturbed level contact, while participants maintained three postures (regular erect, ~30° and ~50° of trunk flexion from the vertical). The VBCOM was normalized to the distance between the greater trochanter marker and the lateral malleolus marker at the instant of touchdown (TD). Moreover, we calculated the backward rotation during step-down as the difference between the maximum trunk angle in the pre-perturbation contact and the minimal trunk angle in the perturbation contact. Two-way repeated measures ANOVAs revealed contact-specific effects of posture on the VBCOM at touchdown (F = 5.96, p = 0.00). As indicated by the analysis of simple main effects, during unperturbed level and pre-perturbation contacts, no between-posture differences in the VBCOM at touchdown were found. In the perturbation contact, trunk-flexed gaits showed a significant increase of the VBCOM compared to the pre-perturbation contact. In the post-perturbation contact, the VBCOM demonstrated a significant decrease in all gait postures relative to the preceding corresponding contacts, with no between-posture differences.
Main effects of posture revealed that the VBCOM at toe-off significantly decreased in trunk-flexed gaits relative to the regular erect gait. For the main effect of contact, the VBCOM at toe-off changed across perturbation and post-perturbation contacts compared to the unperturbed level contact. Furthermore, participants exhibited a backward trunk rotation during step-down, possibly to control the angular momentum of the whole body. A more pronounced backward trunk rotation (2- to 3-fold compared with level contacts) in trunk-flexed walking contributed to the observed elevated VBCOM during the step-down, which may have facilitated drop negotiation. These results may shed light on the interaction between posture and locomotion in able-bodied gait, and specifically on the behavior of the body center of mass during perturbed locomotion.
Keywords: center of mass, perturbation, posture, uneven ground, walking
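The backward-rotation measure defined in this abstract (maximum trunk angle in the pre-perturbation contact minus the minimal trunk angle in the perturbation contact) reduces to a one-line computation. The trunk-angle samples below are hypothetical, for illustration only:

```python
def backward_rotation(trunk_angle_pre, trunk_angle_pert):
    """Backward trunk rotation during step-down, per the abstract's
    definition: max trunk angle in the pre-perturbation contact minus
    the minimal trunk angle in the perturbation contact (degrees)."""
    return max(trunk_angle_pre) - min(trunk_angle_pert)

# Hypothetical trunk-angle samples (degrees from vertical).
pre = [28.0, 30.5, 31.2]    # pre-perturbation contact
pert = [24.0, 22.5, 23.1]   # perturbation contact (the 10-cm drop)
rot = backward_rotation(pre, pert)
```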
Procedia PDF Downloads 182
35 Principles and Guidance for the Last Days of Life: Te Ara Whakapiri
Authors: Tania Chalton
Abstract:
In June 2013, an independent review of the Liverpool Care Pathway (LCP) identified a number of problems with the implementation of the LCP in the UK and recommended that it be replaced by individual care plans for each patient. As a result of the UK findings, in November 2013 the Ministry of Health (MOH) commissioned the Palliative Care Council to initiate a programme of work to investigate an appropriate approach for the care of people in their last days of life in New Zealand (NZ). The Last Days of Life Working Group commenced a process to develop national consensus on the care of people in their last days of life in April 2014. In order to develop its advice for the future provision of care to people in their last days of life, the Working Group (WG) established a comprehensive work programme and as a result has developed a series of working papers. Specific areas of focus included: An analysis of the UK Independent Review findings and an assessment of these findings to the NZ context. A stocktake of services providing care to people in their last days of life, including aged residential care (ARC); hospices; hospitals; and primary care. International and NZ literature reviews of evidence and best practice. Survey of family to understand the consumer perspective on the care of people in their last days of life. Key aspects of care that required further considerations for NZ were: Terminology: clarify terminology used in the last days of life and in relation to death and dying. Evidenced based: including specific review of evidence regarding, spiritual, culturally appropriate care as well as dementia care. Diagnosis of dying: need for both guidance around the diagnosis of dying and communication with family. Workforce issues: access to an appropriate workforce after hours. Nutrition and hydration: guidance around appropriate approaches to nutrition and hydration. Symptom and pain management: guidance around symptom management. 
Documentation: documentation of the person's care that is robust enough for data collection and auditing requirements, not a 'tick box' approach to care. Education and training: improved consistency and access to appropriate education and training. Leadership: a dedicated team or person to support and coordinate the introduction and implementation of any last days of life model of care. Quality indicators and data collection: a model of care that enables auditing and regular reviews to ensure ongoing quality improvement. Cultural and spiritual: address and incorporate any cultural and spiritual aspects. A final document incorporating all the evidence was developed, which provides guidance to the health sector on best practice for people at the end of life: 'Principles and Guidance for the Last Days of Life: Te Ara Whakapiri'.
Keywords: end of life, guidelines, New Zealand, palliative care
Procedia PDF Downloads 435
34 A Comparison of Proxemics and Postural Head Movements during Pop Music versus Matched Music Videos
Authors: Harry J. Witchel, James Ackah, Carlos P. Santos, Nachiappan Chockalingam, Carina E. I. Westling
Abstract:
Introduction: Proxemics is the study of how people perceive and use space. It is commonly proposed that when people like or engage with a person or object, they move slightly closer to it, often quite subtly and subconsciously. Music videos are known to add entertainment value to a pop song. Our hypothesis was that adding an appropriately matched video to a pop song would lead to a net approach of the head toward the monitor screen, compared to simply listening to an audio-only version of the song. Methods: We presented two musical stimuli in counterbalanced order to 27 participants (ages 21.00 ± 2.89, 15 female) seated in front of a 47.5 x 27 cm monitor; all stimuli were based on music videos by the band OK Go: Here It Goes Again (HIGA, boredom ratings (0-100) = 15.00 ± 4.76, mean ± standard error of the mean, SEM) and Do What You Want (DWYW, boredom ratings = 23.93 ± 5.98), which did not differ in the boredom elicited (P = 0.21, rank-sum test). Each participant experienced each song only once, one song (counterbalanced) as audio only and the other as a music video. Movement was measured by video tracking using Kinovea 0.8, recorded from a lateral aspect; before beginning, each participant had a reflective motion-tracking marker placed on the outer canthus of the left eye. Analysis of the Kinovea X-Y coordinate output in comma-separated-variables format was performed in Matlab, as were non-parametric statistical tests. Results: We found that the audio-only stimuli (combined for both HIGA and DWYW, mean ± SEM, 35.71 ± 5.36) were significantly more boring than the music video versions (19.46 ± 3.83, P = 0.0066, Wilcoxon signed-rank test (WSRT), Cohen's d = 0.658, N = 28). We also found that participants' heads moved around twice as much during the audio-only versions (speed = 0.590 ± 0.095 mm/sec) as during the video versions (0.301 ± 0.063 mm/sec, P = 0.00077, WSRT).
However, participants' mean head-to-screen distances were not detectably smaller (i.e. the head closer to the screen) during the music videos (74.4 ± 1.8 cm) than during the audio-only stimuli (73.9 ± 1.8 cm, P = 0.37, WSRT); if anything, during the audio-only condition they were slightly closer. Interestingly, the ranges of the head-to-screen distances were smaller during the music video (8.6 ± 1.4 cm) than during the audio-only condition (12.9 ± 1.7 cm, P = 0.0057, WSRT), the standard deviations were also smaller (P = 0.0027, WSRT), and heads were held 7 mm higher (video 116.1 ± 0.8 vs. audio-only 116.8 ± 0.8 cm above the floor, P = 0.049, WSRT). Discussion: As predicted, sitting and listening to experimenter-selected pop music was more boring than when the music was accompanied by a matched, professionally made video. However, we did not find that the proxemics of the situation led to approaching the screen. Instead, adding video led to efforts to hold the head in a more central and upright viewing position and to suppress head fidgeting.
Keywords: boredom, engagement, music videos, posture, proxemics
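The paired comparisons above rely on the Wilcoxon signed-rank statistic. As a hedged sketch of that computation (the per-participant speeds below are hypothetical, not the study's data, and ties are not rank-averaged), the statistic can be built with a double argsort:

```python
import numpy as np

def signed_rank_statistic(x, y):
    """Minimal Wilcoxon signed-rank statistic W for paired samples.

    Zero differences are dropped; ties in |d| are not rank-averaged
    (fine for distinct values, as in this toy example).
    """
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0]
    # Ranks of |d|, 1-based: double argsort gives each value's rank.
    ranks = np.argsort(np.argsort(np.abs(d))) + 1
    w_plus = ranks[d > 0].sum()
    w_minus = ranks[d < 0].sum()
    return float(min(w_plus, w_minus))

# Hypothetical per-participant mean head speeds (mm/s), paired by subject.
audio_only = np.array([0.71, 0.55, 0.62, 0.48, 0.80, 0.59, 0.66, 0.52])
music_video = np.array([0.35, 0.28, 0.31, 0.25, 0.41, 0.30, 0.33, 0.27])
stat = signed_rank_statistic(audio_only, music_video)
```

When every participant moves faster in one condition, as here, the statistic is 0, its most extreme value; a library routine (e.g. a statistics package's Wilcoxon test) would then convert W to a p-value.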
Procedia PDF Downloads 167
33 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or a neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution, reaching 1 Mframe/s, and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity: the sensor only captures pixels whose intensity changes. In other words, there is no signal in areas without any intensity change. The sensor is therefore more energy efficient than conventional sensors such as RGB cameras, because redundant data are removed. On the other hand, the data are difficult to handle because the format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1) and a timestamp; it does not include intensity such as RGB values. Since existing algorithms therefore cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. To overcome the difficulties caused by these format differences, most prior art builds frame data and feeds it to deep learning models such as convolutional neural networks (CNNs) for object detection and recognition. However, even then, it is difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of an RGB pixel value, polarity information is clearly not rich enough. In this context, we propose using the timestamp information as the data representation fed to deep learning.
Concretely, we first construct frame data divided by a certain time period and then assign an intensity value according to the timestamp within each frame; for example, a high value is given to a recent signal. We expected this data representation to capture the features of moving objects in particular, because the timestamps encode movement direction and speed. Using the proposed method, we built our own dataset with a DVS fixed on a parked car, in order to develop a surveillance application that detects persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a static scene. For comparison, we reproduced a state-of-the-art method as a benchmark, which constructs frames in the same way as ours but feeds polarity information to the CNN. We then measured the object-detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
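A minimal sketch of the timestamp-based representation described above, assuming events arrive as (x, y, polarity, timestamp) tuples; the function name, windowing, and normalization are our illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def events_to_time_frame(events, width, height, t_start, t_end):
    """Encode a time slice of DVS events as a frame whose pixel value
    reflects timestamp recency: recent events map to high intensities.

    events: iterable of (x, y, polarity, timestamp) tuples.
    Illustrative sketch only; normalization scheme is an assumption.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    span = max(t_end - t_start, 1e-9)  # guard against a zero-length window
    for x, y, _polarity, t in events:
        if t_start <= t < t_end:
            # Keep the most recent (largest) normalized timestamp per pixel.
            frame[y, x] = max(frame[y, x], (t - t_start) / span)
    return frame

# A pixel hit late in the window ends up brighter than one hit early,
# so the frame encodes movement direction and speed.
evts = [(0, 0, +1, 0.1), (1, 0, -1, 0.9)]
frame = events_to_time_frame(evts, width=2, height=1, t_start=0.0, t_end=1.0)
```

Frames built this way can be fed to a CNN in place of polarity frames; stacking several consecutive windows as channels is a common variant.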
Procedia PDF Downloads 101
32 Space Tourism Pricing Model Revolution from Time Independent Model to Time-Space Model
Authors: Kang Lin Peng
Abstract:
Space tourism emerged in 2001 and became famous in 2021, following the development of space technology. The space market is distorted because of excess demand. Space tourism is currently rare and extremely expensive, with biased luxury-product pricing; it is a seller's market in which consumers cannot bargain. Spaceship companies such as Virgin Galactic, Blue Origin, and SpaceX have charged space tourism prices from 200 thousand to 55 million, depending on the height reached in space. There should be a reasonable price set on a fair basis. This study aims to derive a spacetime pricing model, which differs from the general pricing model on the earth's surface. We apply general relativity theory to deduce a mathematical formula for the space tourism pricing model, which covers the traditional time-independent model as a special case. In the future, the price of space travel will differ from that of current flight travel once space travel is measured in light-year units. The pricing of general commodities mainly considers the general equilibrium of supply and demand. A pricing model that considers risks and returns with time as the dependent variable is acceptable when commodities are on the earth's surface, i.e., in flat spacetime. Current economic theories, based on an independent time scale in flat spacetime, do not consider the curvature of spacetime. Current flight services flying at heights of 6, 12, and 19 kilometers are priced with a model that treats the time coordinate independently. However, emerging space tourism flights at heights of 100 to 550 kilometers experience an enlarged spacetime curvature, which means tourists escape from the near-zero curvature at the earth's surface to the larger curvature of space. Different spacetime spans should therefore be considered in the pricing model of space travel, echoing general relativity theory. Intuitively, this spacetime commodity needs to account for the change in spacetime curvature from the earth to space.
We can assume that each unit of spacetime curvature has a value corresponding to the gradient change of the Ricci or energy-momentum tensor. We then know how much to charge by integrating over the spacetime path from the earth to space. The concept is to add a price component p corresponding to general relativity theory. On the earth's surface, the space travel pricing model degenerates into a time-independent model, i.e., the traditional commodity pricing model. The contribution is that deriving the space tourism pricing model is a breakthrough on philosophical and practical issues for space travel. The results extend the traditional time-independent, flat-spacetime model. A pricing model that embeds spacetime, as in general relativity theory, can better reflect the rationality and accuracy of space travel pricing on a universal scale. Moving from an independent time scale to a spacetime scale will bring a brand-new pricing concept for space travel commodities. Fair and efficient spacetime economics will also benefit human travel when we can travel in light-year units in the future.
Keywords: space tourism, spacetime pricing model, general relativity theory, spacetime curvature
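One hedged way to write the abstract's idea down as a formula (the notation is ours, not the paper's): a curvature-dependent surcharge, obtained by integrating a curvature measure along the ascent path, is added to a conventional flat-spacetime base price:

```latex
% Illustrative sketch only; symbols are our assumptions, not the paper's.
% p_flat(t): conventional time-dependent price in flat spacetime.
% R(r): a curvature measure (e.g. derived from the Ricci tensor)
%       along the path from the earth's surface R_E to height h.
% \kappa: a conversion factor from curvature units to price.
p(h, t) \;=\; p_{\mathrm{flat}}(t) \;+\; \kappa \int_{R_E}^{R_E + h} R(r)\,\mathrm{d}r
```

When the curvature term is negligible, as for flights at 6-19 km over near-zero curvature, the integral vanishes and the model degenerates to the time-independent p_flat(t), matching the degeneration claimed in the abstract.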
Procedia PDF Downloads 129
31 Analyzing the Performance of the Philippine Disaster Risk Reduction and Management Act of 2010 as Framework for Managing and Recovering from Large-Scale Disasters: A Typhoon Haiyan Recovery Case Study
Authors: Fouad M. Bendimerad, Jerome B. Zayas, Michael Adrian T. Padilla
Abstract:
With the increasing severity and frequency of disasters worldwide, the governance systems for disaster risk reduction and management in many countries are being put to the test. In the Philippines, the Disaster Risk Reduction and Management (DRRM) Act of 2010 (Republic Act 10121, or RA 10121), the framework for disaster risk reduction and management, was tested when Super Typhoon Haiyan hit the eastern provinces of the Philippines in November 2013. Typhoon Haiyan is considered the strongest recorded typhoon to make landfall, with winds exceeding 252 km/hr. In assessing the performance of RA 10121, the authors conducted document reviews of related policies, plans, and programs, as well as key-informant interviews and focus groups with representatives of 21 national government departments, two (2) local government units, six (6) private sector and civil society organizations, and five (5) development agencies. Our analysis argues that enhancements to RA 10121 are needed in order to meet the challenges of large-scale disasters. The current structure, in which government agencies and departments organize along DRRM thematic areas such as response and relief, preparedness, prevention and mitigation, and recovery and response, proved inefficient in coordinating response and recovery and in mobilizing resources on the ground. However, experience from various disasters has shown the Philippine government's tendency to organize major recovery programs along development sectors such as infrastructure, livelihood, shelter, and social services, which is consistent with the concept of DRM mainstreaming. We will argue that this sectoral approach is more effective than the thematic approach to DRRM.
The council-type arrangement for coordination was also rendered inoperable by Typhoon Haiyan, because the agency responsible for coordination does not have the decision-making authority to mobilize the action and resources of the other agencies that are members of the council. Resources have been devolved to the agencies responsible for each thematic area, and there is no clear command-and-direction structure for decision-making. However, experience also shows that the Philippine government has appointed ad hoc bodies with authority over other agencies to coordinate and mobilize action and resources in recovering from large-scale disasters. We will argue that this approach should be institutionalized within the government structure to enable a more efficient and effective disaster risk reduction and management system.
Keywords: risk reduction and management, recovery, governance, Typhoon Haiyan response and recovery
Procedia PDF Downloads 288
30 CLEAN Jakarta Waste Bank Project: Alternative Solution in Urban Solid Waste Management by Community Based Total Sanitation (CBTS) Approach
Authors: Mita Sirait
Abstract:
Every day Jakarta produces 7,000 tons of solid waste, and only about 5,200 tons are delivered by 720 trucks to a landfill outside the city; the rest is left unmanaged, as reported by the government's cleansing sector. The CLEAN Jakarta Project aims to empower communities to achieve a healthy environment for children and families in the urban slums of the Semper Barat and Penjaringan sub-districts of North Jakarta, home to 20,584 people. The project applies Community Based Total Sanitation (CBTS), an approach that empowers communities to achieve total hygiene and sanitation behaviour through triggering activities. As regulated by the Ministry of Health, it has five pillars: (1) open defecation free, (2) hand-washing with soap, (3) drinking-water treatment, (4) solid-waste management, and (5) waste-water management; and three strategic components: (1) demand creation, (2) supply creation, and (3) an enabling environment. Demand is created by triggering the community's reaction to their daily sanitation habits, exposing them to surroundings where they can see faeces, waste, and other environmental pollutants in order to stimulate a sense of disgust, embarrassment, and responsibility. Triggered people are then challenged to commit to improving their hygiene practices, such as stopping littering and starting waste separation. To support this commitment, and as the supply-creation component, the project initiated waste banks with community working groups. It facilitated capacity-building trainings, the formulation of the waste bank system, and meetings with local authorities to solicit land permits and a waste bank decree. Like a general banking system, a waste bank has customer service, a teller, a manager, and legal papers, and provides saving books and money transactions. In eight months, two waste banks were established, with 148 customers, 17 million rupiah in cash, and about 9 million rupiah of stored recyclables. Approximately 2.5 tons of 15-35 types of recyclables are managed across both waste banks per week.
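The banking analogy above can be sketched as a minimal ledger: a deposit of sorted recyclables is valued at a per-kilogram price and credited to the customer's saving book. The class, customer names, and prices below are illustrative assumptions, not the project's actual system:

```python
# Minimal waste-bank ledger sketch; materials and prices are hypothetical.
PRICE_PER_KG = {"plastic": 2000, "paper": 1000, "metal": 5000}  # rupiah/kg

class WasteBank:
    def __init__(self):
        self.accounts = {}  # customer name -> saving-book balance in rupiah

    def deposit(self, customer, material, kg):
        """Value a sorted-recyclable deposit and credit the saving book."""
        value = PRICE_PER_KG[material] * kg
        self.accounts[customer] = self.accounts.get(customer, 0) + value
        return value

bank = WasteBank()
bank.deposit("Ani", "plastic", 3.0)  # 3 kg of plastic at 2000 rupiah/kg
bank.deposit("Ani", "metal", 0.5)    # 0.5 kg of metal at 5000 rupiah/kg
balance = bank.accounts["Ani"]
```

Stored recyclables are later sold on to buyers, which is where the roughly 9 million rupiah of stored stock reported above comes from.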
For the enabling-environment component, the project initiated sanitation working groups at the community level and a multi-sector group at the government level, and advocated to both parties. The former is expected to promote behaviour change and monitoring in the community, while the latter is expected to support sanitation with regulations, strategies, appraisals, and awards; to coordinate partnering and networking; and to replicate best practices in other areas.
Keywords: urban community, waste management, Jakarta, community based total sanitation (CBTS)
Procedia PDF Downloads 294
29 Control of an Outbreak of Vancomycin-Resistant Enterococci in a Tunisian Teaching Hospital
Authors: Hela Ghali, Sihem Ben Fredj, Mohamed Ben Rejeb, Sawssen Layouni, Salwa Khefacha, Lamine Dhidah, Houyem Said Laatiri
Abstract:
Background: Antimicrobial resistance is a growing threat to public health and motivates improved prevention and control programs at both the international (WHO) and national levels. Despite their low pathogenicity, vancomycin-resistant enterococci (VRE) are common nosocomial pathogens in several countries. The high potential for transmission of VRE between patients, and the threat of transferring its resistance genes to other bacteria such as methicillin-resistant Staphylococcus aureus, justify strict control measures. Indeed, in Europe in 2011, the proportion of invasive infections due to Enterococcus faecium varied from 1% to 35%, and less than 5% of isolates were resistant to vancomycin. In addition, E. faecium is the second cause of urinary tract and wound infections and the third cause of nosocomial bacteremia in the United States. Nosocomial outbreaks of VRE have been described mainly in intensive care, hematology-oncology, and haemodialysis services. An outbreak of VRE affected our hospital, and the objective of this work is to describe the measures put in place. Materials/Methods: Following an alert from the plastic surgery department concerning a patient carrying VRE, a team from the prevention and healthcare security service (a doctor and a technician) conducted an investigation. A review of files was carried out to draw up the synoptic table and the table of cases. Results: By contacting the microbiology laboratory, we identified four other cases of VRE, who had been hospitalized in the medical resuscitation department (two cases, one of whom was transferred to the physical rehabilitation department) and the nephrology department (two cases). The visit revealed several malfunctions in professional practice. A crisis cell was set up to validate, coordinate, and implement control measures following the recommendations of the Technical Center of Nosocomial Infections.
In practice, the process was to technically isolate the cases within their hospitalization sector, restrict the use of antibiotics, strengthen basic hygiene measures, and screen both cases and contacts (other patients and health staff) by rectal swab. These measures helped to control the situation, and no other case was reported for a month. Two new cases were then detected in the intensive care unit after a month. These are, however, short-term strategies, and other medium- and long-term measures should be considered in order to face similar outbreaks. Conclusion: The efforts to control the outbreak were not fully effective, since two new cases were reported after a month. Therefore, continuous monitoring to detect new cases earlier is crucial to minimize the dissemination of VRE.
Keywords: hospitals, nosocomial infection, outbreak, vancomycin-resistant enterococci
Procedia PDF Downloads 305
28 Jigger Flea (Tunga penetrans) Infestations and Use of Soil-Cow Dung-Ash Mixture as a Flea Control Method in Eastern Uganda
Authors: Gerald Amatre, Julius Bunny Lejju, Morgan Andama
Abstract:
Despite several interventions, jigger flea infestations continue to be reported in the Busoga sub-region in Eastern Uganda. The purpose of this study was to identify factors that expose the indigenous people to jigger flea infestations and to evaluate the effectiveness of any indigenous materials used for flea control by the affected communities. Flea compositions in residences were described, and factors associated with flea infestation and indigenous materials used for flea control were evaluated. Field surveys were conducted in the affected communities after obtaining preliminary information on jigger infestation from the offices of the District Health Inspectors to identify the affected villages and households. Informed consent was then sought from the local authorities and household heads. Focus group discussions were conducted with key district informants, namely the District Health Inspectors, District Entomologists, and representatives from the District Health Office. A GPS coordinate was taken at a central point in every household enrolled. Fleas were trapped inside residences using Kilonzo traps. A Kilonzo trap comprises a shallow pan, about three centimetres deep, filled to the brim with water; the edges of the pan are smeared with Vaseline to prevent fleas from crawling out. Traps were placed in the evening and checked the following morning. The trapped fleas were collected in labelled vials filled with 70% aqueous ethanol and taken to the laboratory for identification. Socio-economic and environmental data were also collected. The results indicate that the commonest flea trapped in the residences was the cat flea (Ctenocephalides felis) (50%), followed by the jigger flea (Tunga penetrans) (46%) and the rat flea (Xenopsylla cheopis) (4%). The average size of residences was seven square metres, with a mean of six occupants. The residences were generally untidy, with loose dusty floors and unplastered brick walls.
The majority of the jigger-affected households were headed by peasants (86.7%) or artisans (13.3%). The household heads had mainly stopped at primary school level (80%), with a few at secondary school level (20%); the affected households were thus mainly headed by peasants of low socioeconomic status. The affected community members smear the floors of residences with a soil-cow dung-ash mixture as their only measure to control fleas, and this method was found to be ineffective. The study recommends that home-improvement campaigns be continued in the affected communities to improve sanitation and hygiene in residences as one of the interventions against flea infestations. Other cheap, available, and effective means should be identified to curb jigger flea infestations.
Keywords: cow dung-soil-ash mixture, infestations, jigger flea, Tunga penetrans
Procedia PDF Downloads 136
27 Developing Offshore Energy Grids in Norway as Capability Platforms
Authors: Vidar Hepsø
Abstract:
The energy and oil companies on the Norwegian continental shelf come from a situation where each asset controls and manages its own energy supply (island mode) and are moving towards a situation where assets must collaborate and coordinate energy use with others, sharing the energy that is provided, due to the increased cost and scarcity of electric energy. Currently, several areas are electrified either with an onshore grid cable or receive intermittent energy from offshore wind parks. While the onshore grid in Norway is well regulated, the offshore grid is still in the making, with several oil and gas electrification projects and offshore wind developments just started. The paper describes the shift in mindset that comes with operating this new offshore grid. This transition heralds an increase in collaboration across boundaries and the integration of energy management across companies, businesses, and technical disciplines, along with engagement with stakeholders in the larger society. The transition is described as a function of the new challenges of an increasingly complex energy mix (wind, oil/gas, hydrogen, and others) coupled with increased technical and organizational complexity in energy management. Organizational complexity denotes increasing integration across boundaries, whether these are companies, vendors, professional disciplines, regulatory regimes and bodies, businesses, or the numerous societal stakeholders. New practices must be developed, legitimated, and institutionalized across these boundaries. Only part of this complexity can be mitigated technically, e.g., through batteries, mixed energy systems, and simulation/forecasting tools; many challenges must be mitigated with legitimate, institutionalized governance practices on many levels. Offshore electrification supports Norway's 2030 climate targets but is also controversial, since it draws on the larger society's energy resources.
This means that new systems and practices must also be transparent, not only to the industry and the authorities, but acceptable and just for the larger society as well. The paper reports on ongoing work in Norway: participant observation and interviews in projects and with people working on offshore grid development. The first case presented is the development of an offshore floating wind farm connected to two offshore installations; the second is an offshore grid development initiative providing six installations with electric energy via an onshore cable. The development of the offshore grid is analyzed using a capability platform framework that describes the technical, competence, work-process, and governance capabilities under development in Norway. A capability platform is a 'stack' with the following layers: intelligent infrastructure; information and collaboration; knowledge sharing and analytics; and, finally, business operations. The need for better collaboration and energy-forecasting tools and capabilities in this stack is given special attention in the two use cases presented.
Keywords: capability platform, electrification, carbon footprint, control rooms, energy forecasting, operational model
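The four-layer stack named in the abstract can be sketched as a simple ordered data structure; the layer names come from the text, while the example capabilities attached to each layer are our illustrative assumptions, not findings from the study:

```python
from dataclasses import dataclass, field

@dataclass
class CapabilityLayer:
    """One layer of the capability platform 'stack'."""
    name: str
    capabilities: list = field(default_factory=list)

# Bottom-up ordering as described in the framework; example
# capabilities per layer are hypothetical illustrations.
stack = [
    CapabilityLayer("intelligent infrastructure",
                    ["subsea cables", "sensors", "batteries"]),
    CapabilityLayer("information and collaboration",
                    ["shared control rooms", "cross-company data exchange"]),
    CapabilityLayer("knowledge sharing and analytics",
                    ["energy forecasting", "simulation tools"]),
    CapabilityLayer("business operations",
                    ["coordinated energy management across assets"]),
]

layer_names = [layer.name for layer in stack]
```

Representing the platform this way makes explicit that higher layers (analytics, business operations) depend on the infrastructure and collaboration capabilities beneath them.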
Procedia PDF Downloads 68