Search results for: high leverage points
22008 Registration of Multi-Temporal Unmanned Aerial Vehicle Images for Facility Monitoring
Authors: Dongyeob Han, Jungwon Huh, Quang Huy Tran, Choonghyun Kang
Abstract:
Unmanned Aerial Vehicles (UAVs) have been used for surveillance, monitoring, inspection, and mapping. In this paper, we present a systematic approach for the automatic registration of UAV images for monitoring facilities such as buildings, greenhouses, and civil structures. A two-step process is applied: 1) an image matching technique based on SURF (Speeded Up Robust Features) and RANSAC (Random Sample Consensus); 2) bundle adjustment of multi-temporal images. Image matching to find corresponding points is one of the most important steps for the precise registration of multi-temporal images. We used the SURF algorithm to find matching points quickly and effectively. The RANSAC algorithm was used both in finding matching points between images and in the bundle adjustment process. Experimental results from UAV images showed that our approach is accurate enough to be applied to the change detection of facilities.
Keywords: building, image matching, temperature, unmanned aerial vehicle
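The consensus idea behind the RANSAC step above can be sketched in a few lines. The following is a minimal one-dimensional illustration (fitting a line through point correspondences contaminated by outliers), not the paper's implementation, which applies RANSAC to SURF correspondences between UAV images:

```python
import random

def ransac_line(points, iterations=200, inlier_tol=0.5, seed=0):
    """Fit y = m*x + c by RANSAC: repeatedly fit a line to two randomly
    sampled points and keep the model with the largest consensus set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iterations):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical sample pair; cannot fit y = m*x + c
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        # Consensus set: points whose residual is within the tolerance.
        inliers = [(x, y) for x, y in points if abs(y - (m * x + c)) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers

# Ten points on y = 2x + 1 plus two gross outliers.
points = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(3.0, 40.0), (7.0, -5.0)]
model, inliers = ransac_line(points)
```

The outliers are rejected because no line through them gathers as much consensus as the true model.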
Procedia PDF Downloads 292
22007 The Biomechanical Analysis of Pelvic Osteotomies Applied for Developmental Dysplasia of the Hip Treatment in Pediatric Patients
Authors: Suvorov Vasyl, Filipchuk Viktor
Abstract:
Developmental Dysplasia of the Hip (DDH) is a frequent pathology in the pediatric orthopedist's practice. Neglected or residual cases of DDH in walking patients are usually treated using pelvic osteotomies. Plastic changes take place at hinge points due to acetabulum reorientation during surgery. The classically described hinge points, and the traditional division of pelvic osteotomies into reshaping and reorientation types, are currently debated. The purpose of this article was to evaluate the biomechanical changes during the most commonly used pelvic osteotomies (Salter, Dega, Pemberton) for DDH treatment in pediatric patients. Methods: virtual pelvic models of 2- and 6-year-old patients were created, material properties were assigned, pelvic osteotomies were simulated, and biomechanical changes were evaluated using finite element analysis (FEA). Results: it was revealed that the patient's age has an impact on pelvic bone and cartilage density (in younger patients the pelvic elements are more pliable; p<0.05). Stress distribution after each of the abovementioned pelvic osteotomies was assessed in the 2- and 6-year-old patients' pelvic models, and hinge points were evaluated. The new term "restriction point" was introduced, meaning a place where restriction of acetabular deformity correction occurs. Pelvic ligament attachment points were mainly these restriction points. Conclusions: it was found that there are no purely reshaping or reorientation pelvic osteotomies as previously believed; the pelvic ring acts as a unit in carrying the applied load. Biomechanical overload of the triradiate cartilage was revealed during Salter osteotomy in the 2-year-old patient and during Pemberton osteotomy in both the 2- and 6-year-old patients; overload of the posterior cortical layer in the greater sciatic notch was revealed during Dega osteotomy in the 2-year-old patient.
Level of Evidence: Level IV, prognostic.
Keywords: developmental dysplasia of the hip, pelvic osteotomy, finite element analysis, hinge point, biomechanics
Procedia PDF Downloads 100
22006 The Importance of Localization in Large Construction Projects
Authors: Ali Mohammadi
Abstract:
The basis for the construction of any project is a map, from which the surveyor determines the coordinates of points on the ground using a total station. In projects such as dams, roads, tunnels, and pipelines, base points determined using GPS can create challenges for the surveyor during control. First, we examine some of the map projections on which maps are designed, summarizing their differences and the challenges surveyors face in controlling them: in order to build projects, we need true lengths and angles, so we must use coordinates whose calculations provide these values. We examine several examples to clarify the concept of localization, so that the surveyor knows whether he is facing this challenge and, if so, how to solve it.
Keywords: UTM, scale factor, cartesian, traverse
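The scale-factor issue at the heart of localization can be illustrated numerically. The sketch below uses a low-order series approximation of the UTM point scale factor (the higher-order ellipsoidal terms are deliberately dropped, and the coordinates are made up for illustration); dividing a grid distance by the scale factor recovers the true ground distance the surveyor needs:

```python
import math

def utm_point_scale_factor(lat_deg, lon_deg, central_meridian_deg, k0=0.9996):
    """Approximate UTM point scale factor using the lowest-order term of
    the standard series: k ~ k0 * (1 + (dlon * cos(lat))^2 / 2)."""
    lat = math.radians(lat_deg)
    dlon = math.radians(lon_deg - central_meridian_deg)
    return k0 * (1.0 + (dlon * math.cos(lat)) ** 2 / 2.0)

def grid_to_ground(grid_distance, k):
    """Convert a UTM grid distance to a ground distance (localization step)."""
    return grid_distance / k

# Point 1.5 degrees east of the zone's central meridian at latitude 35 N.
k = utm_point_scale_factor(35.0, 52.5, 51.0)
ground = grid_to_ground(1000.0, k)  # a 1000 m grid length is slightly longer on the ground grid than k would suggest
```

On the central meridian the factor equals k0 = 0.9996, so a 1000 m grid distance corresponds to about 1000.4 m on the ground, which is exactly the discrepancy the abstract warns about.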
Procedia PDF Downloads 81
22005 [Keynote Talk]: Existence of Random Fixed Point Theorem for Contractive Mappings
Authors: D. S. Palimkar
Abstract:
Random fixed point theory has received much attention in recent years, and it is needed for the study of various classes of random equations. The study of random fixed point theorems was initiated by the Prague school of probabilists in the 1950s. The existence and uniqueness of fixed points for self-maps of a metric space, obtained by altering the distances between points with the use of a control function, is an interesting aspect of classical fixed point theory. A newer category of fixed point problems treats a single self-map with the help of a control function that alters the distance between two points in a metric space, called an altering distance function. In this paper, we prove the existence and uniqueness of a random common fixed point for a pair of random mappings under a weakly contractive condition, for a generalized altering distance function in Polish spaces, using a Random Common Fixed Point Theorem for Generalized Weakly Contractions.
Keywords: Polish space, random common fixed point theorem, weakly contractive mapping, altering function
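For orientation, a standard textbook form of a weakly contractive condition with altering distance functions can be written as follows; this is the common formulation in the deterministic setting, stated here for context rather than taken from the paper itself:

```latex
% T is a self-map of a metric space (X, d); \psi and \varphi are altering
% distance functions: continuous, nondecreasing, and vanishing only at 0.
\psi\bigl(d(Tx, Ty)\bigr) \;\le\; \psi\bigl(d(x, y)\bigr) - \varphi\bigl(d(x, y)\bigr)
\quad \text{for all } x, y \in X.
```

Taking ψ as the identity recovers the classical weak contraction d(Tx, Ty) ≤ d(x, y) − φ(d(x, y)).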
Procedia PDF Downloads 273
22004 Keypoint Detection Method Based on Multi-Scale Feature Fusion of Attention Mechanism
Authors: Xiaoxiao Li, Shuangcheng Jia, Qian Li
Abstract:
Keypoint detection has always been a challenge in the field of image recognition. This paper proposes a novel keypoint detection method called Multi-Scale Feature Fusion Convolutional Network with Attention (MFFCNA). We verified that multi-scale features with an attention mechanism module have better feature expression capability. Feature fusion between different scales enriches the information the network model can express and makes the network easier to converge. On our self-made street sign corner dataset, the MFFCNA model achieves an accuracy of 97.8% and a recall of 81%, which are 5 and 8 percentage points higher than the HRNet network, respectively. On the COCO dataset, the AP is 71.9% and the AR is 75.3%, which are 3 points and 2 points higher than HRNet, respectively. Extensive experiments show that our method yields a remarkable improvement in keypoint recognition tasks, and its recognition effect is better than that of existing methods. Moreover, our method can be applied not only to keypoint detection but also to image classification and semantic segmentation, with good generality.
Keywords: keypoint detection, feature fusion, attention, semantic segmentation
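The core of attention-based multi-scale fusion is a learned weighting over per-scale features followed by a weighted sum. The toy sketch below stands in for that idea only: the "attention scores" are simply each scale's mean activation rather than the learned module MFFCNA actually uses, and the feature vectors are made up:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_fuse(scale_features):
    """Fuse per-scale feature vectors with attention weights derived from
    each scale's mean activation (a hand-written stand-in for a learned
    attention module); returns the weighted sum of the scale features."""
    scores = [sum(f) / len(f) for f in scale_features]
    weights = softmax(scores)
    dim = len(scale_features[0])
    return [sum(w * f[i] for w, f in zip(weights, scale_features))
            for i in range(dim)]

# Three scales, each contributing a 2-dimensional feature vector.
fused = attention_fuse([[1.0, 2.0], [3.0, 4.0], [0.0, 1.0]])
```

The fused vector is a convex combination of the scale features, so scales with stronger responses dominate without any scale being discarded outright.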
Procedia PDF Downloads 119
22003 Increasing the Apparent Time Resolution of Tc-99m Diethylenetriamine Pentaacetic Acid Galactosyl Human Serum Albumin Dynamic SPECT by Use of an 180-Degree Interpolation Method
Authors: Yasuyuki Takahashi, Maya Yamashita, Kyoko Saito
Abstract:
In general, dynamic SPECT data acquisition needs a few minutes for one rotation. Thus, the time-activity curve (TAC) derived from dynamic SPECT is relatively coarse. In order to effectively shorten the interval between data points, we adopted a 180-degree interpolation method, which is already used for the reconstruction of X-ray CT data. In this study, we applied this 180-degree interpolation method to SPECT and investigated its effectiveness. To briefly describe the method: the 180-degree data in the second half of one rotation are combined with the 180-degree data in the first half of the next rotation to generate a 360-degree data set appropriate for the time halfway between the first and second rotations. In both a phantom study and a patient study, the data points from the interpolated images fell in good agreement with the data points tracking the accumulation of 99mTc activity over time for the appropriate regions of interest. We conclude that data derived from interpolated images improve the apparent time resolution of dynamic SPECT.
Keywords: dynamic SPECT, time resolution, 180-degree interpolation method, 99mTc-GSA
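The half-rotation recombination described above reduces to simple slicing once each rotation is stored as an angle-ordered list of projections. A minimal sketch (with toy scalar "projections" standing in for real projection images):

```python
def interpolate_half_rotations(rotations):
    """Given full-rotation projection sets (each an angle-ordered list),
    build interpolated sets by combining the second half of rotation i
    with the first half of rotation i+1, yielding a 360-degree data set
    centered halfway between consecutive rotations."""
    interpolated = []
    for first, second in zip(rotations, rotations[1:]):
        half = len(first) // 2
        interpolated.append(first[half:] + second[:half])
    return interpolated

# Three rotations of four views each (toy values in acquisition order).
rots = [[0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23]]
mid = interpolate_half_rotations(rots)  # -> [[2, 3, 10, 11], [12, 13, 20, 21]]
```

N rotations thus yield N−1 additional interpolated time points, which is exactly what shortens the effective sampling interval of the TAC.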
Procedia PDF Downloads 493
22002 Impact of Facility Disruptions on Demand Allocation Strategies in Reliable Facility Location Models
Authors: Abdulrahman R. Alenezi
Abstract:
This research investigates the effects of facility disruptions on demand allocation within the context of the Reliable Facility Location Problem (RFLP). We explore two distinct scenarios: one where primary and backup facilities can fail simultaneously, and another where such simultaneous failures are not possible. The RFLP model is tailored to reflect these scenarios, incorporating different approaches to transportation cost calculations. Utilizing a Lagrange relaxation method, the model achieves high efficiency, yielding an average optimality gap of 0.1% within 12.2 seconds of CPU time. Findings indicate that primary facilities are typically sited closer to demand points than backup facilities. In cases where simultaneous failures are prohibited, demand points are predominantly assigned to the nearest available facility. Conversely, in scenarios permitting simultaneous failures, demand allocation may prioritize factors beyond mere proximity, such as failure rates. This study highlights the critical influence of facility reliability on strategic location decisions, providing insights for enhancing resilience in supply chain networks.
Keywords: reliable supply chain network, facility location problem, reliable facility location model, Lagrange relaxation
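The primary/backup assignment logic discussed above can be sketched as a nearest-and-second-nearest rule. This toy version works on one-dimensional locations and ignores failure rates and the Lagrange relaxation machinery of the full RFLP; it shows only the proximity-based baseline against which the paper's richer allocations are compared:

```python
def assign_demand(demand_points, facilities):
    """Assign each demand point a primary (nearest facility) and a backup
    (second-nearest facility). Locations are 1-D coordinates for
    illustration; the full RFLP also weighs failure probabilities."""
    assignments = {}
    for d in demand_points:
        ranked = sorted(facilities, key=lambda f: abs(f - d))
        primary = ranked[0]
        backup = ranked[1] if len(ranked) > 1 else None
        assignments[d] = (primary, backup)
    return assignments

# Two demand points served by three candidate facilities.
result = assign_demand([1.0, 9.0], [0.0, 10.0, 5.0])
# -> {1.0: (0.0, 5.0), 9.0: (10.0, 5.0)}
```

Note how the middle facility at 5.0 naturally emerges as the shared backup, mirroring the finding that backups sit farther from demand points than primaries.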
Procedia PDF Downloads 27
22001 Background Check System for Turkish IT Companies
Authors: Arzu Baloglu, Ugur Kaplancali
Abstract:
This paper focuses on background check systems and pre-employment screening. In our study, we attempted to build an online background checking site that helps employers when hiring employees. Our site has two types of users: free and powered. Free users are the employees, and powered users are the employers who will hire employees. The database of the site contains all the information about the employees and employers registered in the system, so the employers can search, based on their criteria, to find a suitable employee for the job. The web site also has a comments and points system. The current employer can make comments about his/her employees and can also give them points. The comments are shown on the employee's profile, so when an employer searches for an employee, he/she can check the employee's points and comments to see whether he or she is capable of the job. Employers can also follow particular employees if they desire. The system has been designed and implemented using ASP.NET, C#, and JavaScript. The output has a user-friendly interface, which also aims to provide useful information for Turkish technology companies.
Keywords: background, checking, verification, human resources, online
Procedia PDF Downloads 198
22000 Effects of Rations with High Amount of Crude Fiber on Rumen Fermentation in Suckler Cows
Authors: H. Scholz, P. Kuehne, G. Heckenberger
Abstract:
Problems during the calving period (December until May) are often the result of a high body condition score (BCS) at this time. At the end of the grazing period (frequently after early weaning), however, an increase in BCS can often be observed under German conditions. In the last eight weeks before calving, the body condition should be reduced, or at least not increased. Rations with a higher amount of crude fiber can be used (rations with straw or late-mowed grass silage). Fermentative digestion of fiber is slow and incomplete; that is why the fermentative process in the rumen can be reduced over a long feeding time. Viewed in this context, the feed intake of suckler cows (8 weeks before calving) on different rations, and the fermentation in the rumen, should be checked by taking rumen fluid. Eight suckler cows (Charolais) were fed a Total Mixed Ration (TMR) in the last eight weeks before calving and grass silage after calving. The amount of crude fiber in the TMR (grass silage, straw, mineral) before calving was varied by the addition of straw (30% [TMR1] vs. 60% [TMR2] of dry matter). After calving, grass silage [GS] was fed ad libitum, and the last measurement of rumen fluid took place on the pasture [PS]. Rumen fluid, plasma, body weight, and backfat thickness were collected. Rumen fluid pH was assessed using an electronic pH meter. Volatile fatty acids (VFA), sedimentation, methylene blue, and the number of infusorians were measured. From these 4 parameters, an "index of rumen fermentation" [IRF] was formed. Fixed effects of treatment (TMR1, TMR2, GS, and PS) and number of lactations (3-7 lactations) were analyzed by ANOVA using SPSS Version 25.0 (significant at p ≤ 5%).
Rumen fluid pH was significantly influenced by variant (TMR1: 6.6; TMR2: 6.9; GS: 6.6; PS: 6.9) but was not affected by the other effects. The IRF showed disturbed fermentation in the rumen when feeding TMR 1+2 with a high amount of crude fiber (score > 10.0 points) and a very good environment for fermentation during grazing on the pasture (score: 6.9 points). Furthermore, significant differences were found for VFA, methylene blue, and the number of infusorians. The use of rations with a high amount of crude fiber from weaning to calving may cause deviations from undisturbed fermentation in the rumen and adversely affect the utilization of the feed in the rumen.
Keywords: rumen fermentation, suckler cow, digestibility organic matter, crude fiber
Procedia PDF Downloads 144
21999 An Intelligent Watch-Over System Using an IoT Device, for Elderly People Living by Themselves
Authors: Hideo Suzuki, Yuya Kiyonobu, Kotaro Matsushita, Masaki Hanada, Rie Suzuki, Noriko Niijima, Noriko Uosaki, Tadao Nakamura
Abstract:
People often worry about elderly family members who are living by themselves or staying alone somewhere. An intelligent watch-over system for such elderly people, using a Raspberry Pi IoT device, has been newly developed to monitor those who live or stay separately from their families and to alert the families if a problem occurs. The system consists of motion sensors and combined temperature-humidity sensors located at seven points within an elderly person's home. The intelligent algorithms of the system use the data gathered by these sensors to detect signs of possibly unhealthy situations arising for the elderly relative, e.g., an unusually long bath or restroom visit, or too high a room temperature. By using these sensors instead of cameras and microphones placed around the house, the system gives greater consideration to the elderly person's privacy. The system can send a Twitter direct message to designated family members when an elderly relative is possibly in an unhealthy condition. It thus helps decrease family members' anxieties regarding their elderly relatives and increases their sense of security.
Keywords: elderly person, IoT device, Raspberry Pi, watch-over system
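The detection layer described above amounts to comparing sensor-derived quantities against thresholds and emitting alerts. The sketch below is a guess at what such a rule layer might look like; the reading keys and threshold values are illustrative assumptions, not the paper's actual parameters, and the real system fuses data from seven sensor locations:

```python
def check_alerts(readings, max_bath_minutes=45, max_temp_c=30.0):
    """Evaluate a dict of sensor-derived readings against simple
    thresholds and return a list of alert messages. Keys and thresholds
    are hypothetical; a real deployment would calibrate them per home."""
    alerts = []
    if readings.get("bath_minutes", 0) > max_bath_minutes:
        alerts.append("unusually long bathing time")
    if readings.get("room_temp_c", 0.0) > max_temp_c:
        alerts.append("room temperature too high")
    return alerts

# A bath lasting an hour at a normal room temperature triggers one alert.
alerts = check_alerts({"bath_minutes": 60, "room_temp_c": 24.0})
```

In the actual system, a non-empty alert list would be forwarded as a Twitter direct message to the designated family members.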
Procedia PDF Downloads 224
21998 Turning Points in the Development of Translator Training in the West from the 1980s to the Present
Authors: B. Sayaheen
Abstract:
The translator's competence is one of the topics that has received a great deal of research in the field of translation studies, because such competencies are still debatable and not yet agreed upon. Moreover, scholars tackle this topic from different points of view. Approaches to teaching these competencies have gone through some developments. This paper aims at investigating these developments, exploring the major turning points and shifts in the development of teaching methods in translator training. The significance of these turning points and their external or internal causes will also be discussed. Based on the past and present status of teaching approaches in translator training, this paper tries to predict the future of these approaches. This paper is mainly concerned with developments of teaching approaches in the West from the 1980s to the present. The reason behind choosing this specific period is not that translator training started in the 1980s, but that most criticism of the teacher-centered approach started at that time. The implications of this research stem from the fact that it identifies the turning points and the causes that led teachers to adopt student-centered approaches rather than teacher-centered approaches, and then to incorporate technology and the Internet in translator training. These causes were classified as external or internal. Translation programs in the West and in other cultures can benefit from this study. Translation programs in the West can note that teaching translation is geared toward incorporating more technologies; if these programs already use technology and the Internet to teach translation, they might benefit from the assumed future direction of teaching translation. On the other hand, some non-Western countries, and to be specific some professors, are still applying the teacher-centered approach.
Moreover, these programs should include technology and the Internet in their teaching approaches to meet the drastic changes in the translation process, which seems to rely more and more on software and technologies to accomplish the translator's tasks. Finally, translator training has borrowed many of its approaches from other disciplines, mainly language teaching. The teaching approaches in translator training have gone through some developments, from teacher-centered to student-centered and then toward the integration of technologies and the Internet. Both internal and external causes have played a crucial role in these developments. These borrowed approaches should be comprehensively evaluated in order to see whether they achieve the goals of translator training. Such evaluation may lead us to come up with new teaching approaches developed specifically for translator training. While considering these methods and designing new approaches, we need to keep an eye on the future needs of the market.
Keywords: turning points, developments, translator training, market, the West
Procedia PDF Downloads 114
21997 Empirical Study on Causes of Project Delays
Authors: Khan Farhan Rafat, Riaz Ahmed
Abstract:
Renowned offshore organizations are drifting towards collaborative exertion to win and implement international projects for business gains. However, even without financial constraints, with the availability of skilled professionals, and despite improved project management practices through state-of-the-art tools and techniques, project delays have become a norm these days. This situation calls for exploring the factor(s) affecting the bonding between project management performance and project success. In the context of the well-known 3M's of project management (that is, manpower, machinery, and materials), machinery and materials are dependent upon manpower. Because the body of knowledge is settled on the influence of national culture on men, the impact of that culture on the link between project management performance and project success needs to be investigated in detail to arrive at the possible cause(s) of project delays. This research initiative was therefore undertaken to fill the research gap. The unit of analysis for the proposed research was the individuals who had worked on skyscraper construction projects. In relevant studies, project management is best described using construction examples; it is for this reason that the project-oriented city of Dubai was chosen to investigate the causes of project delays. A structured questionnaire survey was disseminated online with the courtesy of the Project Management Institute local chapter to carry out the cross-sectional study. The Construction Industry Institute, Austin, of the United States of America, along with 23 high-rise builders in Dubai, were also contacted by email, requesting their contribution to the study and providing them with the online link to the survey questionnaire. The reliability of the instrument was warranted using a Cronbach's alpha coefficient of 0.70.
The appropriateness of sampling adequacy and homogeneity of variance was ensured by keeping the Kaiser-Meyer-Olkin (KMO) measure and Bartlett's test of sphericity in the ranges ≥ 0.60 and < 0.05, respectively. Factor analysis was used to verify construct validity. During exploratory factor analysis, all items were loaded using a threshold of 0.4. Four hundred and seventeen respondents, including members of top management, project managers, and project staff, contributed to the study. The link between project management performance and project success was significant at the 0.01 level (2-tailed) and the 0.05 level (2-tailed) for Pearson's correlation. Before initiating the moderator analysis, tests for linearity, multicollinearity, outliers, leverage points, influential cases, homoscedasticity, and normality were carried out, as these are prerequisites for conducting a moderator review. The moderator analysis, using a macro named PROCESS, was performed to verify the hypothesis that national culture has an influence on the said link. The empirical findings, when compared with Hofstede's results, showed high power distance as the cause of construction project delays in Dubai. The research outcome calls for project sponsors and top management to reshape their project management strategy and allow for low power distance between management and project personnel for the timely completion of projects.
Keywords: causes of construction project delays, construction industry, construction management, power distance
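The reliability check mentioned above, Cronbach's alpha, has a closed form that is easy to reproduce. The sketch below implements the standard formula α = k/(k−1) · (1 − Σ item variances / total variance); the survey data shown are invented for illustration (the paper's instrument met the usual α ≥ 0.70 acceptability threshold):

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of items, each item being a list of
    respondent scores (all items answered by the same respondents)."""
    k = len(item_scores)
    item_vars = [statistics.pvariance(item) for item in item_scores]
    # Per-respondent total score across all items.
    totals = [sum(scores) for scores in zip(*item_scores)]
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Three questionnaire items answered by four respondents (toy data).
alpha = cronbach_alpha([[4, 5, 3, 5], [4, 4, 3, 5], [5, 5, 2, 5]])
```

Highly consistent items drive alpha toward 1; an instrument would be revised if alpha fell below the 0.70 cutoff the study used.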
Procedia PDF Downloads 213
21996 The Determinants of Financial Ratio Disclosures and Quality: Evidence from an Emerging Market
Authors: Ben Kwame Agyei-Mensah
Abstract:
This study investigated the influence of firm-specific characteristics, including the proportion of non-executive directors, ownership concentration, firm size, profitability, debt-equity ratio, liquidity, and leverage, on the extent and quality of the financial ratios disclosed by firms listed on the Ghana Stock Exchange. The research was conducted through a detailed analysis of the 2012 financial statements of the listed firms. Descriptive analysis was performed to provide the background statistics of the variables examined. This was followed by regression analysis, which forms the main data analysis. The extent of financial ratio disclosure, with a mean of 62.78%, indicates that most of the firms listed on the Ghana Stock Exchange did not overwhelmingly disclose such ratios in their annual reports. The low quality of financial ratio disclosure, with a mean of 6.64%, indicates that the disclosures failed woefully to meet the International Accounting Standards Board's qualitative characteristics of relevance, reliability, comparability, and understandability. The results of the multiple regression analysis show that leverage (gearing ratio) and return on investment (dividend per share) are statistically significantly associated with the extent of financial ratio disclosure. Board ownership concentration and the proportion of (independent) non-executive directors, on the other hand, were found to be statistically associated with the quality of the financial ratios disclosed. There is a significant negative relationship between ownership concentration and the quality of financial ratio disclosure: under a higher level of ownership concentration, lower-quality financial ratios are disclosed.
The findings also show that there is a significant positive relationship between board composition (the proportion of non-executive directors) and the quality of financial ratio disclosure.
Keywords: voluntary disclosure, firm-specific characteristics, financial reporting, financial ratio disclosure, Ghana Stock Exchange
Procedia PDF Downloads 593
21995 Impact of Financial Factors on Total Factor Productivity: Evidence from Indian Manufacturing Sector
Authors: Lopamudra D. Satpathy, Bani Chatterjee, Jitendra Mahakud
Abstract:
Rapid economic growth in terms of output and investment necessitates substantial growth in the Total Factor Productivity (TFP) of firms, which is an indicator of an economy's technological change. The strong empirical relationship between financial sector development and economic growth clearly indicates that firms' financing decisions do affect their levels of output via their investment decisions; hence a linkage is established between financial factors and the productivity growth of firms. To achieve smooth and continuous economic growth over time, it is imperative to understand the financial channel, which serves as one of the vital channels. The theoretical argument behind this linkage is that when internal financial capital is not sufficient for investment, firms rely upon external sources of finance. But due to frictions and information asymmetry, it is always costlier for firms to raise external capital from the market, which in turn affects their investment sentiment and productivity. This kind of financial position puts heavy pressure on firms' productive activities. Keeping in view this theoretical background, the present study analyzes the role of both external and internal financial factors (leverage, cash flow, and liquidity) in the determination of the total factor productivity of firms in the manufacturing industry and its sub-industries, maintaining a set of firm-specific variables as control variables (size, age, and disembodied technological intensity). An estimate of the total factor productivity of the Indian manufacturing industry and its sub-industries is computed using a semi-parametric approach, i.e., the Levinsohn-Petrin method. The study establishes the relationship between financial factors and the productivity growth of 652 firms using a dynamic panel GMM method, covering the period between 1997-98 and 2012-13.
From the econometric analyses, it has been found that internal cash flow has a positive and significant impact on the productivity of the overall manufacturing sector. Other financial factors, like leverage and liquidity, also play a significant role in the determination of the total factor productivity of the Indian manufacturing sector. The significant role of internal cash flow in determining firm-level productivity suggests that access to external finance is not easily available to Indian companies. Further, the negative impact of leverage on productivity could be due to the less developed bond market in India. These findings have implications for policy makers, who could undertake reforms to develop the external bond market, through which financially constrained companies would be able to raise financial capital in a cost-effective manner and channel their investments into highly productive activities, helping to accelerate economic growth.
Keywords: dynamic panel, financial factors, manufacturing sector, total factor productivity
Procedia PDF Downloads 332
21994 Security Issues in Long Term Evolution-Based Vehicle-To-Everything Communication Networks
Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba
Abstract:
The ability of vehicles to communicate with other vehicles (V2V), the physical (V2I) and network (V2N) infrastructures, pedestrians (V2P), etc., collectively known as V2X (Vehicle to Everything), will enable a broad and growing set of applications and services within the intelligent transport domain for improving road safety, alleviating traffic congestion, and supporting autonomous driving. The telecommunication research and industry communities and standardization bodies (notably 3GPP) have approved, in Release 14, cellular communications connectivity to support V2X communication (known as LTE-V2X). An LTE-V2X system will combine simultaneous connectivity across existing LTE network infrastructures via the LTE-Uu interface with direct device-to-device (D2D) communications. In order for V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks, and the nature of most V2X applications, which involve human safety, make it essential to protect V2X messages from attacks that can result in catastrophically wrong decisions or actions, including ones affecting road safety. Attack vectors include impersonation, modification, masquerading, replay, man-in-the-middle, and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network.
However, this protocol faces technical challenges, such as high signaling overhead, lack of synchronization, handover delay, and potential control plane signaling overloads, as well as privacy preservation issues, and so cannot satisfy the security requirements of the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways in which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, to allow for security operations with minimal overhead cost, which is desirable for V2X services. The proposed architecture can ensure fast, secure, and robust V2X services under an LTE network while meeting V2X security requirements.
Keywords: authentication, long term evolution, security, vehicle-to-everything
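At its core, the mutual authentication discussed above rests on a challenge-response exchange keyed by a pre-shared secret. The sketch below is a deliberately simplified HMAC-based toy, not the AKA protocol itself (AKA derives its responses from operator-specific algorithm sets and maintains sequence numbers for replay protection, both omitted here):

```python
import hashlib
import hmac
import secrets

def respond(shared_key: bytes, challenge: bytes) -> bytes:
    """Compute a keyed response to a fresh random challenge. Only a
    holder of the shared key can produce a valid response."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Check a response in constant time to avoid timing side channels."""
    return hmac.compare_digest(respond(shared_key, challenge), response)

# Network side issues a challenge; the UE answers with its shared key.
key = b"pre-shared-key"              # placeholder secret, not a real credential
challenge = secrets.token_bytes(16)  # fresh nonce defeats simple replay
resp = respond(key, challenge)
```

Running the exchange in both directions, with each side issuing its own challenge, is what makes the authentication mutual.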
Procedia PDF Downloads 167
21993 Determining Abnormal Behaviors in UAV Robots for Trajectory Control in Teleoperation
Authors: Kiwon Yeom
Abstract:
Change points are abrupt variations in a data sequence. Detection of change points is useful in modeling, analyzing, and predicting time series in application areas such as robotics and teleoperation. In this paper, a change point is defined to be a discontinuity in one of its derivatives. This paper presents a reliable method for detecting discontinuities within three-dimensional trajectory data. The problem of determining one or more discontinuities is considered in regular and irregular trajectory data from teleoperation. We examine the geometric detection algorithm and illustrate the use of the method on real data examples.
Keywords: change point, discontinuity, teleoperation, abrupt variation
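The definition above, a change point as a discontinuity in a derivative, suggests a simple detector: difference the sequence once and flag where the discrete derivative jumps. The sketch below is a one-dimensional reduction of that idea (the paper works with three-dimensional trajectories, and the threshold here is an illustrative assumption):

```python
def change_points(values, threshold):
    """Return indices where the discrete first derivative jumps by more
    than `threshold`, i.e. where a derivative discontinuity suggests a
    change point in the sampled signal."""
    derivative = [b - a for a, b in zip(values, values[1:])]
    return [i + 1
            for i, (d0, d1) in enumerate(zip(derivative, derivative[1:]))
            if abs(d1 - d0) > threshold]

# Slope changes from 1 to 7 at index 3, so that index is flagged.
pts = change_points([0, 1, 2, 3, 10, 17, 24], threshold=2.0)  # -> [3]
```

For 3-D trajectory data, the same test would be applied per coordinate (or to the norm of the velocity vector) at each sample.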
Procedia PDF Downloads 167
21992 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint
Authors: Semachew M. Kassa, Africa M Geremew, Tezera F. Azmatch, Nandyala Darga Kumar
Abstract:
Frequency ratio (FR) and analytical hierarchy process (AHP) methods are developed based on past landslide failure points to identify the landslide susceptibility mapping because landslides can seriously harm both the environment and society. However, it is still difficult to select the most efficient method and correctly identify the main driving factors for particular regions. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, including Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict the landslide susceptibility at 12.5 m spatial scale. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) method, according to the classification results based on inventory landslide points. The findings also showed that around 35% of the study region was made up of places with high and very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns in landslide susceptibility. The towns with the highest landslide risk include Amhara Saint Town's western part, the Northern part, and St. Gebreal Church villages, with mean susceptibility values greater than 0.5. However, rainfall, distance to road, and slope were typically among the top leading factors for most villages. The primary contributing factors to landslide vulnerability were slightly varied for the five models. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies. 
It also suggests that different places should adopt different safeguards to reduce or prevent serious damage from landslide events. Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine
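The model comparison described in the abstract can be sketched with scikit-learn; the data below are synthetic stand-ins for the fourteen conditioning factors, so the scores will not reproduce the paper's figures.

```python
# Hedged sketch: evaluating susceptibility classifiers with F1 and AUC,
# as in the abstract. Features and labels are synthetic stand-ins for
# the fourteen landslide conditioning factors (LCFs).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 14))  # 14 conditioning factors per mapping unit
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [("RF", RandomForestClassifier(random_state=0)),
                    ("LR", LogisticRegression(max_iter=1000))]:
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]   # susceptibility in [0, 1]
    pred = (prob > 0.5).astype(int)          # >0.5 = high/very high risk
    print(name, round(f1_score(y_te, pred), 2), round(roc_auc_score(y_te, prob), 2))
```

On real data, the same loop would also include the SVM, ANN, and NB models, and the 0.5 threshold matches the abstract's high/very-high risk cut-off.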
Procedia PDF Downloads 8221991 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning
Authors: Akeel A. Shah, Tong Zhang
Abstract:
Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time-consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine-learning-assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of points on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions - many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT).
The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs, the result can be an order of magnitude or more of speed-up. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in, for example, SchNet and MEGNet. The graph incorporates information regarding the numbers, types, and properties of atoms; the types of bonds; and bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment, and HOMO/LUMO. Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning
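The Δ-ML strategy described above can be sketched with a generic kernel regressor standing in for the GCN; the descriptors, fidelity functions, and model choice below are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of the Δ-ML idea: learn the difference between cheap
# low-fidelity outputs and scarce high-fidelity ones, then predict
# high fidelity = low fidelity + learned correction. Toy data only;
# the actual work uses graph convolutional networks on molecules.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 3))          # toy molecular descriptors
low = X.sum(axis=1)                            # cheap low-fidelity "energy"
high = low + 0.3 * np.sin(3 * X[:, 0])         # high fidelity = low + correction

idx = rng.choice(500, size=50, replace=False)  # only 50 high-fidelity labels
delta_model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0)
delta_model.fit(X[idx], (high - low)[idx])     # learn the correction map

pred_high = low + delta_model.predict(X)       # Δ-ML prediction everywhere
rmse = np.sqrt(np.mean((pred_high - high) ** 2))
print(f"RMSE with 50 high-fidelity points: {rmse:.4f}")
```

The point of the design is that the correction is smoother and smaller than the raw high-fidelity target, so far fewer expensive labels are needed to learn it.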
Procedia PDF Downloads 4221990 Efficient Corporate Image as a Strategy for Enhancing Profitability in Hotels
Authors: Lucila T. Magalong
Abstract:
The hotel industry has been using corporate image and reputation to maintain service quality, customer satisfaction, and customer loyalty, to position itself against competitors, and to facilitate growth strategies. With increasing pressure to perform, hotels have even created hybrid service strategies to compete in niche markets across pricing and service-level parameters. Keywords: corporate image, hotel industry, service quality, customer expectations
Procedia PDF Downloads 46521989 Approximating Fixed Points by a Two-Step Iterative Algorithm
Authors: Safeer Hussain Khan
Abstract:
In this paper, we introduce a two-step iterative algorithm to prove a strong convergence result for approximating common fixed points of three contractive-like operators. Our algorithm generalizes an existing algorithm. It also contains two famous iterative algorithms: the Mann iterative algorithm and the Ishikawa iterative algorithm. Thus our result generalizes the corresponding results proved for the above three iterative algorithms to a class of more general operators. At the end, we remark that nothing prevents us from extending our result to the case of the iterative algorithm with error terms. Keywords: contractive-like operator, iterative algorithm, fixed point, strong convergence
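The two classical iterations named in the abstract can be sketched on a simple contractive map; the map T and the constant step sizes below are illustrative choices, not the paper's operators.

```python
# Hedged sketch: the Mann (one-step) and Ishikawa (two-step) iterations,
# applied to the contractive-like self-map T(x) = cos(x) on the reals.
import math

def T(x):
    return math.cos(x)  # its unique fixed point satisfies x = cos(x)

def mann(x, n_steps=200, alpha=0.5):
    # x_{n+1} = (1 - a) x_n + a T(x_n)
    for _ in range(n_steps):
        x = (1 - alpha) * x + alpha * T(x)
    return x

def ishikawa(x, n_steps=200, alpha=0.5, beta=0.5):
    # two-step: y_n = (1 - b) x_n + b T(x_n);  x_{n+1} = (1 - a) x_n + a T(y_n)
    for _ in range(n_steps):
        y = (1 - beta) * x + beta * T(x)
        x = (1 - alpha) * x + alpha * T(y)
    return x

fp = mann(1.0)
print(fp, ishikawa(1.0))  # both iterates settle at the fixed point of cos
```

A two-step scheme of this shape reduces to the Mann iteration when beta = 0, which is the sense in which the abstract's algorithm "contains" both classical iterations.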
Procedia PDF Downloads 55021988 Vertical Accuracy Evaluation of Indian National DEM (CartoDEM v3) Using Dual Frequency GNSS Derived Ground Control Points for Lower Tapi Basin, Western India
Authors: Jaypalsinh B. Parmar, Pintu Nakrani, Ashish Chaurasia
Abstract:
A Digital Elevation Model (DEM) is an important dataset in GIS-based terrain analysis for many applications and for the assessment of processes such as environmental and climate change studies and hydrologic modelling. The vertical accuracy of a DEM, being geographically dynamic in nature, depends on different parameters which affect the model simulation outcomes. Vertical accuracy assessment in the Indian landscape, especially in low-lying coastal urban terrain such as the lower Tapi Basin, is very limited. In the present study, an attempt has been made to evaluate the vertical accuracy of the 30 m resolution open-source Indian National Cartosat-1 DEM v3 for the Lower Tapi Basin (LTB) in western India. An extensive field investigation was carried out using a stratified random fast-static DGPS survey in the entire study region, and 117 high-accuracy ground control points (GCPs) were obtained. The DEM was compared with the obtained GCPs, different statistical attributes were envisaged, and vertical error histograms were evaluated. Keywords: CartoDEM, Digital Elevation Model, GPS, lower Tapi basin
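The GCP-based accuracy assessment typically reduces to a few standard vertical-error statistics; a minimal sketch with synthetic elevations follows (the study's actual GCP data are not reproduced here, and the error distribution is an assumption).

```python
# Hedged sketch: vertical-accuracy statistics for a DEM compared against
# GNSS ground control points, as in the abstract. Elevations are synthetic;
# the actual study used 117 DGPS-surveyed GCPs in the Lower Tapi Basin.
import numpy as np

rng = np.random.default_rng(2)
gcp_z = rng.uniform(5, 60, size=117)                      # GNSS elevations (m)
dem_z = gcp_z + rng.normal(loc=0.4, scale=2.5, size=117)  # DEM-style errors

err = dem_z - gcp_z
me = err.mean()                   # mean error (systematic bias)
rmse = np.sqrt((err ** 2).mean()) # root-mean-square error
le90 = 1.6449 * err.std(ddof=1)   # linear error at 90% confidence (normal assumption)
print(f"ME={me:.2f} m  RMSE={rmse:.2f} m  LE90={le90:.2f} m")
```

The vertical error histogram mentioned in the abstract is simply the distribution of `err`, from which these summary statistics are read off.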
Procedia PDF Downloads 35821987 Challenges and Insights by Electrical Characterization of Large Area Graphene Layers
Authors: Marcus Klein, Martina GrießBach, Richard Kupke
Abstract:
Current advances in the research and manufacturing of large area graphene layers are promising for the introduction of this exciting material into the display industry and other applications that benefit from its excellent electrical and optical characteristics. New production technologies in the fabrication of flexible displays, touch screens, or printed electronics apply graphene layers on non-metal substrates and bring new challenges to the required metrology. Traditional measurement concepts for layer thickness, sheet resistance, and layer uniformity are difficult to apply to graphene production processes and are often harmful to the product layer. New non-contact sensor concepts are required to meet these challenges and those of the foreseeable inline production of large area graphene. Dedicated non-contact measurement sensors are a pioneering method to address these issues in a large variety of applications, while significantly lowering the costs of development and process setup. Transferred and printed graphene layers can be characterized with high accuracy over a huge measurement range at very high resolution. Large area graphene mappings are applied for process optimization and for efficient quality control of transfer, doping, annealing, and stacking processes. Examples of doped, defective, and excellent graphene are presented as quality images, and the implications for manufacturers are explained. Keywords: graphene, doping and defect testing, non-contact sheet resistance measurement, inline metrology
Procedia PDF Downloads 30721986 Closed Urban Block versus Open Housing Estates Structures: Sustainability Surveys in Brno, Czech Republic
Authors: M. Wittmann, G. Kopacik, A. Leitmannova
Abstract:
A prominent place in the spatial arrangement of Czech and other post-socialist Central European cities belongs to the 19th century closed urban blocks and the open concrete-panel housing estates erected during the socialist era in the second half of the 20th century. The characteristics of these two fundamentally different types of residential structures have, as we suppose, a different impact on the sustainable development of the urban area. They may influence the ecological stability of the area, its hygienic qualities, the intensity and manner of its use by various social groups, and also, e.g., real estate prices. These and many other phenomena indicate the environmental, social, and economic sustainability of the urban area. The proposed research methodology assessed specific indicators of sustainability on a scale from 0 to 10 points: 5 points correspond to the general standard in the area, 0 points indicate degradation, and 10 points indicate the highest contribution to sustainable development. The survey results are reflected in an overall sustainability index and a residents' satisfaction index. The paper analyses the residential structures in the Central European city of Brno, Czech Republic. Case studies of the urban blocks near the city centre and of the Brno - Vinohrady housing estate are compared. The results imply that a considerable positive impact on the sustainable development of the area should be ascribed to the closed urban blocks near the city centre. Keywords: City of Brno, closed urban block, open housing estate, urban structure
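The indicator-scoring scheme described above lends itself to a simple aggregation; a minimal sketch, assuming equal weights and made-up indicator scores (the paper does not publish its weighting scheme).

```python
# Hedged sketch: aggregating 0-10 indicator scores into an overall
# sustainability index. Indicator names, scores, and equal weights are
# illustrative assumptions, not the paper's published values.
scores = {  # 5 = local standard, 0 = degradation, 10 = highest contribution
    "ecological stability": 7,
    "hygienic quality": 6,
    "intensity of use": 8,
    "real estate prices": 7,
}
weights = {k: 1.0 for k in scores}  # equal weights as a default assumption

index = sum(scores[k] * weights[k] for k in scores) / sum(weights.values())
print(f"sustainability index: {index:.1f} / 10")
```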
Procedia PDF Downloads 17921985 Lyapunov Functions for Extended Ross Model
Authors: Rahele Mosleh
Abstract:
This paper gives a survey of results on the global stability of the extended Ross model for malaria by constructing elegant Lyapunov functions for two epidemic cases, the disease-free and endemic occasions. The model is a nonlinear seven-dimensional system of ordinary differential equations that simulates this phenomenon in a more realistic fashion. We discuss the existence of positive disease-free and endemic equilibrium points of the model. It is stated that the extended Ross model possesses invariant solutions for the human and mosquito populations in a specific domain of the system. Keywords: global stability, invariant solutions, Lyapunov function, stationary points
Procedia PDF Downloads 16521984 Exergy Analysis of a Vapor Absorption Refrigeration System Using Carbon Dioxide as Refrigerant
Authors: Samsher Gautam, Apoorva Roy, Bhuvan Aggarwal
Abstract:
Vapor absorption refrigeration systems can replace vapor compression systems in many applications, as they can operate on a low-grade heat source and are environment-friendly. Widely used refrigerants such as CFCs and HFCs cause significant global warming. Natural refrigerants can be an alternative, among which carbon dioxide is promising for use in automotive air conditioning systems. Its inherent safety, ability to withstand high pressure, and high heat transfer coefficient, coupled with easy availability, make it a likely choice of refrigerant. Various properties of the ionic liquid [bmim][PF₆], such as non-toxicity, stability over a wide temperature range, and the ability to dissolve gases like carbon dioxide, make it a suitable absorbent for a vapor absorption refrigeration system. In this paper, an absorption chiller consisting of a generator, condenser, evaporator, and absorber was studied at an operating temperature of 70 °C. A thermodynamic model was set up using the Peng-Robinson equations of state to predict the behavior of the refrigerant-absorbent pair at different points in the system. A MATLAB code was used to obtain the values of enthalpy and entropy at selected points in the system. The exergy destruction in each component and the exergetic coefficient of performance (ECOP) of the system were calculated by performing an exergy analysis based on the second law of thermodynamics. Graphs were plotted between varying operating conditions and the ECOP obtained in each case, and the effect of every component on the ECOP was examined. The exergetic coefficient of performance was found to be lower than the coefficient of performance based on the first law of thermodynamics. Keywords: [bmim][PF₆] as absorbent, carbon dioxide as refrigerant, exergy analysis, Peng-Robinson equations of state, vapor absorption refrigeration
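As a sketch of the kind of equation-of-state evaluation that underlies the enthalpy and entropy calculations described above, the fragment below (a Python stand-in for the study's MATLAB code, using standard Peng-Robinson constants for CO2) computes the compressibility factor at a given state point.

```python
# Hedged sketch: Peng-Robinson compressibility factor Z for CO2.
# Critical constants and acentric factor are standard literature values;
# the state point chosen is illustrative.
import numpy as np

R = 8.314                                  # J/(mol K)
Tc, Pc, omega = 304.13, 7.377e6, 0.2239    # CO2 critical constants

def pr_Z(T, P):
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc))) ** 2
    A = a * alpha * P / (R * T) ** 2
    B = b * P / (R * T)
    # cubic in Z: Z^3 - (1-B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1 - B), A - 3 * B**2 - 2 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real.max()  # largest real root = vapor-phase Z

print(pr_Z(343.15, 5e6))  # Z for CO2 at 70 °C and 5 MPa
```

Enthalpy and entropy departures then follow from Z and the same A, B parameters via the standard Peng-Robinson departure-function expressions.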
Procedia PDF Downloads 28821983 Hybrid Model for Measuring the Hedge Strategy in Exchange Risk in Information Technology Industry
Authors: Yi-Hsien Wang, Fu-Ju Yang, Hwa-Rong Shen, Rui-Lin Tseng
Abstract:
Businesses are notably exposed to market risk owing to the increasing liberalization of financial markets. Hence, companies usually utilize highly leveraged derivatives to hedge this risk. Since companies choose different hedging instruments to face a variety of exchange rate risks, we employ a Multinomial Logistic-AHP model to analyze the impact of various derivatives. The research summarizes the literature on the factors affecting managers' selection of exchange rate hedging instruments, using a Multinomial Logistic Model further integrated with AHP. Experts' questionnaires are used to test the multi-level selection and hedging effects of different hedging instruments, in order to calculate the weights of the hedging instruments and the multi-level factors and to understand the gap between the empirical results and practical operation. Finally, the Multinomial Logistic-AHP model sorts the weights for analysis. The research findings can serve as a reference for investors in decision-making. Keywords: exchange rate risk, derivatives, hedge, multinomial logistic-AHP
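The AHP stage of the hybrid model derives criterion weights from a pairwise comparison matrix; a minimal sketch using the standard principal-eigenvector method follows (the matrix values are illustrative, not taken from the expert survey).

```python
# Hedged sketch: AHP priority weights from a pairwise comparison matrix
# via its principal eigenvector, with Saaty's consistency ratio check.
# The three criteria and their comparisons are illustrative.
import numpy as np

# pairwise comparisons for three hedging criteria (Saaty 1-9 scale)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w = w / w.sum()                      # normalized priority weights

# consistency ratio CR = CI / RI, with RI = 0.58 for a 3x3 matrix
lam = vals.real[k]
ci = (lam - len(A)) / (len(A) - 1)
cr = ci / 0.58
print(w, round(cr, 3))               # CR < 0.1 means acceptable consistency
```

In the hybrid model, weights of this kind would be combined with the multinomial-logistic stage to rank the hedging instruments.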
Procedia PDF Downloads 44221982 Improvement of the 3D Finite Element Analysis of High Voltage Power Transformer Defects in Time Domain
Authors: M. Rashid Hussain, Shady S. Refaat
Abstract:
The high voltage power transformer is the most essential part of electrical power utilities. Reliability of the transformers is the utmost concern, and any transformer failure can lead to catastrophic losses for an electric power utility. The causes of transformer failure include insulation failure by partial discharge, core and tank failure, cooling unit failure, current transformer failure, etc. For the study of power transformer defects, finite element analysis (FEA) can provide valuable information on the severity of defects. FEA provides a more accurate representation of complex geometries because it considers thermal, electrical, and environmental influences on the insulation models, to obtain the basic characteristics of the insulation system during normal and partial discharge conditions. The purpose of this paper is the time-domain analysis of a 3D defect model of a high voltage power transformer using FEA, to study the electric field distribution at different points on the defects. Keywords: power transformer, finite element analysis, dielectric response, partial discharge, insulation
Procedia PDF Downloads 15821981 Factor Study Affecting Visual Awareness on Dynamic Object Monitoring
Authors: Terry Liang Khin Teo, Sun Woh Lye, Kai Lun Brendon Goh
Abstract:
As applied to dynamic monitoring situations, the prevailing approach to situation awareness (SA) assumes that the relevant areas of interest (AOI) must be perceived before that information can be processed further to affect decision-making and, thereafter, action. It is not entirely clear whether this is the case. This study investigates the monitoring of dynamic objects by matching eye fixations with the relevant AOIs in boundary-crossing scenarios. By this definition, a match is where a fixation is registered on the AOI. While many factors may affect monitoring characteristics, the traffic simulations in this study were designed to explore two factors, namely the number of inbound/outbound traffic transfers and the number of entry and/or exit points in a radar monitoring sector. These two factors were graded into five levels of difficulty, ranging from low to high traffic flow numbers. Combined permutations of the levels of difficulty of these two factors yielded a total of thirty scenarios. The results showed that changes in the traffic flow numbers on transfer produced greater variation, with match rates ranging from 29% to 100%, compared with 80% to 100% for the number of sector entry/exit points. The subsequent analysis is able to determine the type and combination of traffic scenarios where imperfect matching is likely to occur. Keywords: air traffic simulation, eye-tracking, visual monitoring, focus attention
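The fixation-AOI matching described above can be sketched as follows; the screen coordinates, AOI count, and AOI radius are illustrative assumptions, not the study's radar setup.

```python
# Hedged sketch: register a "match" when an eye fixation falls inside a
# dynamic object's area of interest (AOI), then report the match rate
# for a scenario. Positions and AOI extent are made up for illustration.
import numpy as np

rng = np.random.default_rng(3)
aoi_centers = rng.uniform(0, 100, size=(20, 2))  # moving-aircraft AOI centres
fixations = rng.uniform(0, 100, size=(200, 2))   # recorded gaze fixations
aoi_radius = 15.0                                # assumed AOI extent on screen

# a fixation matches if it lies within the radius of any AOI centre
d = np.linalg.norm(fixations[:, None, :] - aoi_centers[None, :, :], axis=2)
matched = d.min(axis=1) <= aoi_radius
match_pct = 100.0 * matched.mean()
print(f"match: {match_pct:.0f}% of fixations on an AOI")
```

Computing this percentage per scenario is what yields the 29%-100% and 80%-100% ranges quoted in the abstract.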
Procedia PDF Downloads 5721980 Study of Interplanetary Transfer Trajectories via Vicinity of Libration Points
Authors: Zhe Xu, Jian Li, Lvping Li, Zezheng Dong
Abstract:
This work studies an optimized transfer strategy connecting Earth and Mars via the vicinity of libration points, which have been playing an increasingly important role in trajectory design for deep space missions and can serve as an effective alternative to an Earth-Mars direct transfer in some unusual cases. The vicinities of the libration points of sun-planet systems are becoming potential gateways for future interplanetary transfer missions. By adding fuel to cargo spaceships located at such spaceports, an interplanetary round-trip exploration shuttle mission can also become a reusable transportation system. In addition, in some cases, a spacecraft cruising along invariant manifolds can save a large amount of fuel. Therefore, it is necessary to search for efficient transfer strategies using the invariant manifolds about libration points. It was found that Earth L1/L2 Halo/Lyapunov orbits and Mars L2/L1 Halo/Lyapunov orbits could be connected with reasonable fuel consumption and flight duration under an appropriate design. In this paper, the halo hopping method and the coplanar circular method are briefly introduced. The former uses differential corrections to systematically generate low-ΔV transfer trajectories between interplanetary manifolds, while the latter considers escape and capture trajectories to and from halo orbits using impulsive maneuvers at the periapsis of the manifolds about the libration points. Designs of the transfer strategies for the two methods are then presented, and a comparative performance analysis of the resulting interplanetary transfer strategies is carried out. The comparison is based on two main criteria: the total fuel consumption required to perform the transfer and the time of flight. The numerical results showed that the coplanar circular method has certain advantages in cost or duration.
Finally, an optimized transfer strategy with engineering constraints is identified and shown to be an effective alternative solution for a given direct transfer mission. This paper investigated the main methods and presented an optimized solution for interplanetary transfer via the vicinity of libration points. Although most Earth-Mars mission planners prefer a direct transfer strategy because of its relatively short time of flight, the strategies given in the paper can still be regarded as effective alternatives, given the advantages mentioned above and a longer departure window than a direct transfer. Keywords: circular restricted three-body problem, halo/Lyapunov orbit, invariant manifolds, libration points
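As background, the collinear libration points used throughout can be located numerically from the CR3BP equilibrium condition on the rotating-frame x-axis; a minimal sketch for the Sun-Earth system follows (the mass parameter value is an approximate assumption, and units are nondimensional).

```python
# Hedged sketch: locate the Sun-Earth L1 and L2 points by finding zeros
# of the rotating-frame x-axis acceleration in the circular restricted
# three-body problem. Distances are in Sun-Earth distance units.
mu = 3.0034e-6  # approximate Sun-Earth mass parameter

def accel(x):
    # net x-axis acceleration in the rotating frame (zero at L1, L2, L3)
    return (x
            - (1 - mu) * (x + mu) / abs(x + mu) ** 3
            - mu * (x - 1 + mu) / abs(x - 1 + mu) ** 3)

def bisect(f, a, b, tol=1e-12):
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        if f(m) == 0 or (b - a) < tol:
            return m
        if (f(m) > 0) == (fa > 0):
            a, fa = m, f(m)
        else:
            b = m
    return 0.5 * (a + b)

# L1 lies just sunward of Earth, L2 just beyond it (within ~0.01 units)
L1 = bisect(accel, 1 - mu - 0.02, 1 - mu - 1e-6)
L2 = bisect(accel, 1 - mu + 1e-6, 1 - mu + 0.02)
print(L1, L2)  # roughly 0.99 and 1.01 Sun-Earth distance units
```

Halo and Lyapunov orbits, and the invariant manifolds used by both transfer methods, are computed about exactly these equilibrium points.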
Procedia PDF Downloads 24421979 Health Behaviours of Patients Qualified for Bariatric Surgery
Authors: A. Gazdzinska, P. Jagielski, E. Kaniewska, S. P. Gazdzinski, M. Wylezol
Abstract:
Background: In the multi-factor etiology of obesity, an increasing degree of importance is attributed to behavioral factors. Lifestyle and health-oriented behaviors heavily influence the treatment of multiple diseases, including obesity. However, only a few studies have evaluated the health-related behaviors exhibited by patients qualified for bariatric surgery, and none of them was performed in a Polish population. Aim: To assess the health behaviors of obese patients according to the degree of mood disorders. Method: The study involved 93 patients (66 females) who were qualified for bariatric surgery in the Department of Surgery of the Military Institute of Aviation Medicine in Warsaw. The diagnostic instrument was Juczynski's Inventory of Health Behavior (HBI), which evaluates health behavior in four categories, i.e., proper nutrition habits (PNH), preventive behavior (PH), health practices (HP), and positive mental attitude (PMA). The total HBI score ranges from 24 to 120 points, and scores for each category of health behaviors range from 1 to 5 (a higher score means more intense declared health behaviors). Depressive symptoms were assessed with the Beck Depression Inventory (BDI). All analyses were conducted using STATISTICA 12. Results: The average age was 44.2 ± 11.5 years; mean BMI was 44.3 ± 10.5 kg/m2 in females and 46.8 ± 7.6 kg/m2 in males. According to the BDI, 32% of patients had a mild level of depression, 10% moderate, and 14% severe depression. BDI scores did not differ between females and males. Low scores on declared health behaviors were obtained by 35.5% of patients, medium by 44.0%, and high by only 20.5%. On average, patients scored 3.28 points in PNH, 3.37 points in PH, 3.29 points in HP, and 3.42 points in PMA, showing average intensity of these behaviors. These health behaviors were practiced significantly more often by women (p = 0.04).
The average HBI was 80.2, with average scores of 81.5 for females and 76.6 for males (p = 0.03). Women scored better in the PNH category (p = 0.02). A positive correlation was found between age and all categories of health behaviors, in particular PNH (R = 0.38; p = 0.001), PH (R = 0.26; p = 0.01), HP (R = 0.27; p = 0.01), and PMA (R = 0.24; p = 0.02), independent of gender. The severity of depression had a significant impact only on behaviors associated with proper eating habits, with a negative correlation between BDI scores and PNH (R = -0.21; p = 0.04). Conclusions: The majority of morbidly obese patients qualified for bariatric surgery obtained low to average scores on the health behavior questionnaire. However, these results are similar to those of the Polish adult population. Accordingly, it seems that health behaviors, among them eating behaviors, do not appear to be a cause of the obesity epidemic, or they might be acquired when the disease is already underway. Female gender and age had a positive effect, and depression had a negative effect, on the level of health behaviors among patients qualified for bariatric surgery. Keywords: depression, habits, health behaviours, obesity
Procedia PDF Downloads 286