Search results for: finite element modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7093

1093 Evaluation of Effectiveness of Three Common Equine Thrush Treatments

Authors: A. S. Strait, J. A. Bryk-Lucy, L. M. Ritchie

Abstract:

Thrush is a common disease of ungulates primarily affecting the frog and sulci, caused by the anaerobic bacterium Fusobacterium necrophorum. Thrush accounts for approximately 45.0% of hoof disorders in horses. Prevention and treatment of thrush are essential to keep horses from developing severe infections and becoming lame. Proper knowledge of hoof care and thrush treatments is crucial to avoid financial costs, unsoundness, and lost training time. Research on the effectiveness of the numerous commercial and homemade thrush treatments in the equine industry is limited. The objective of this study was to compare the effectiveness of three common thrush treatments for horses: weekly application of Thrush Buster, daily dilute bleach solution spray, or Metronidazole paste every other day. Cases of thrush diagnosed by a veterinarian or veterinarian-trained researcher were given a score from 0 to 4 based on the severity of the thrush in each hoof (n=59) and randomly assigned a treatment. Cases were rescored each week of the three-week treatment, and the final and initial scores were compared to determine effectiveness. The thrush treatments were compared with Thrush Buster as the reference at a significance level of α=.05. Binomial logistic regression modeling found that, after adjustment for treatment week, the odds of a hoof treated with Metronidazole being thrush-free were 6.1 times those of a hoof treated with Thrush Buster (p=0.001), while the odds for a hoof treated with bleach were only 0.97 times those of a hoof treated with Thrush Buster (p=0.970). Of the three treatments utilized in this study, Metronidazole paste applied to the affected areas every other day was the most effective treatment for thrush in horses. There are many other thrush remedies available, and further research is warranted to determine the efficacy of additional treatment options.
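The odds-ratio comparison reported above can be illustrated with a minimal sketch; the hoof counts below are hypothetical, chosen only to show how a 6.1 odds ratio arises from a 2x2 table, and are not the study's data:

```python
def odds_ratio(free_a, affected_a, free_b, affected_b):
    """Odds of being thrush-free in group A relative to group B."""
    return (free_a / affected_a) / (free_b / affected_b)

# Hypothetical hoof counts (thrush-free vs. still affected) after treatment
or_metronidazole = odds_ratio(18, 3, 10, 10)  # Metronidazole vs. Thrush Buster
or_bleach = odds_ratio(10, 10, 10, 10)        # bleach vs. Thrush Buster
```

In the actual study the odds ratios were additionally adjusted for treatment week within the logistic regression, which a raw 2x2 table cannot capture.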

Keywords: fusobacterium necrophorum, thrush, equine, horse, lameness

Procedia PDF Downloads 133
1092 Forecast of the Small Wind Turbines Sales with Replacement Purchases and with or without Account of Price Changes

Authors: V. Churkin, M. Lopatin

Abstract:

The purpose of this paper is to estimate the US small wind turbine market potential and forecast small wind turbine sales in the US. The forecasting method is based on the application of the Bass model and the generalized Bass model of innovation diffusion under replacement purchases. In this work, an exponential distribution is used to model replacement purchases; its single parameter is determined by the average lifetime of small wind turbines. The identification of the model parameters is based on nonlinear regression analysis of the annual sales statistics published by the American Wind Energy Association (AWEA) from 2001 to 2012. The estimated US average market potential of small wind turbines (for adoption purchases) without account of price changes is 57,080 (confidence interval from 49,294 to 64,866 at P = 0.95) under an average turbine lifetime of 15 years, and 62,402 (confidence interval from 54,154 to 70,648 at P = 0.95) under an average lifetime of 20 years. In the first case the explained variance is 90.7%, while in the second it is 91.8%. The effect of wind turbine price changes on sales was estimated using the generalized Bass model, which required a price forecast; for this, a polynomial regression function based on the Berkeley Lab statistics was used. The estimated US average market potential of small wind turbines (for adoption purchases) in that case is 42,542 (confidence interval from 32,863 to 52,221 at P = 0.95) under an average lifetime of 15 years, and 47,426 (confidence interval from 36,092 to 58,760 at P = 0.95) under an average lifetime of 20 years. In both cases the explained variance is 95.3%.
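The Bass diffusion curve at the heart of the forecast can be sketched in closed form; the parameter values below are illustrative placeholders (only the market potential echoes the paper's 15-year estimate), not the paper's fitted coefficients:

```python
import math

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions by time t under the Bass diffusion model:
    N(t) = m * (1 - exp(-(p+q) t)) / (1 + (q/p) exp(-(p+q) t))."""
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Illustrative parameters: market potential m, innovation coefficient p,
# imitation coefficient q (p and q are assumed, not estimated here)
m, p, q = 57080, 0.03, 0.38
yearly_sales = [bass_cumulative(t, m, p, q) - bass_cumulative(t - 1, m, p, q)
                for t in range(1, 13)]  # adoption purchases per year, 2001-2012
```

In the paper, p, q, and m are instead identified by nonlinear regression against the AWEA annual sales series, with replacement purchases layered on top via the exponential lifetime distribution.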

Keywords: bass model, generalized bass model, replacement purchases, sales forecasting of innovations, statistics of sales of small wind turbines in the United States

Procedia PDF Downloads 335
1091 Benchmarking Machine Learning Approaches for Forecasting Hotel Revenue

Authors: Rachel Y. Zhang, Christopher K. Anderson

Abstract:

A critical aspect of revenue management is a firm’s ability to predict demand as a function of price. Historically, hotels have used simple time series models (regression and/or pick-up based models) owing to the complexities of trying to build causal models of demand. Machine learning approaches are slowly attracting attention owing to their flexibility in modeling relationships. This study provides an overview of approaches to forecasting hospitality demand, focusing on the opportunities created by machine learning approaches, including K-Nearest-Neighbors, Support Vector Machine, Regression Tree, and Artificial Neural Network algorithms. The out-of-sample performances of the above approaches to forecasting hotel demand are illustrated using a proprietary sample of market-level (24 properties) transactional data for Las Vegas, NV. Causal predictive models can be built and evaluated owing to the availability of market-level (versus firm-level) data. This research also compares and contrasts the accuracy of firm-level models (i.e., predictive models for hotel A using only hotel A’s data) with models using market-level data (prices, review scores, location, chain scale, etc., for all hotels within the market). The prospective models will be valuable for predicting hotel revenue given the basic characteristics of a hotel property, or can be applied in performance evaluation for an existing hotel. The findings will unveil the features that play key roles in a hotel’s revenue performance, which would have considerable potential usefulness in both revenue prediction and evaluation.
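As a toy illustration of out-of-sample evaluation with one of the listed learners, here is a hand-rolled K-Nearest-Neighbors regressor on hypothetical price/demand pairs; the data, k, and held-out split are all invented for the sketch and are not the study's data or results:

```python
def knn_predict(train_x, train_y, x, k=3):
    """Predict demand at price x as the mean target of the k nearest training prices."""
    nearest = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - x))[:k]
    return sum(train_y[i] for i in nearest) / k

# Hypothetical nightly price -> room-demand pairs; last point held out
prices = [80, 90, 100, 110, 120, 130]
demand = [95, 88, 80, 71, 64, 55]
pred = knn_predict(prices[:-1], demand[:-1], prices[-1], k=2)
error = abs(pred - demand[-1])  # out-of-sample absolute error
```

The study's actual benchmark would feed many market-level features (prices, review scores, location, chain scale) into each learner and compare held-out errors across KNN, SVM, regression tree, and neural network models.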

Keywords: hotel revenue, k-nearest-neighbors, machine learning, neural network, prediction model, regression tree, support vector machine

Procedia PDF Downloads 117
1090 Spatiotemporal Modeling of Under-Five Mortality and Associated Risk Factors in Ethiopia

Authors: Melkamu A. Zeru, Aweke A. Mitiku, Endashaw Amuka

Abstract:

Background: Under-five mortality is the probability that a child will die before turning exactly 5 years old, expressed per 1,000 live births. Exploring the spatial distribution and identifying the temporal pattern are important for reducing under-five child mortality globally, including in Ethiopia. Thus, this study aimed to identify the risk factors of under-five mortality and its spatiotemporal variation across Ethiopian administrative zones. Method: This study used the 2000-2016 Ethiopian Demographic and Health Survey (EDHS) data, which were collected using a two-stage sampling method. A weighted sample of 43,029 under-five children (10,873 in 2000; 9,861 in 2005; 11,654 in 2011; and 10,641 in 2016) was used. A space-time dynamic model was employed to account for spatial and temporal effects across 65 administrative zones in Ethiopia. Results: The general nesting spatial-temporal dynamic model showed a significant space-time interaction effect [γ = -0.1444, 95% CI (-0.6680, -0.1355)] for under-five mortality. Increases in the percentages of mothers' illiteracy [β = 0.4501, 95% CI (0.2442, 0.6559)], children not vaccinated [β = 0.7681, 95% CI (0.5683, 0.9678)], and unimproved water [β = 0.5801, CI (0.3793, 0.7808)] increased death rates for under-five children, while increased percentages of contraceptive use [β = -0.6609, 95% CI (-0.8636, -0.4582)] and ANC visits > 4 times [β = -0.1585, 95% CI (-0.1812, -0.1357)] contributed to a decreased under-five mortality rate at the zone level in Ethiopia. Conclusions: Even though the mortality rate for children under five has decreased over time, it remains high in several zones of Ethiopia. There is spatial and temporal variation in under-five mortality among zones. Therefore, it is very important to consider spatial neighbourhoods and temporal context when aiming to reduce under-five mortality.
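In space-time models of this kind, the neighbourhood structure enters through a spatial weight matrix; a minimal sketch of a row-standardized spatial lag (the average of neighbouring zones' rates), with a toy 3-zone adjacency and hypothetical rates rather than the study's 65-zone data:

```python
def spatial_lag(adjacency, y):
    """Row-standardized spatial lag: mean of each zone's neighbours' values."""
    lag = []
    for i, row in enumerate(adjacency):
        nbrs = [j for j, a in enumerate(row) if a and j != i]
        lag.append(sum(y[j] for j in nbrs) / len(nbrs) if nbrs else 0.0)
    return lag

# Toy example: 3 zones in a line (zone 1 borders zones 0 and 2)
W = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
rates = [60.0, 80.0, 100.0]   # hypothetical under-five mortality per 1,000
lags = spatial_lag(W, rates)  # zone 1's lag is the mean of 60 and 100
```

The dynamic model then regresses each zone's current rate on its own past, its neighbours' lag, and covariates, which is where the space-time interaction coefficient γ comes from.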

Keywords: under-five children mortality, space-time dynamic, spatiotemporal, Ethiopia

Procedia PDF Downloads 17
1089 Cognitive Science Based Scheduling in Grid Environment

Authors: N. D. Iswarya, M. A. Maluk Mohamed, N. Vijaya

Abstract:

A grid is an infrastructure that allows the deployment of large volumes of distributed data from multiple locations toward a common goal. Scheduling data-intensive applications becomes challenging because the data sets involved are very large. Only two solutions exist to tackle this issue. First, the computation that requires huge data sets can be transferred to the data site. Second, the required data sets can be transferred to the computation site. In the former scenario, the computation cannot be transferred, since the servers are storage/data servers with little or no computational capability; hence, the second scenario is considered for further exploration. During scheduling, transferring huge data sets from one site to another requires substantial network bandwidth. In order to mitigate this issue, this work focuses on incorporating cognitive science into scheduling. Cognitive science is the study of the human brain and its related activities. Current research mainly focuses on incorporating cognitive science into various computational modeling techniques. In this work, the problem-solving approach of the human brain is studied and incorporated into data-intensive scheduling in grid environments. A cognitive engine (CE) is designed and deployed at various grid sites. The intelligent agents present in the CE help in analyzing requests and creating the knowledge base. Depending upon the link capacity, a decision is taken whether to transfer the data sets or to partition them. The agents also predict the next request so as to serve the requesting site with data sets in advance, reducing data availability time and data transfer time. The replica catalog and metadata catalog created by the agents assist in the decision-making process.
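The transfer-or-partition decision described above can be sketched as a simple cost rule; the threshold, data-set sizes, and link capacities below are hypothetical, and the real cognitive engine would base the choice on its learned knowledge base rather than a fixed cutoff:

```python
def schedule_action(dataset_gb, link_gbps, max_transfer_s=600.0):
    """Decide whether to ship a data set whole or partition it,
    based on the estimated transfer time over the current link."""
    transfer_s = dataset_gb * 8.0 / link_gbps  # GB -> gigabits / link capacity
    return "transfer" if transfer_s <= max_transfer_s else "partition"

fast_link = schedule_action(50, 10)   # 40 s on a 10 Gbps link -> ship whole
slow_link = schedule_action(5000, 1)  # ~11 h on a 1 Gbps link -> partition
```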

Keywords: data grid, grid workflow scheduling, cognitive artificial intelligence

Procedia PDF Downloads 377
1088 Comparison of E-learning and Face-to-Face Learning Models Through the Early Design Stage in Architectural Design Education

Authors: Gülay Dalgıç, Gildis Tachir

Abstract:

Architectural design studios are the ambience in which architectural design is realized as a palpable product in architectural education. In the design studios, the information the architect candidate will use in the design process, the methods of approaching the design problem, the solution proposals, etc., are set up together with the studio coordinators. The architectural design process, on the other hand, is complex and uncertain. Candidate architects work in a process that starts with abstract and ill-defined problems. This process begins with the generation of alternative solutions with the help of representation tools, continues with the selection of the appropriate/satisfactory solution from these alternatives, and ends with the creation of an acceptable design/result product. Among the many design and thought relationships evaluated in the studio ambience, the most important step is the early design phase. In the early design phase, the first steps of converting the information are taken, and the converted information is used in the constitution of the first design decisions. This phase, which positively affects the progress of the design process and the constitution of the final product, is more complex and fuzzy than the other phases of the design process. In this context, the aim of the study is to investigate the effects of the face-to-face learning model and the e-learning model on the early design phase. In the study, the early design phase was defined through literature research. The data for the defined early design phase criteria were obtained with feedback graphics created for architect candidates who performed e-learning in the first year of architectural education and continued their education with the face-to-face learning model. The findings were analyzed with a common graphics program. It is thought that this research will contribute to the establishment of a contemporary architectural design education model by reflecting the evaluation of the data and results on architectural education.

Keywords: education modeling, architecture education, design education, design process

Procedia PDF Downloads 125
1087 Human Resource Information System: Role in HRM Practices and Organizational Performance

Authors: Ejaz Ali, M. Phil.

Abstract:

Enterprise Resource Planning (ERP) systems play a vital role in the effective management of business functions in large and complex organizations. The Human Resource Information System (HRIS) is a core module of ERP, providing concrete solutions to implement Human Resource Management (HRM) practices in an innovative and efficient manner. Over the last decade, there has been a considerable increase in studies on HRIS. Nevertheless, previous studies have largely failed to examine the moderating role of HRIS in performing HRM practices that may affect firms’ performance. The current study examines the impact of HRM practices (training, performance appraisal) on perceived organizational performance, with the moderating role of HRIS where the system is in place. The study is based on the Resource Based View (RBV) and Ability Motivation Opportunity (AMO) theories, which advocate that strengthening human capital enables an organization to achieve and sustain competitive advantage, leading to improved organizational performance. Data were collected through a structured questionnaire based upon adopted instruments after establishing reliability and validity. Structural equation modeling (SEM) was used to assess model fitness, test the hypotheses, and establish the validity of the instruments through Confirmatory Factor Analysis (CFA). A total of 220 employees of 25 firms in the corporate sector were sampled through a non-probability sampling technique. Path analysis revealed that HRM practices and HRIS have a significant positive impact on organizational performance. The results further showed that HRIS moderated the relationships between training, performance appraisal, and organizational performance. The interpretation of the findings and limitations, and the theoretical and managerial implications, are discussed.

Keywords: enterprise resource planning, human resource, information system, human capital

Procedia PDF Downloads 379
1086 Bridge Members Segmentation Algorithm of Terrestrial Laser Scanner Point Clouds Using Fuzzy Clustering Method

Authors: Donghwan Lee, Gichun Cha, Jooyoung Park, Junkyeong Kim, Seunghee Park

Abstract:

3D shape models of existing structures are required for many purposes, such as safety and operation management. Traditional 3D modeling methods are based on manual or semi-automatic reconstruction from close-range images, which is expensive and time-consuming. The Terrestrial Laser Scanner (TLS) is a common survey technique for measuring a 3D shape model quickly and accurately, and is used at construction sites and in cultural heritage management. However, there are many limits to processing a TLS point cloud, because the raw point cloud is a massive volume of data, and the capability of carrying out useful analyses is limited with unstructured 3D points. Thus, segmentation becomes an essential step whenever grouping of points with common attributes is required. In this paper, a member segmentation algorithm is presented to separate a raw point cloud that includes only 3D coordinates. The paper presents a clustering approach based on a fuzzy method for this objective. Fuzzy C-Means (FCM) is reviewed and used in combination with a similarity-driven cluster merging method. It is applied to a point cloud acquired with a Leica ScanStation C10/C5 at the test bed. The test bed was a pedestrian bridge connecting the 1st and 2nd engineering buildings at Sungkyunkwan University in Korea, about 32 m long and 2 m wide. The 3D point cloud of the test bed was constructed from a TLS measurement and divided by the segmentation algorithm into individual members. Experimental analyses of the results from the proposed unsupervised segmentation process are shown to be promising. Because the point cloud is segmented, the data can be processed further to manage the configuration of each member.
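The FCM membership update underlying such a segmentation can be sketched for a single point; for brevity this uses 1-D distances and assumes the common fuzzifier m = 2, whereas the paper works on 3D coordinates with a similarity-driven merging step on top:

```python
def fcm_memberships(point, centers, m=2.0):
    """Fuzzy C-Means membership of one point in each cluster centre:
    u_i = 1 / sum_k (d_i / d_k)^(2/(m-1))."""
    d = [abs(point - c) for c in centers]
    if 0.0 in d:  # point coincides with a centre: crisp membership
        return [1.0 if di == 0.0 else 0.0 for di in d]
    return [1.0 / sum((d[i] / d[k]) ** (2.0 / (m - 1)) for k in range(len(d)))
            for i in range(len(d))]

u = fcm_memberships(1.0, [0.0, 4.0])  # closer to the first centre
```

A full FCM pass alternates this membership update with recomputing each centre as the membership-weighted mean of all points, until the memberships stop changing.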

Keywords: fuzzy c-means (FCM), point cloud, segmentation, terrestrial laser scanner (TLS)

Procedia PDF Downloads 217
1085 The Antecedents of Internet Addiction toward Smartphone Usage

Authors: Pui-Lai To, Chechen Liao, Hen-Yi Huang

Abstract:

Twenty years after the development of the Internet, scholars have started to identify the negative impacts it brings. Overuse of the Internet can create Internet dependency and in turn cause addiction behavior; therefore, understanding the phenomenon of Internet addiction is important. With the joint efforts of experts and scholars, Internet addiction has been officially listed as a symptom that affects public health, and the diagnosis, causes, and treatment of the symptom have been explored. On the other hand, in the area of smartphone Internet usage, most studies still focus on the motivational factors of smartphone usage; not much research has been done on smartphone Internet addiction. In view of the increasing adoption of smartphones, this paper is intended to find out whether smartphone Internet addiction exists in modern society. This study adopted an online survey targeting users with smartphone Internet experience. A total of 434 effective samples were collected. For data analysis, Partial Least Squares (PLS) in Structural Equation Modeling (SEM) was used for sample analysis and research model testing; the software chosen for statistical analysis was SPSS 20.0 for Windows and SmartPLS 2.0. The results successfully proved that smartphone users who access Internet services via smartphone can also develop smartphone Internet addiction. Factors including flow experience, depression, virtual social support, smartphone Internet affinity, and maladaptive cognition all have a significant and positive influence on smartphone Internet addiction. In the scenario of smartphone Internet use, descriptive norm has a positive and significant influence on perceived playfulness, while perceived playfulness has a significant and positive influence on flow experience. Depression, in turn, is negatively influenced by actual social support and positively influenced by virtual social support.

Keywords: internet addiction, smartphone usage, social support, perceived playfulness

Procedia PDF Downloads 226
1084 Comparative Study of Flood Plain Protection Zone Determination Methodologies in Colombia, Spain and Canada

Authors: P. Chang, C. Lopez, C. Burbano

Abstract:

Flood protection zones are riparian buffers that are established to manage and mitigate the impact of flooding and, in turn, protect local populations. The purpose of this study was to evaluate the Guía Técnica de Criterios para el Acotamiento de las Rondas Hídricas in Colombia against international regulations in Canada and Spain, in order to determine its limitations and contribute to its improvement. The need to establish a specific corridor that allows for the dynamic development of a river is clear; however, limitations present in the Colombian technical guide are identified. The study shows that the international regulations use concepts similar to those in Colombia, but additionally integrate aspects such as regionalization, which allows for a better characterization of the channel way, and incorporate the frequency of flooding and its probability of occurrence into the concept of risk when determining the protection zone. The case study analyzed in Dosquebradas, Risaralda, compared the application of the different standards through hydraulic modeling. It highlights that the current Colombian standard does not offer sufficient detail in its implementation phase, which leads to a false sense of security related to inaccuracy and lack of data. Furthermore, the study demonstrates how the Colombian norm is ill-adapted to the conditions of Dosquebradas, typical of the Andes region, in both social and hydraulic aspects, and neither reduces the risk nor improves the protection of the population. This study considers it pertinent to include risk estimation as an integral part of the methodology when establishing flood protection zones, considering the particularity of water systems, which are characterized by heterogeneous natural dynamic behavior.

Keywords: environmental corridor, flood zone determination, hydraulic domain, legislation flood protection zone

Procedia PDF Downloads 98
1083 Numerical Modeling of Film Cooling of the Surface at Non-Uniform Heat Flux Distributions on the Wall

Authors: M. V. Bartashevich

Abstract:

The problem of heat transfer in a thin laminar liquid film is solved numerically. A thin film of liquid flows down an inclined surface under conditions of variable heat flux on the wall. The use of thin liquid films makes it possible to create effective technologies for cooling surfaces. However, it is important to investigate the most suitable cooling regimes from a safety point of view, in order, for example, to avoid overheating caused by ruptures of the liquid film, and also to study the most effective cooling regimes depending on the character of the heat flux distribution on the wall and the character of the blowing of the film surface, i.e., the external shear stress on its surface. In the statement of the problem, the heat transfer coefficient between the liquid and gas is set on the film surface, as well as a variable external shear stress, the intensity of blowing. It is shown that the combination of these factors, the degree of uniformity of the heat flux distribution on the wall and the intensity of blowing, affects the efficiency of heat transfer. With an increase in the intensity of blowing, the cooling efficiency increases, reaches a maximum, and then decreases. It is also shown that the more uniform the heating of the wall, the more efficient the heat sink. A separate study was made of the flow regime along a horizontal surface, where the liquid film moves solely due to the external stress. For this mode, an analytical solution for the temperature in the entrance region is used for further numerical calculations downstream, and the influence of the degree of uniformity of the heat flux distribution on the wall and the intensity of blowing on the heat transfer efficiency was likewise studied. This work was carried out at the Kutateladze Institute of Thermophysics SB RAS (Russia) and supported by FASO Russia.

Keywords: heat flux, heat transfer enhancement, external blowing, thin liquid film

Procedia PDF Downloads 127
1082 Geometrical Analysis of an Atheroma Plaque in Left Anterior Descending Coronary Artery

Authors: Sohrab Jafarpour, Hamed Farokhi, Mohammad Rahmati, Alireza Gholipour

Abstract:

In the current study, a nonlinear fluid-structure interaction (FSI) biomechanical model of atherosclerosis in the left anterior descending (LAD) coronary artery is developed to perform a detailed sensitivity analysis of the geometrical features of an atheroma plaque. In the development of the numerical model, first, a 3D geometry of the diseased artery is constructed based on patient-specific dimensions obtained from experimental studies. The geometry includes four influential geometric characteristics: stenosis ratio, plaque shoulder length, fibrous cap thickness, and eccentricity intensity. Then, a suitable strain energy density function (SEDF) is proposed based on a detailed material stability analysis to accurately model the hyperelasticity of the arterial walls. The time-varying inlet velocity and outlet pressure profiles are adopted from experimental measurements to incorporate the pulsatile nature of the blood flow. In addition, a computationally efficient type of structural boundary condition is imposed on the arterial walls. Finally, a non-Newtonian viscosity model is implemented to capture the shear-thinning behaviour of the blood flow. According to the results, as the geometrical characteristics vary, the structural responses in terms of the maximum principal stress (MPS) are affected more strongly than the fluid responses in terms of wall shear stress (WSS). The extent of these changes is critical in the vulnerability assessment of an atheroma plaque.

Keywords: atherosclerosis, fluid-structure interaction modeling, material stability analysis, nonlinear biomechanics

Procedia PDF Downloads 74
1081 FMCW Doppler Radar Measurements with Microstrip Tx-Rx Antennas

Authors: Yusuf Ulaş Kabukçu, Sinan Çelik, Onur Salan, Maide Altuntaş, Mert Can Dalkiran, Göksenin Bozdağ, Metehan Bulut, Fatih Yaman

Abstract:

This study presents a more compact implementation of the 2.4 GHz MIT Coffee Can Doppler Radar at a 2.6 GHz operating frequency. The main difference of our prototype lies in the use of microstrip antennas, which makes it possible to transport it with a small robotic vehicle. We designed our radar system with two separate channels, Tx and Rx. The system mainly consists of a Voltage Controlled Oscillator (VCO) source, low-noise amplifiers, microstrip antennas, a splitter, a mixer, a low-pass filter, and the necessary RF connectors and cables. The two microstrip antennas, a single element for the transmitter and an array for the receiver channel, were designed, fabricated, and verified by experiments. The system has two operation modes: speed detection and range detection. If the operation mode switch is 'off', only a CW signal is transmitted for speed measurement; when the switch is 'on', the CW signal is frequency-modulated and range detection is possible. In speed detection mode, a high-frequency (2.6 GHz) signal is generated by the VCO and then amplified to reach a reasonable transmit power level. Before the amplified signal is transmitted through a microstrip patch antenna, a splitter is used so that the frequencies of the transmitted and received signals can be compared: half of the amplified signal (LO) is forwarded to a mixer, which compares the transmitted and received (RF) frequencies and produces the IF output, in other words the Doppler frequency. The IF output is then filtered and amplified for digital processing and fed into the audio input of a computer, where the Doppler frequency is displayed as a speed reading via a MATLAB script. According to experimental field measurements, the accuracy of the speed measurement is approximately 90%. In range detection mode, a chirp signal is used to form an FM chirp, which makes it possible to determine the range of the target, since the Doppler frequency measured with CW alone is not enough for range detection. Such an FMCW Doppler radar may be used in border security, since it is capable of both speed and range detection.
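The conversions behind the two modes can be sketched from the standard monostatic radar relations; the 2.6 GHz carrier comes from the abstract, while the chirp bandwidth and duration below are assumed example values, not the prototype's:

```python
C = 3.0e8   # speed of light, m/s
F0 = 2.6e9  # operating frequency, Hz (from the abstract)

def doppler_speed(f_doppler_hz):
    """Target speed from the Doppler shift: v = f_d * c / (2 * f0)."""
    return f_doppler_hz * C / (2.0 * F0)

def fmcw_range(f_beat_hz, chirp_bw_hz=50e6, chirp_time_s=1e-3):
    """Target range from the FMCW beat frequency: R = c * f_b * T / (2 * B).
    Chirp bandwidth B and sweep time T are assumed, not the prototype's values."""
    return C * f_beat_hz * chirp_time_s / (2.0 * chirp_bw_hz)

v = doppler_speed(520.0)  # a 520 Hz shift at 2.6 GHz corresponds to 30 m/s
r = fmcw_range(10_000.0)  # range in metres for a 10 kHz beat frequency
```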

Keywords: doppler radar, FMCW, range detection, speed detection

Procedia PDF Downloads 380
1080 Antecedents of Regret and Satisfaction in Electronic Commerce

Authors: Chechen Liao, Pui-Lai To, Chuang-Chun Liu

Abstract:

Online shopping has become very popular in recent years. In today’s highly competitive online retail environment, retaining existing customers is a necessity for online retailers. This study focuses on the antecedents and consequences of Internet buyer regret and satisfaction in the online consumer purchasing process. It examines the roles that online consumers’ purchasing process evaluations (i.e., search experience difficulty, service-attribute evaluations, product-attribute evaluations, and post-purchase price perceptions) and alternative evaluation (i.e., alternative attractiveness) play in determining buyer regret and satisfaction in e-commerce. The study also examines the consequences of regret, satisfaction, and habit with regard to repurchase intention. In addition, it investigates the moderating role of habit to attain a better understanding of the relationship between repurchase intention and its antecedents. Survey data collected from 431 online customers are analyzed using structural equation modeling (SEM) with partial least squares (PLS), and support is provided for the hypothesized links. The results indicate that online consumers’ purchasing process evaluations have significant influences on regret and satisfaction, which in turn influence repurchase intention; in addition, alternative attractiveness has a significant positive influence on regret. The research model can provide a richer understanding of online customers’ repurchase behavior and contribute to both research and practice.

Keywords: online shopping, purchase evaluation, regret, satisfaction

Procedia PDF Downloads 271
1079 A Modified Estimating Equations in Derivation of the Causal Effect on the Survival Time with Time-Varying Covariates

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

A systematic observation from a defined time origin up to a failure or censoring event is known as survival data. Survival analysis is a major area of interest in biostatistics and biomedical research. At the heart of most scientific and medical research inquiries lies causality analysis. Thus, the main concern of this study is to investigate the causal effect of treatment on survival time conditional on possibly time-varying covariates. The theory of causality often differs from the simple association between the response variable and predictors: causal estimation is a scientific approach to comparing a pragmatic effect between two or more experimental arms. To evaluate the average treatment effect on a survival outcome, the estimating equation was adjusted for time-varying covariates under semiparametric transformation models. The proposed model yields consistent estimators for the unknown parameters and the unspecified monotone transformation functions. In this article, the proposed method estimates an unbiased average causal effect of treatment on the survival time of interest. The modified estimating equations of semiparametric transformation models have the advantage of including time-varying effects in the model. The finite-sample performance of the estimators is demonstrated through simulation and the Stanford heart transplant data. To this end, the average effect of a treatment on survival time is estimated after adjusting for the biases arising from the high correlation between left-truncation and the possibly time-varying covariates. The bias in covariates was corrected by estimating the density function of the left-truncation variable; in addition, to relax the independence assumption between failure time and truncation time, the model incorporates the left-truncation variable as a covariate. The expectation-maximization (EM) algorithm is used to obtain the unknown parameters and the unspecified monotone transformation functions iteratively. In summary, the ratio of cumulative hazard functions between the treated and untreated experimental groups conveys the average causal effect for the entire population.

Keywords: modified estimating equation, causal effect, semiparametric transformation models, survival analysis, time-varying covariate

Procedia PDF Downloads 156
1078 An Analysis of Different Essential Components of Flight Plan Operations at Low Altitude

Authors: Apisit Nawapanpong, Natthapat Boonjerm

Abstract:

This project aims to analyze and identify the flight-plan components of low-altitude aviation in Thailand and other countries. The development of UAV technology has driven innovation and revolution in the aviation industry, including new modes of passenger and freight transportation, and it has affected many other industries as well. The technology is being developed rapidly and tested all over the world to maximize its efficiency, and it is likely to grow more extensively. However, no flight plan for low-altitude operation has been published by government organizations; compared with high-altitude aviation with manned aircraft, many unique factors differ, whether mission, operation, altitude range, or airspace restrictions. Making the essential components of low-altitude operation measures practical and tangible faces major problems, so the main consideration of this project is to analyze the components of low-altitude operations conducted up to altitudes of 400 ft (120 meters) above ground level, for example, air traffic management, classification of aircraft, basic necessities and safety, and control areas. This research confirms the theory through qualitative and quantitative research combined with theoretical modeling and a regulatory framework, and by gaining insights from various positions in the aviation industry, including aviation experts, government officials, air traffic controllers, pilots, and airline operators, to identify the critical essential components of low-altitude flight operation.
The project uses scientific and statistical computer programs to verify that the results are consistent with the theory, so that they can be used to regulate flight plans for low-altitude operation based on the essential components identified here and can be further developed in future aviation studies and research.

Keywords: low-altitude aviation, UAV technology, flight plan, air traffic management, safety measures

Procedia PDF Downloads 37
1077 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models

Authors: Azadeh Jafari, Robert G. Owens

Abstract:

In this study, a geometrical multiscale approach, meaning the coupling of the 2-D Navier-Stokes equations and constitutive equations with 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics over a large part, or even the whole, of the cardiovascular system at acceptable computational cost. We introduce a new velocity correction scheme to decouple the velocity computation from the pressure computation. To evaluate the capability of the new scheme, we compare results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure against those obtained using coupling with the lumped parameter model. Comprehensive studies have been carried out on the sensitivity of the numerical scheme to the initial conditions, the elasticity, and the number of spectral modes. Stable convergence of the computational algorithm has been demonstrated for at least moderate Weissenberg numbers. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect, the geometrical multiscale approach can be regarded as a new method for solving a class of biofluid problems whose application goes significantly beyond the one addressed in this work.
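A 0-D lumped parameter model of the kind used at the outflow can be sketched as a two-element (RC) Windkessel driven by a prescribed inflow. All parameter values below are illustrative assumptions, not those of the authors' coupled scheme:

```python
import numpy as np

# Two-element (RC) Windkessel: C * dP/dt = Q_in(t) - P / R
R = 1.0        # peripheral resistance (mmHg*s/ml), illustrative value
C = 1.5        # arterial compliance (ml/mmHg), illustrative value
T_card = 0.8   # cardiac period (s)
T_sys = 0.3    # systolic ejection time (s)

def q_in(t):
    """Half-sine inflow during systole, zero flow in diastole."""
    tau = t % T_card
    return 400.0 * np.sin(np.pi * tau / T_sys) if tau < T_sys else 0.0

dt = 1e-4
P = 80.0                                    # initial pressure (mmHg)
last_beat = []
for n in range(int(10 * T_card / dt)):      # run 10 beats to reach a periodic state
    t = n * dt
    P += dt * (q_in(t) - P / R) / C         # forward Euler step
    if t >= 9 * T_card:
        last_beat.append(P)                 # record the final beat only
last_beat = np.array(last_beat)
print(round(last_beat.min(), 1), round(last_beat.max(), 1))  # "diastolic"/"systolic"
```

In a coupled scheme, the 2-D solver would supply `q_in` at each step and receive the Windkessel pressure back as its outflow boundary condition.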

Keywords: geometrical multiscale models, haemorheology model, coupled 2-D Navier-Stokes/0-D lumped parameter modeling, computational fluid dynamics

Procedia PDF Downloads 347
1076 Fine Needle Aspiration Biopsy of Thyroid Nodules

Authors: Ilirian Laçi, Alketa Spahiu

Abstract:

Large goiters (strumae) of the thyroid gland can be observed with the naked eye in everyday life. Medical practices see patients with palpable nodules of the thyroid gland, mainly nodules around 10 millimeters in size. Moreover, many cases that are negative on palpation prove positive on ultrasound examination. The use of ultrasound for diagnosis has therefore increased the number of patients diagnosed with thyroid nodules over the last couple of decades in all countries, Albania included. Thus, an increased number of patients affected by this pathology has been documented, with female patients dominating. Demographically, the capital shows high numbers owing to its large population, but of interest is the high incidence in areas distant from the sea. Regarding related pathologies, no significant link was evidenced, but an element of heredity was evident in thyroid nodules. When we speak of thyroid nodules, we should consider the hyperplasia, neoplasia, and inflammatory diseases that cause them. This increase parallels the worldwide rise in the incidence of thyroid nodules, of which malignant cases constitute about 5% and are not dependent on size. Given that most thyroid nodules are benign, the main objective of their examination was to distinguish benign from malignant cases in order to avoid unnecessary surgery. The subjects of this study were 212 patients who underwent fine-needle aspiration (FNA) under ultrasound guidance at the Medical University Center of Tirana. All the patients came to the Mother Teresa University Hospital from public and private hospitals and polyclinics. These patients had an ultrasound examination before visiting the Center of Nuclear Medicine for a scintigraphy of the thyroid gland between September 2016 and September 2017.
For correlation, all patients had undergone ultrasound of the thyroid gland prior to scintigraphy. The ultrasound included evaluation of the number of nodules; their size; their solid, cystic, or solid-cystic structure; echogenicity on the gray scale; the presence of calcification; the presence of lymph nodes and adenopathy; and correlation with the cytology results from the Laboratory of Pathological Anatomy of the Medical University Center of Tirana.

Keywords: thyroid nodules, fine-needle aspiration, ultrasound, scintigraphy

Procedia PDF Downloads 84
1075 Patient-Specific Design Optimization of Cardiovascular Grafts

Authors: Pegah Ebrahimi, Farshad Oveissi, Iman Manavi-Tehrani, Sina Naficy, David F. Fletcher, Fariba Dehghani, David S. Winlaw

Abstract:

Despite advances in modern surgery, congenital heart disease remains a medical challenge and a major cause of infant mortality. Cardiovascular prostheses are routinely used in surgical procedures to address congenital malformations, for example, establishing a pathway from the right ventricle to the pulmonary arteries in pulmonary valvar atresia. Current off-the-shelf options, including human and adult-sized products, have limited biocompatibility and durability, and their fixed size necessitates multiple subsequent operations to upsize the conduit to match the patient's growth over their lifetime. Non-physiological blood flow is another major problem that reduces the longevity of these prostheses. These limitations call for better designs that take into account the hemodynamic and anatomical characteristics of different patients. We have integrated tissue engineering techniques with modern medical imaging and image processing tools, along with mathematical modeling, to optimize the design of cardiovascular grafts in a patient-specific manner. Computational Fluid Dynamics (CFD) analysis is performed on models constructed from each individual patient's data. This allows for improved geometrical design and better hemodynamic performance. Tissue engineering strives to provide a material that grows with the patient and mimics the durability and elasticity of the native tissue. Simulations also give insight into the performance of the tissues produced in our lab and reduce the need for costly and time-consuming methods of graft evaluation. We are also developing a methodology for the fabrication of the optimized designs.

Keywords: computational fluid dynamics, cardiovascular grafts, design optimization, tissue engineering

Procedia PDF Downloads 227
1074 Presenting of 'Local Wishes Map' as a Tool for Promoting Dialogue and Developing Healthy Cities

Authors: Ana Maria G. Sperandio, Murilo U. Malek-Zadeh, João Luiz de S. Areas, Jussara C. Guarnieri

Abstract:

Intersectoral governance is a requirement for developing healthy cities. However, it is difficult to achieve, especially in regions with limited resources. Therefore, a low-cost investigative procedure was developed to diagnose sectoral wishes related to urban planning and health promotion. The procedure consists of two phases, which can be applied to different groups so that the results can be compared. The first phase is a conversation guided by a list of questions. Some questions gather information about how individuals understand concepts such as the healthy city or health promotion and what they believe constitutes the relation between urban planning and urban health. Other questions investigate local issues and how citizens would like to promote dialogue between sectors. In the second phase, individuals stand around a map of the investigated city (or city region) and are asked to represent their wishes on it. They can do so by writing text notes or placing icons, the latter representing city elements, for example, trees, a square, a playground, a hospital, or a cycle track. After the groups have represented their wishes, the map can be photographed and the results from distinct groups compared. This procedure was conducted in 2017 in the small Brazilian city of Holambra, in the first year of the mayor's four-year term. The prefecture requested this tool so that Holambra could join the Potential Healthy Municipalities Network in Brazil. Two sectors were investigated: the government and the urban population. By the end of the investigation, the intersection of the group maps (i.e., population and government) was used to create a map of common wishes. The material produced can therefore serve as a guide for promoting dialogue between sectors and as a tool for monitoring policy progress.
The report of this procedure was directed to public managers so that they could see the wishes they share with local populations and use the tool as a guide for creating urban policies intended to enhance health promotion and develop a healthy city, even under low-resource conditions.

Keywords: governance, health promotion, intersectorality, urban planning

Procedia PDF Downloads 122
1073 The Quality Assessment of Seismic Reflection Survey Data Using Statistical Analysis: A Case Study of Fort Abbas Area, Cholistan Desert, Pakistan

Authors: U. Waqas, M. F. Ahmed, A. Mehmood, M. A. Rashid

Abstract:

In geophysical exploration surveys, the quality of the acquired data is of great importance before the data processing and interpretation phases are executed. In this study, 2D seismic reflection survey data from the Fort Abbas area, Cholistan Desert, Pakistan were taken as a test case in order to assess their quality on a statistical basis using the normalized root mean square error (NRMSE), Cronbach's alpha test (α), and null hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the database. The study area is known to be flat, tectonically little affected, and rich in oil and gas reserves. However, subsurface 3D modeling and contouring using the acquired database revealed a high degree of structural complexity and intense folding. The NRMSE showed the highest percentage of residuals between the estimated and predicted cases. The outcomes of hypothesis testing also demonstrated the bias and erraticness of the database. The low estimated value of alpha (α) in Cronbach's alpha test confirmed the poor reliability of the database. Such low-quality data need extensive static correction or, in some cases, reacquisition, which is usually not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and could further serve as a guideline for establishing database quality-assessment models, enabling much more informed decisions in hydrocarbon exploration.
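Two of the quality statistics named above can be computed as follows. The traces are synthetic stand-ins, not the Fort Abbas records:

```python
import numpy as np

def nrmse(observed, predicted):
    """Root mean square error normalised by the observed range, in percent."""
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return 100.0 * rmse / (observed.max() - observed.min())

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_cases, k_items) score matrix."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1.0 - item_vars / total_var)

# synthetic example: four noisy repeated measurements of the same signal
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 4 * np.pi, 100))
traces = signal[:, None] + 0.3 * rng.normal(size=(100, 4))

print(round(nrmse(signal, traces[:, 0]), 1))   # error of one trace vs. the signal
print(round(cronbach_alpha(traces), 2))        # internal consistency of the 4 traces
```

High alpha here reflects the shared underlying signal; in the paper's setting a low alpha flags an unreliable database.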

Keywords: data quality, null hypothesis, seismic lines, seismic reflection survey

Procedia PDF Downloads 140
1072 Malpractice, Even in Conditions of Compliance With the Rules of Dental Ethics

Authors: Saimir Heta, Kers Kapaj, Rialda Xhizdari, Ilma Robo

Abstract:

Despite the existence of different dental specialties, the dentist-patient relationship is unique in that the treatment is performed by a single doctor, and the patient identifies any malpractice as part of that doctor's practice; this is in complete contrast to medical treatments, where the patient may be treated by a team of doctors for a specific pathology. The rules of dental ethics are almost the same as the rules of medical ethics. Dental malpractice disturbs exactly this two-party relationship, built on professionalism between dentist and patient, but within much narrower individual boundaries than cases of medical malpractice. Main text: Malpractice can have different causes, ranging from professional negligence to a lack of professional knowledge on the part of the dentist undertaking the treatment. It should always be kept in perspective that we are not speaking of a dentist who goes to work intending to harm patients. Malpractice can also be a consequence of the anatomical or physiological impossibility of carrying out the predetermined treatment plan on the tooth in question. On the other hand, the dentist is an individual who may be affected by health conditions, or have habits that affect his or her own systemic health, which under these circumstances can lead to malpractice. So, depending on the cause of the malpractice, the legal treatment of the dentist who committed it also varies, the key question being whether the malpractice occurred despite compliance with the rules of dental ethics.
Conclusions: Deviation from the predetermined treatment plan is the minimal sign of malpractice, and malpractice should not be associated only with difficult dental treatments. Identifying the reason for the malpractice is the initial element that determines how it is handled legally, and the assessment of the dentist involved must be based on the legislation in force, which varies between states. Malpractice should be addressed in lectures and in the continuing education of professionals, because doing so builds professional experience and prevents different professionals from repeating the same mistakes.

Keywords: dental ethics, malpractice, negligence, legal basis, continuing education, dental treatments

Procedia PDF Downloads 49
1071 Sensitivity Analysis of the Thermal Properties in Early Age Modeling of Mass Concrete

Authors: Farzad Danaei, Yilmaz Akkaya

Abstract:

In many civil engineering applications, especially the construction of large concrete structures, the early-age behavior of concrete has been shown to be a crucial problem. The uneven rise in temperature within the concrete is the fundamental quality-control issue in such constructions. Therefore, developing accurate and fast temperature prediction models is essential. The thermal properties of concrete fluctuate over time as it hardens, but accounting for all of these fluctuations makes numerical models more complex, and laboratory measurement of the thermal properties cannot accurately predict their variation under site conditions. The specific heat capacity and the thermal conductivity coefficient are therefore treated as constants in many previously recommended models. The proposed equations demonstrate that these two quantities decrease linearly as the cement hydrates, their values being related to the degree of hydration. In this study, the effects of changing the thermal conductivity and specific heat capacity on the maximum temperature, and on the time it takes the concrete to reach that temperature, are examined through numerical sensitivity analysis, and the results are compared to models that assume fixed values for these two thermal properties. The study covers seven concrete mix designs with varying amounts of supplementary cementitious materials (fly ash and ground granulated blast furnace slag, GGBFS). It is concluded that the maximum temperature does not change when a constant conductivity coefficient is assumed, but a variable specific heat capacity must be taken into account; the variable specific heat capacity can also considerably affect the predicted time at which the central node of the concrete reaches its maximum temperature. The use of GGBFS has more influence than fly ash.
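This kind of sensitivity experiment can be sketched with a 1-D explicit finite-difference model in which the specific heat decreases linearly with the degree of hydration. All material values and the linear hydration model below are illustrative assumptions, not those of the seven mix designs:

```python
import numpy as np

# 1-D explicit finite-difference model of early-age heating in a concrete slab
L_slab, nx = 1.0, 51                  # slab thickness (m), grid points
dx = L_slab / (nx - 1)
k_c = 2.0                             # thermal conductivity (W/m/K), constant here
rho = 2400.0                          # density (kg/m^3)
H_tot = 1.5e8                         # total hydration heat (J/m^3), illustrative
t_hyd = 40.0 * 3600.0                 # hydration completes over ~40 h (crude model)

def cp(alpha):
    """Specific heat decreasing linearly with degree of hydration (J/kg/K)."""
    return 1100.0 - 200.0 * alpha

dt = 10.0                             # s; satisfies dt < dx^2*rho*cp/(2k)
T = np.full(nx, 20.0)                 # initial/ambient temperature (deg C)
alpha = 0.0
center = []
for step in range(int(7 * 24 * 3600 / dt)):    # simulate one week
    da = dt / t_hyd if alpha < 1.0 else 0.0
    alpha = min(1.0, alpha + da)
    c = cp(alpha)
    Tn = T.copy()
    Tn[1:-1] += k_c * dt / (rho * c * dx**2) * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[1:-1] += H_tot * da / (rho * c)         # heat released this step
    Tn[0] = Tn[-1] = 20.0                      # faces held at ambient
    T = Tn
    center.append(T[nx // 2])
center = np.array(center)
i_peak = int(center.argmax())
print(round(center.max(), 1), round(i_peak * dt / 3600.0, 1))  # peak deg C, hour
```

Re-running with `cp(alpha)` replaced by a constant shows how the variable specific heat shifts both the peak and its timing, which is the comparison the study performs.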

Keywords: early-age concrete, mass concrete, specific heat capacity, thermal conductivity coefficient

Procedia PDF Downloads 62
1070 Regional Flood Frequency Analysis in Narmada Basin: A Case Study

Authors: Ankit Shah, R. K. Shrivastava

Abstract:

Floods and droughts are the two main hydrological features that affect human life. Floods are natural disasters that cause millions of rupees' worth of damage each year in India and around the world, destroying both life and property. An accurate estimate of the flood damage potential is a key element of an effective, nationwide flood damage abatement program. Moreover, the increase in water demand due to population, industrial, and agricultural growth reminds us that water, though a renewable resource, cannot be taken for granted; its use must be optimized according to circumstances and conditions, and it must be harnessed through the construction of hydraulic structures. For the safe and proper functioning of hydraulic structures, the flood magnitude and its impact must be predicted. Hydraulic structures play a key role in harnessing and optimizing flood water, which in turn results in the safe and maximal use of the water available. Hydraulic structures are mainly constructed at ungauged sites. Floods can be estimated by two approaches: generation of unit hydrographs and flood frequency analysis. In this study, regional flood frequency analysis is employed. There are many methods for regional flood frequency analysis, such as the index flood method, the Natural Environment Research Council (NERC) methods, and the multiple regression method; however, none of them can be considered universal for every situation and location. The Narmada basin is located in Central India and is drained by many tributaries, most of which are ungauged, making flood estimation on the tributaries and the main river very difficult. In this study, artificial neural networks (ANNs) and the multiple regression method are used to determine regional flood frequency.
Annual peak flood data from 20 gauging sites in the Narmada basin are used in the present study to determine the regional flood relationships. The homogeneity of the considered sites is determined using the index flood method. The flood relationships obtained by the two methods are compared, and it is found that the ANN is more reliable than the multiple regression method for the present study area.
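The index-flood pooling step referred to above can be sketched as follows, with synthetic annual-maximum series standing in for the Narmada gauges:

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic annual-maximum flood series for a few gauged sites (m^3/s)
sites = {f"site_{i}": rng.gumbel(loc=500.0 * (i + 1), scale=120.0 * (i + 1), size=30)
         for i in range(5)}

# 1. index flood at each site = mean annual flood
index = {s: q.mean() for s, q in sites.items()}

# 2. pool the dimensionless growth data q / index across sites
pooled = np.concatenate([q / index[s] for s, q in sites.items()])

# 3. regional growth factor for a return period T from the pooled sample
def growth_factor(T, sample):
    """Empirical quantile of the pooled dimensionless floods."""
    return np.quantile(sample, 1.0 - 1.0 / T)

gf = growth_factor(50, pooled)
# flood quantile at any site (incl. an ungauged one, given an estimated index flood)
q50_site0 = index["site_0"] * gf
print(round(gf, 2), round(q50_site0, 0))
```

Pooling is only legitimate for a homogeneous region, which is exactly what the index-flood homogeneity test in the study checks before the ANN and regression models are fitted.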

Keywords: artificial neural network, index flood method, multilayer perceptrons, multiple regression, Narmada basin, regional flood frequency

Procedia PDF Downloads 399
1069 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System

Authors: Dong Seop Lee, Byung Sik Kim

Abstract:

In the Fourth Industrial Revolution, various intelligent technologies have been developed in many fields, and these artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management, and historical disaster information can be obtained with artificial intelligence technology. Disaster information is an important element of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle, from occurrence through progress, response, and planning. However, information about status control, response, and recovery from natural and social disaster events is mainly managed in structured and unstructured report form, existing as handouts or hard copies, and such unstructured data are often lost or destroyed through inefficient management. It is therefore necessary to manage unstructured disaster data. In this paper, an Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, and scanned images of reports into electronic documents. The converted disaster data are then organized under a disaster code system as disaster information and stored in a disaster database system. Gathering and creating disaster information from unstructured data via OCR is an important element of smart disaster management. In this work, recognition of Korean characters was improved to over 90% using an upgraded OCR engine; since the recognition rate depends on the fonts, size, and special symbols of the characters, it was improved through a machine learning algorithm.
The converted structured data are managed in a standardized disaster information form connected with the disaster code system, which ensures that structured information can be stored and retrieved across the entire disaster cycle, including historical disaster progress, damages, response, and recovery. The expected outcome of this research is applicability to smart disaster management and decision making by combining artificial intelligence technologies with historical big data.
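The coding step, turning an OCR'd report line into a structured record under a code table, can be sketched as below. The code table, field layout, and Korean keywords are illustrative assumptions, not the actual Korean disaster code system:

```python
import re

# toy disaster code table (illustrative only)
CODES = {"호우": "N-01",   # heavy rain
         "태풍": "N-02",   # typhoon
         "지진": "N-03"}   # earthquake

def structure_record(ocr_text):
    """Pull (date, disaster type, damage amount) out of one OCR'd report line.

    Assumed line layout: 'YYYY-MM-DD <type> 피해액 <amount>' (피해액 = damage amount).
    Returns None when the line does not match.
    """
    m = re.search(r"(\d{4}-\d{2}-\d{2})\s+(\S+)\s+피해액\s*([\d,]+)", ocr_text)
    if not m:
        return None
    date, kind, amount = m.groups()
    return {"date": date,
            "code": CODES.get(kind, "N-99"),          # N-99 = unknown type
            "damage_krw": int(amount.replace(",", ""))}

rec = structure_record("2019-07-20 호우 피해액 1,200,000")
print(rec)
```

In the paper's pipeline, the input line would come from the OCR engine rather than a literal string, and the resulting records would be inserted into the disaster database.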

Keywords: disaster information management, unstructured data, optical character recognition, machine learning

Procedia PDF Downloads 111
1068 Biophysical Study of the Interaction of Harmalol with Nucleic Acids of Different Motifs: Spectroscopic and Calorimetric Approaches

Authors: Kakali Bhadra

Abstract:

Binding of small molecules to DNA, and more recently to RNA, continues to attract considerable attention for developing effective therapeutic agents for the control of gene expression. This work focuses on understanding the interaction of harmalol, a dihydro beta-carboline alkaloid, with different nucleic acid motifs, viz. double-stranded CT DNA, single-stranded A-form poly(A), double-stranded A-form poly(C)·poly(G), and clover-leaf tRNAphe, by different spectroscopic, calorimetric, and molecular modeling techniques. The results of this study converge to suggest that (i) the binding constant varied in the order CT DNA > poly(C)·poly(G) > tRNAphe > poly(A); (ii) binding of harmalol was non-cooperative with poly(C)·poly(G) and poly(A) and cooperative with CT DNA and tRNAphe; (iii) significant structural changes occurred in CT DNA, poly(C)·poly(G), and tRNAphe, with concomitant induction of optical activity in the bound achiral alkaloid molecules, while no intrinsic CD perturbation was observed with poly(A); (iv) the binding was predominantly exothermic, enthalpy driven, and entropy favoured with CT DNA and poly(C)·poly(G), while it was entropy driven with tRNAphe and poly(A); (v) there was a hydrophobic contribution and a comparatively large role of non-polyelectrolytic forces in the Gibbs energy changes with CT DNA, poly(C)·poly(G), and tRNAphe; and (vi) harmalol adopts an intercalated state with the CT DNA and poly(C)·poly(G) structures, as revealed by molecular docking and supported by the viscometric data. Furthermore, a competition dialysis assay showed that harmalol prefers hetero GC sequences. All these findings indicate that harmalol prefers binding to ds CT DNA, followed by ds poly(C)·poly(G), clover-leaf tRNAphe, and least of all ss poly(A). The results highlight the importance of the structural elements of these natural beta-carboline alkaloids in stabilizing DNA and RNA of various motifs for developing better nucleic acid-based therapeutic agents.
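The Gibbs energy changes discussed above follow from the standard relation ΔG° = −RT ln K, with the enthalpy/entropy split given by ΔG = ΔH − TΔS. A minimal sketch with illustrative association constants (not the measured values for harmalol):

```python
import math

R = 8.314      # gas constant, J/(mol*K)
T = 298.15     # temperature, K

def gibbs_from_K(K):
    """Standard Gibbs energy of binding (kJ/mol) from an association constant (M^-1)."""
    return -R * T * math.log(K) / 1000.0

# illustrative association constants for two nucleic-acid motifs
for name, K in [("duplex DNA", 1.0e5), ("poly(A)", 1.0e4)]:
    dG = gibbs_from_K(K)
    print(name, round(dG, 1))      # more negative dG = stronger binding

# enthalpy/entropy split: -T*dS = dG - dH
dH = -35.0                          # illustrative calorimetric enthalpy (kJ/mol)
dG = gibbs_from_K(1.0e5)
print(round(dG - dH, 1))            # positive here, so binding is enthalpy driven
```

A tenfold drop in K costs about 5.7 kJ/mol at room temperature, which is the scale on which the four motifs above are ranked.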

Keywords: calorimetry, docking, DNA/RNA-alkaloid interaction, harmalol, spectroscopy

Procedia PDF Downloads 216
1067 Examining the Structural Model of Mindfulness and Headache Intensity With the Mediation of Resilience and Perfectionism in Migraine Patients

Authors: Alireza Monzavi Chaleshtari, Mahnaz Aliakbari Dehkordi, Nazila Esmaeili, Ahmad Alipour, Amin Asadi Hieh

Abstract:

Headache disorders are among the most common disorders of the nervous system and are associated with suffering, disability, and financial costs for patients. Mindfulness, as a lifestyle in line with human nature, can affect the emotional system, i.e., thoughts, body sensations, raw emotions, and action impulses. The aim of this study was to test the fit of a structural model relating mindfulness to headache severity, mediated by resilience and perfectionism, in patients with migraine. Methods: The statistical population of this study included all patients with migraine referred to neurologists in Tehran in the spring and summer of 1401. The inclusion criteria were a diagnosis of migraine by a neurologist, the absence of mental disorders or other physical diseases, and at least a diploma-level education. According to the number of research variables, 180 people were selected by convenience sampling and answered online the Ahvaz perfectionism questionnaire (AMQ), the Connor-Davidson resilience questionnaire (CD-RISC), the Ahvaz migraine headache questionnaire (APS), and the 5-factor mindfulness questionnaire (MAAS). Data were analyzed using structural equation modeling in Amos software. Results: The results showed that the direct path from mindfulness to headache severity was not significant (P>0.05), but the other direct paths (mindfulness to resilience, mindfulness to perfectionism, resilience to headache severity, and perfectionism to headache severity) were significant (P<0.01). After modifying and removing the non-significant paths, the final model fitted. The mediating variables, resilience and perfectionism, mediated all paths from the predictor variable to the criterion. Conclusion: According to the findings of the present study, mindfulness in migraine patients reduces headache severity by promoting resilience and reducing perfectionism.
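The mediation structure tested here (mindfulness acting on headache severity only through a mediator) can be illustrated with ordinary least squares and the product-of-coefficients estimate of the indirect effect. The data below are synthetic, not the clinical sample:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 180                                        # same sample size as the study
X = rng.normal(size=n)                         # "mindfulness"
M = 0.5 * X + rng.normal(scale=0.8, size=n)    # "resilience" (mediator)
Y = -0.6 * M + rng.normal(size=n)              # "headache severity": no direct X path

def ols(y, *cols):
    """Least-squares coefficients, intercept first."""
    A = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(A, y, rcond=None)[0]

a = ols(M, X)[1]                 # path X -> M
_, b, c_prime = ols(Y, M, X)     # path M -> Y and the direct path X -> Y
indirect = a * b                 # product-of-coefficients indirect effect
print(round(indirect, 2), round(c_prime, 2))
```

Full mediation corresponds to a near-zero direct effect `c_prime` alongside a non-zero `indirect`, which mirrors the study's finding that the direct mindfulness-to-headache path was not significant. SEM software such as Amos additionally estimates all paths simultaneously with fit indices and standard errors.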

Keywords: migraine, headache severity, mindfulness, resilience, perfectionism

Procedia PDF Downloads 63
1066 Functional Connectivity Signatures of Polygenic Depression Risk in Youth

Authors: Louise Moles, Steve Riley, Sarah D. Lichenstein, Marzieh Babaeianjelodar, Robert Kohler, Annie Cheng, Corey Horien, Abigail Greene, Wenjing Luo, Jonathan Ahern, Bohan Xu, Yize Zhao, Chun Chieh Fan, R. Todd Constable, Sarah W. Yip

Abstract:

Background: Risks for depression are myriad and include both genetic and brain-based factors. However, relationships between these systems are poorly understood, limiting understanding of disease etiology, particularly at the developmental level. Methods: We use a data-driven machine learning approach, connectome-based predictive modeling (CPM), to identify functional connectivity signatures associated with polygenic risk scores for depression (DEP-PRS) among youth from the Adolescent Brain and Cognitive Development (ABCD) study across diverse brain states: resting state, affective working memory, response inhibition, and reward processing. Results: Using 10-fold cross-validation with 100 iterations and permutation testing, CPM identified connectivity signatures of DEP-PRS across all examined brain states (rho's=0.20-0.27, p's<.001). Across brain states, DEP-PRS was positively predicted by increased connectivity between frontoparietal and salience networks, increased motor-sensory network connectivity, decreased salience-to-subcortical connectivity, and decreased subcortical-to-motor-sensory connectivity. Subsampling analyses demonstrated that model accuracies were robust across random subsamples of N=1,000, N=500, and N=250 but became unstable at N=100. Conclusions: These data identify, for the first time, neural networks of polygenic depression risk in a large sample of youth before the onset of significant clinical impairment. The identified networks may be considered potential treatment targets or vulnerability markers for depression risk.
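The core CPM pipeline (edge selection by correlation with the phenotype, sign-weighted summing into a network-strength feature, linear fit) can be sketched on toy data as below. This is a simplified split-half version, not the paper's 10-fold pipeline on ABCD:

```python
import numpy as np

rng = np.random.default_rng(4)
n_sub, n_edge = 200, 200
edges = rng.normal(size=(n_sub, n_edge))       # one connectivity edge per column
w = np.zeros(n_edge); w[:5] = 1.0              # 5 edges truly carry risk signal
prs = edges @ w + rng.normal(size=n_sub)       # polygenic-risk-like phenotype

def cpm_fit(E, y, r_thresh=0.25):
    """Keep edges whose |correlation| with y exceeds a threshold, sum them
    (sign-weighted) into one network-strength feature, and fit a line."""
    r = np.array([np.corrcoef(E[:, j], y)[0, 1] for j in range(E.shape[1])])
    mask = np.abs(r) > r_thresh
    strength = (E[:, mask] * np.sign(r[mask])).sum(axis=1)
    slope, intercept = np.polyfit(strength, y, 1)
    return mask, np.sign(r), slope, intercept

def cpm_predict(E, mask, sign, slope, intercept):
    strength = (E[:, mask] * sign[mask]).sum(axis=1)
    return slope * strength + intercept

# split-half validation (the study itself uses 10-fold CV plus permutation tests)
half = n_sub // 2
mask, sign, m, b = cpm_fit(edges[:half], prs[:half])
pred = cpm_predict(edges[half:], mask, sign, m, b)
rho = np.corrcoef(pred, prs[half:])[0, 1]
print(round(rho, 2))   # out-of-sample prediction, analogous to the paper's rho
```

The reported positive and negative networks correspond to the positively and negatively correlated edges that survive the selection threshold.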

Keywords: genetics, functional connectivity, pre-adolescents, depression

Procedia PDF Downloads 38
1065 Research on the Conservation Strategy of Territorial Landscape Based on Characteristics: The Case of Fujian, China

Authors: Tingting Huang, Sha Li, Geoffrey Griffiths, Martin Lukac, Jianning Zhu

Abstract:

Territorial landscapes have gradually lost their typical characteristics under long-term human activity. To protect the integrity of regional landscapes, it is necessary to characterize, evaluate, and protect them in a graded manner. Taking Fujian, China, as a case study, this study classifies landscape character at the regional, middle, and detailed scales. A multi-scale approach combining parametric and holistic methods is used to classify and partition landscape character types (LCTs) and landscape character areas (LCAs) at each scale, and a multi-element landscape assessment approach is adopted to derive conservation strategies. First, multiple elements of geography, nature, and the humanities were selected as the basis of assessment at each scale. Second, a parametric classification and partitioning of landscape character was carried out in MATLAB, using Principal Component Analysis followed by two-stage cluster analysis (K-means and Gaussian Mixture Models) to obtain LCTs; the Canny edge detection algorithm was then applied to delineate landscape character contours, and the LCTs and LCAs were corrected by field survey and manual identification. Finally, a Landscape Sensitivity Assessment was performed and five strategies were formulated for the different LCAs: conservation, enhancement, restoration, creation, and combination. This multi-scale identification approach efficiently integrates multiple types of landscape character elements, reduces the difficulty of broad-scale operations in landscape character conservation, and provides a basis for conservation strategies. Grounded in the natural background and the restoration of regional characteristics, the resulting landscape character assessment is objective and can serve as a strong reference for territorial spatial planning at the regional and national scales.
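The parametric classification pipeline described above (PCA, then two-stage clustering with K-means and a Gaussian mixture) can be sketched as follows. This is a minimal illustration, not the authors' MATLAB code: the grid-cell attributes, component counts, and cluster counts are assumed for demonstration, and the subsequent Canny edge-detection step on the rasterized label map is omitted.

```python
# Sketch of two-stage landscape character classification (illustrative only).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Each row stands for one grid cell with hypothetical landscape attributes
# (e.g. elevation, slope, land cover shares); 500 cells x 8 attributes.
X = rng.normal(size=(500, 8))

# Standardize attributes and reduce correlated variables with PCA.
Z = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))

# Stage 1: K-means produces an initial hard partition into character types.
k = 5  # assumed number of landscape character types (LCTs)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Z)

# Stage 2: a Gaussian mixture initialized from the K-means centroids
# refines the partition with soft, probabilistic cluster boundaries.
gmm = GaussianMixture(n_components=k, means_init=km.cluster_centers_,
                      random_state=0).fit(Z)
lct_labels = gmm.predict(Z)  # one LCT label per grid cell
```

In practice the labels would be written back onto the spatial grid, and edge detection on that label raster would yield the landscape character contours that the study corrects by field survey.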

Keywords: parameterization, multi-scale, landscape character identification, landscape character assessment

Procedia PDF Downloads 77
1064 The Role of Social Capital and Dynamic Capabilities in a Circular Economy: Evidence from German Small and Medium-Sized Enterprises

Authors: Antonia Hoffmann, Andrea Stübner

Abstract:

Resource scarcity and rising material prices are forcing companies to rethink their business models. The conventional linear model of economic growth and rising social needs further exacerbates resource scarcity, so economic growth must be decoupled from resource consumption. This can be achieved through the circular economy (CE), which focuses on sustainable product life cycles. However, companies face challenges in implementing CE in their businesses; small and medium-sized enterprises (SMEs) are particularly affected because of their limited resource base. Collaboration and social interaction between different actors can help to overcome these obstacles. Based on a self-generated sample of 1,023 German SMEs, we use a questionnaire to investigate the influence of social capital and its three dimensions (structural, relational, and cognitive capital) on the implementation of CE, and the mediating effect of dynamic capabilities in explaining these relationships. Using regression analyses and structural equation modeling, we find that social capital is positively associated with CE implementation and that dynamic capabilities partially mediate this relationship. Interestingly, our findings suggest that not all dimensions of social capital are equally important for CE implementation. We theoretically and empirically explore the network forms of social capital and extend the CE literature by suggesting that dynamic capabilities help organizations leverage social capital to drive the implementation of CE practices. The findings allow us to suggest several implications for managers and institutions. From a practical perspective, our study contributes to building circular production and service capabilities in SMEs, whose CE activities can transform products and services to contribute to a better and more responsible world.
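The mediation structure the abstract tests (social capital affecting CE implementation both directly and through dynamic capabilities) can be illustrated with a simple regression-based sketch. This is not the authors' model: the data are simulated, the effect sizes are assumed, and a full analysis would use structural equation modeling with the three social-capital dimensions rather than two ordinary least-squares fits.

```python
# Illustrative regression-based mediation sketch (simulated data).
import numpy as np

rng = np.random.default_rng(1)
n = 1023  # sample size matching the abstract
social_capital = rng.normal(size=n)
# Assumed partial-mediation structure for demonstration.
dynamic_caps = 0.5 * social_capital + rng.normal(scale=0.8, size=n)
ce_implementation = (0.3 * social_capital + 0.4 * dynamic_caps
                     + rng.normal(scale=0.8, size=n))

def ols(y, *xs):
    """Least-squares coefficients; intercept first, then one slope per predictor."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Path a: social capital -> mediator (dynamic capabilities).
a = ols(dynamic_caps, social_capital)[1]
# Paths c' (direct effect) and b (mediator effect) from one joint regression.
_, c_prime, b = ols(ce_implementation, social_capital, dynamic_caps)

indirect = a * b          # effect mediated through dynamic capabilities
total = c_prime + indirect
```

A nonzero direct effect alongside a nonzero indirect effect is what "partial mediation" means here: dynamic capabilities carry part, but not all, of the association between social capital and CE implementation.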

Keywords: circular economy, dynamic capabilities, SMEs, social capital

Procedia PDF Downloads 69