3471 A New Approach to Interval Matrices and Applications
Authors: Obaid Algahtani
Abstract:
An interval may be defined as a convex combination as follows: I = [a, b] = {x_α = (1−α)a + αb : α ∈ [0,1]}. Consequently, we may define interval operations by applying the scalar operation point-wise to the corresponding interval points: I ∙ J = {x_α ∙ y_α : α ∈ [0,1], x_α ∈ I, y_α ∈ J}, with the usual restriction 0 ∉ J if ∙ = ÷. These operations are associative: I + (J + K) = (I + J) + K and I*(J*K) = (I*J)*K. These two properties, which are missing from the usual interval operations, enable the extension of the usual linear-system concepts to the interval setting in a seamless manner. The arithmetic introduced here avoids such vague notions as "interval extension", "inclusion function", and the determinants that we encounter in the engineering literature dealing with interval linear systems. On the other hand, these definitions were motivated by our attempt to arrive at a definition of interval random variables and to investigate the corresponding statistical properties; we feel that they are the natural ones for handling interval systems. They enable the extension of many results from usual state-space models to interval state-space models. The interval state-space model considered here is of the form X_(t+1) = AX_t + W_t, Y_t = HX_t + V_t, t ≥ 0, where A ∈ IR^(k×k) and H ∈ IR^(p×k) are interval matrices and W_t ∈ IR^k, V_t ∈ IR^p are zero-mean Gaussian white-noise interval processes. This view is reinforced by the numerical results we obtained in simulation examples.
Keywords: interval analysis, interval matrices, state space model, Kalman filter
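The point-wise operations defined above can be sketched in code. The snippet below is an illustrative reading of the definition, with both operands parameterized by the same α; the `Interval` class and the sampling grid are assumptions for the sketch, not the authors' construction:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    a: float  # lower endpoint
    b: float  # upper endpoint

    def point(self, alpha):
        # x_alpha = (1 - alpha) * a + alpha * b
        return (1 - alpha) * self.a + alpha * self.b

def pointwise(op, I, J, samples=1001):
    # Apply op to (x_alpha, y_alpha) over a grid of alpha in [0, 1]
    # and return the range of the resulting set of points.
    vals = [op(I.point(k / (samples - 1)), J.point(k / (samples - 1)))
            for k in range(samples)]
    return min(vals), max(vals)
```

Under this definition, addition reduces to endpoint addition ([a1 + a2, b1 + b2]), and associativity follows directly from the point-wise scalar associativity; the product x_α·y_α is quadratic in α, so its range is approximated here by sampling.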
Procedia PDF Downloads 425
3470 Comparison of Serum Protein Fraction between Healthy and Diarrhea Calf by Electrophoretogram
Authors: Jinhee Kang, Kwangman Park, Ruhee Song, Suhee Kim, Do-Hyeon Yu, Kyoungseong Choi, Jinho Park
Abstract:
Statement of the Problem: Blood components maintain homeostasis when animals are healthy, and changes in the chemical composition of the blood and body fluids can be observed when animals are diseased. Newborn calves in particular are susceptible to disease, so hematologic and serum chemistry tests can provide important guidelines for the diagnosis and treatment of disease. Diarrhea in newborn calves causes the greatest losses on cattle ranches, whether dairy or fattening operations, and accounts for a large share of calf atrophy and death. However, since electrophoretic studies of calves had not been carried out, a survey analysis was conducted. Methodology and Theoretical Orientation: The calves were divided into healthy and diseased (diarrheic) groups and further classified by age: 1-14 d, 15-28 d, and more than 28 d. Fecal state was scored as solid (0), semi-solid (1), loose (2), or watery (3). No notable pathogens were detected in solid (0) and semi-solid (1) feces, whereas pathogens were detected in loose (2) and watery (3) feces. Findings: The ALB, α-1, α-2, α-SUM, β, and γ (gamma) fractions were examined by electrophoresis in healthy and diarrheic calves. The results showed age-related differences between the two groups. For γ-globulin at 1-14 days of age, healthy calves averaged 16.8% while diarrheic calves averaged 7.7%; for the α-2 fraction at 1-14 days, healthy calves averaged 5.2% while diarrheic calves averaged 8.7%, higher than the healthy animals. For α-1 at 15-28 days and after 28 days, healthy calves averaged 10.4% and 7.5%, whereas diarrheic calves averaged 12.6% and 12.4%, higher than the healthy calves. For α-SUM, healthy calves averaged 21.6%, 16.8%, and 14.5% at 1-14 days, 15-28 days, and after 28 days, respectively, while diarrheic calves averaged 23.1%, 19.5%, and 19.8%.
Conclusion and Significance: In this study, we examined the electrophoresis results of healthy and diseased (diarrheic) calves. γ-globulin levels in diarrheic calves at 1-14 days of age were lower than those of healthy calves, indicating that these calves were unable to consume colostrum from the dam as newborns. The elevated α-1, α-2, and α-SUM levels in diarrheic calves may be associated with an acute inflammatory response. Further research is needed to investigate the effects of acute inflammatory responses on additional serum proteins in calves. Information on the results of the electrophoresis test will be provided where necessary according to the item.
Keywords: alpha, electrophoretogram, serum protein, γ, gamma
Procedia PDF Downloads 140
3469 Approximation of Convex Set by Compactly Semidefinite Representable Set
Authors: Anusuya Ghosh, Vishnu Narayanan
Abstract:
The approximation of a convex set by a semidefinite representable set plays an important role in semidefinite programming, especially in modern convex optimization. Optimizing a linear function over a convex set is a hard problem, but optimizing the linear function over a semidefinite representable set that approximates the convex set is easy, as there exist numerous efficient algorithms for solving semidefinite programming problems. Our approximation technique is therefore significant in optimization. We develop a technique to approximate any closed convex set, say K, by a compactly semidefinite representable set. Further, we prove that there exists a sequence of compactly semidefinite representable sets that gives progressively tighter approximations of the closed convex set K, and we discuss the convergence of this sequence to K. The recession cone of K and the recession cone of the compactly semidefinite representable set are equal, so we say that the sequence of compactly semidefinite representable sets converges strongly to the closed convex set. Thus, this approximation technique is a very useful development in semidefinite programming.
Keywords: semidefinite programming, semidefinite representable set, compactly semidefinite representable set, approximation
Procedia PDF Downloads 388
3468 Quantitative Analysis of Contract Variations Impact on Infrastructure Project Performance
Authors: Soheila Sadeghi
Abstract:
Infrastructure projects often encounter contract variations that can deviate significantly from the original tender estimates, leading to cost overruns, schedule delays, and financial implications. This research aims to quantitatively assess the impact of contract variations on project performance by conducting an in-depth analysis of a comprehensive dataset from the Regional Airport Car Park project. The dataset includes tender budget, contract quantities, rates, claims, and revenue data, providing a unique opportunity to investigate the effects of variations on project outcomes. The study focuses on 21 specific variations identified in the dataset, which represent changes or additions to the project scope. The research methodology involves establishing a baseline for the project's planned cost and scope by examining the tender budget and contract quantities. Each variation is then analyzed in detail, comparing the actual quantities and rates against the tender estimates to determine its impact on project cost and schedule. The claims data is utilized to track the progress of work and identify deviations from the planned schedule. The study employs statistical analysis in R: time series analysis is applied to the claims data to track progress and detect deviations from the planned schedule, and regression analysis is used to investigate the relationship between variations and project performance indicators, such as cost overruns and schedule delays. The research findings highlight the significance of effective variation management in construction projects. The analysis reveals that variations can have a substantial impact on project cost, schedule, and financial outcomes.
The study identifies specific variations that had the most significant influence on the Regional Airport Car Park project's performance, such as PV03 (additional fill, road base gravel, spray seal, and asphalt), PV06 (extension to the commercial car park), and PV07 (additional box out and general fill). These variations contributed to increased costs, schedule delays, and changes in the project's revenue profile. The study also examines the effectiveness of project management practices in managing variations and mitigating their impact. The research suggests that proactive risk management, thorough scope definition, and effective communication among project stakeholders can help minimize the negative consequences of variations. The findings emphasize the importance of establishing clear procedures for identifying, assessing, and managing variations throughout the project lifecycle. The outcomes of this research contribute to the body of knowledge in construction project management by demonstrating the value of analyzing tender, contract, claims, and revenue data in variation impact assessment. However, the research acknowledges the limitations imposed by the dataset, particularly the absence of detailed contract and tender documents. This constraint restricts the depth of analysis possible in investigating the root causes and full extent of variations' impact on the project. Future research could build upon this study by incorporating more comprehensive data sources to further explore the dynamics of variations in construction projects.
Keywords: contract variation impact, quantitative analysis, project performance, claims analysis
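The regression step described above can be illustrated with a minimal sketch. The data below is synthetic and the variable names (`variation_value`, `overrun`) are hypothetical, standing in for the study's R analysis of variations against cost overruns:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical variation sizes and resulting cost overruns (both in $k)
variation_value = rng.uniform(10, 200, size=40)
overrun = 1.8 * variation_value + rng.normal(0, 5, size=40)

# Ordinary least squares: overrun ~ slope * variation_value + intercept
slope, intercept = np.polyfit(variation_value, overrun, 1)
```

The fitted slope then quantifies how strongly overruns scale with variation size, which is the kind of performance indicator the study regresses on.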
Procedia PDF Downloads 42
3467 Study and Simulation of a Dynamic System Using Digital Twin
Authors: J.P. Henriques, E. R. Neto, G. Almeida, G. Ribeiro, J.V. Coutinho, A.B. Lugli
Abstract:
Industry 4.0, or the Fourth Industrial Revolution, is transforming the relationship between people and machines. In this scenario, technologies such as Cloud Computing, the Internet of Things, Augmented Reality, Artificial Intelligence, and Additive Manufacturing, among others, are making industries and devices increasingly intelligent. One of the most powerful technologies of this new revolution is the Digital Twin, which allows the virtualization of a real system or process. In this context, the present paper addresses the linear and nonlinear dynamic study of a didactic level plant using a Digital Twin. In the first part of the work, the level plant is identified at a fixed operating point by using the classical least-squares method. The linearized model is embedded in a Digital Twin using Automation Studio® from Famic Technologies. To validate the usage of the Digital Twin in the linearized study of the plant, the dynamic response of the real system is compared to that of the Digital Twin. Furthermore, to develop the nonlinear model on a Digital Twin, the didactic level plant is identified by using the method proposed by Hammerstein. Different steps are applied to the plant, and from the Hammerstein algorithm, the nonlinear model is obtained for all operating ranges of the plant. As with the linear approach, the nonlinear model is embedded in the Digital Twin, and the dynamic response is compared to the real system at different operating points. Finally, and importantly, from the practical results obtained, one can conclude that the usage of Digital Twins to study dynamic systems is extremely useful in the industrial environment, given that it is possible to develop and tune controllers by using the virtual model of the real system.
Keywords: industry 4.0, digital twin, system identification, linear and nonlinear models
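The least-squares identification step can be sketched as follows. The first-order discrete model y[t+1] = a·y[t] + b·u[t] and the simulated plant below are stand-ins (assumptions for the sketch), not the didactic level plant itself:

```python
import numpy as np

# Simulate a first-order plant around a fixed operating point (noiseless)
a_true, b_true = 0.9, 0.5
u = np.ones(200)                 # step input
y = np.zeros(201)
for t in range(200):
    y[t + 1] = a_true * y[t] + b_true * u[t]

# Least-squares identification: stack regressors [y_t, u_t] and solve
X = np.column_stack([y[:-1], u])
theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
a_hat, b_hat = theta
```

On noiseless simulated data the fit recovers a and b essentially exactly; with real plant data one would add noise handling and operating-point offsets before embedding the model in the twin.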
Procedia PDF Downloads 151
3466 Predicting Bridge Pier Scour Depth with SVM
Authors: Arun Goel
Abstract:
Prediction of maximum local scour is necessary for the safe and economical design of bridges. A number of equations have been developed over the years to predict local scour depth using laboratory data, and a few pier equations have also been proposed using field data. Most of these equations are empirical in nature, as indicated by past publications. In this paper, attempts have been made to compute the local scour depth around a bridge pier in dimensional and non-dimensional form by using linear regression, simple regression, and SVM (poly and RBF kernel) techniques, along with a few conventional empirical equations. The outcome of this study suggests that SVM (poly and RBF) based modeling can be employed as an alternative to linear regression, simple regression, and the conventional empirical equations in predicting the scour depth of bridge piers. The results of the present study, on the basis of the non-dimensional form of bridge pier scour, indicate an improvement in the performance of SVM (poly and RBF) in comparison to the dimensional form.
Keywords: modeling, pier scour, regression, prediction, SVM (poly and RBF kernels)
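As a rough illustration of the kernel-based regression used here, the sketch below fits an RBF kernel ridge regressor with NumPy on synthetic pier data. It is a stand-in for SVM training (which requires a QP solver), and every variable and formula is hypothetical, not from the paper:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances -> RBF Gram matrix
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.uniform(0.5, 3.0, size=(80, 2))      # hypothetical pier width, flow depth
y = 1.2 * X[:, 0] ** 0.65 * X[:, 1] ** 0.3   # stylized scour relation

# Ridge-regularized kernel fit: (K + lam*I) alpha = y
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)
pred = K @ alpha
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

A proper SVR would replace the ridge solve with the epsilon-insensitive QP, but the kernel machinery (poly or RBF Gram matrix) is the same.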
Procedia PDF Downloads 452
3465 Investigation of Mechanical and Rheological Properties of Poly (trimethylene terephthalate) (PTT)/Polyethylene Blend Using Carboxylate and Ionomer as Compatibilizers
Authors: Wuttikorn Chayapanja, Sutep Charoenpongpool, Manit Nithitanakul, Brian P. Grady
Abstract:
Poly(trimethylene terephthalate) (PTT) is a linear aromatic polyester with good strength and stiffness, good surface appearance, low shrinkage and warpage, and good dimensional stability. However, it has low impact strength, which is a problem in automotive applications. Thus, modification of PTT with another polymer, i.e., polymer blending, is one way to develop a new material with excellent properties. In this study, PTT/High-Density Polyethylene (HDPE) blends and PTT/Linear Low-Density Polyethylene (LLDPE) blends, with and without compatibilizers based on maleic anhydride grafted HDPE (MAH-g-HDPE) and sodium-neutralized ethylene-methacrylic acid copolymer (Na-EMAA), were prepared on a twin-screw extruder. The blended samples with different ratios of polymers and compatibilizers were characterized for mechanical and rheological properties. Moreover, the phase morphology and dispersed-phase size were studied using SEM to give a better understanding of the compatibility of the blends.
Keywords: poly trimethylene terephthalate, polyethylene, compatibilizer, polymer blend
Procedia PDF Downloads 416
3464 Robust Inference with a Skew T Distribution
Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici
Abstract:
There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance, and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution, and it also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form; hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates, and even in small samples they are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least-square estimates, which are known to be biased and inefficient in such cases.
Furthermore, in conventional regression analysis, it is assumed that the error terms are distributed normally, and hence the well-known least-square method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent, and even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random errors having a non-normal pattern. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared to the widely used least-square estimates. Relevant tests of hypotheses are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least-square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness
Procedia PDF Downloads 397
3463 Comparative DNA Binding of Iron and Manganese Complexes by Spectroscopic and ITC Techniques and Antibacterial Activity
Authors: Maryam Nejat Dehkordi, Per Lincoln, Hassan Momtaz
Abstract:
The interaction with DNA of Schiff base complexes of iron and manganese (iron: [N,N'-bis(5-(triphenylphosphonium methyl)salicylidene)-1,2-ethanediamine] chloride, [Fe(Salen)]Cl; manganese: [N,N'-bis(5-(triphenylphosphonium methyl)salicylidene)-1,2-ethanediamine] acetate) was investigated by spectroscopic and isothermal titration calorimetry (ITC) techniques. The absorbance spectra of the complexes showed hyper- and hypochromism in the presence of DNA, an indication of the interaction of the complexes with DNA. Linear dichroism (LD) measurements confirmed the bending of DNA in the presence of the complexes. Furthermore, isothermal titration calorimetry experiments showed that the complexes bind to DNA through both electrostatic and hydrophobic interactions, and the ITC profile exhibits the existence of two binding phases for the complex. The antibacterial activity of the ligand and the complexes was tested in vitro against Gram-positive and Gram-negative bacteria.
Keywords: Schiff base complexes, ct-DNA, linear dichroism (LD), isothermal titration calorimetry (ITC), antibacterial activity
Procedia PDF Downloads 471
3462 Capacity of Cold-Formed Steel Warping-Restrained Members Subjected to Combined Axial Compressive Load and Bending
Authors: Maryam Hasanali, Syed Mohammad Mojtabaei, Iman Hajirasouliha, G. Charles Clifton, James B. P. Lim
Abstract:
Cold-formed steel (CFS) elements are increasingly being used as main load-bearing components in the modern construction industry, including low- to mid-rise buildings. In typical multi-storey buildings, CFS structural members act as beam-column elements since they are exposed to combined axial compression and bending actions, both in moment-resisting frames and stud wall systems. Current design specifications, including the American Iron and Steel Institute (AISI S100) and the Australian/New Zealand Standard (AS/NZS 4600), neglect the beneficial effects of warping-restrained boundary conditions in the design of beam-column elements. Furthermore, while a non-linear relationship governs the interaction of axial compression and bending, the combined effect of these actions is taken into account through a simplified linear expression combining pure axial and flexural strengths. This paper aims to evaluate the reliability of the well-known Direct Strength Method (DSM) as well as design proposals found in the literature to provide a better understanding of the efficiency of the code-prescribed linear interaction equation in the strength predictions of CFS beam-columns and the effects of warping-restrained boundary conditions on their behavior. To this end, experimentally validated finite element (FE) models of CFS elements under compression and bending were developed in ABAQUS software, accounting for both non-linear material properties and geometric imperfections. The validated models were then used for a comprehensive parametric study containing 270 FE models, covering a wide range of key design parameters, such as length (i.e., 0.5, 1.5, and 3 m), thickness (i.e., 1, 2, and 4 mm) and cross-sectional dimensions under ten different load eccentricity levels. The results of this parametric study demonstrated that using the DSM led to conservative strength predictions for beam-column members, by up to 55%, depending on the element's length and thickness.
This can be attributed to the errors associated with (i) the absence of warping-restrained boundary condition effects, (ii) the equations for the calculation of buckling loads, and (iii) the linear interaction equation. While the influence of warping restraint is generally less than 6%, the code-suggested interaction equation led to an average error of 4% to 22%, depending on the element lengths. This paper highlights the need to provide more reliable design solutions for CFS beam-column elements for practical design purposes.
Keywords: beam-columns, cold-formed steel, finite element model, interaction equation, warping-restrained boundary conditions
Procedia PDF Downloads 105
3461 Interactive Solutions for the Multi-Objective Capacitated Transportation Problem with Mixed Constraints under Fuzziness
Authors: Aquil Ahmed, Srikant Gupta, Irfan Ali
Abstract:
In this paper, we study a multi-objective capacitated transportation problem (MOCTP) with mixed constraints. The paper comprises the modelling and optimisation of an MOCTP in a fuzzy environment in which some goals are fractional and some are linear. In real-life applications of the fuzzy goal programming (FGP) problem with multiple objectives, it is difficult for the decision maker(s) to determine the goal value of each objective precisely, as the goal values are imprecise or uncertain. We also develop the concept of linearization of fractional goals for solving the MOCTP. Imprecision of the parameters is handled via fuzzy set theory by considering these parameters as trapezoidal fuzzy numbers, and the α-cut approach is used to obtain the crisp values of the parameters. Numerical examples are used to illustrate the method for solving the MOCTP.
Keywords: capacitated transportation problem, multi objective linear programming, multi-objective fractional programming, fuzzy goal programming, fuzzy sets, trapezoidal fuzzy number
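The α-cut of a trapezoidal fuzzy number reduces to a short closed-form computation. In the sketch below, the number is given by (a, b, c, d) with support [a, d] and core [b, c]; this is the standard formula, not code from the paper:

```python
def alpha_cut(a, b, c, d, alpha):
    """Crisp interval of a trapezoidal fuzzy number (a, b, c, d) at level alpha.

    The left edge rises linearly from a (alpha=0) to b (alpha=1); the right
    edge falls from d to c. The alpha-cut is [a + alpha*(b-a), d - alpha*(d-c)].
    """
    assert 0.0 <= alpha <= 1.0 and a <= b <= c <= d
    return a + alpha * (b - a), d - alpha * (d - c)
```

At alpha = 1 the cut collapses to the core [b, c], and at alpha = 0 it is the full support [a, d], which is how the imprecise parameters above are converted into crisp intervals for the linearized goals.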
Procedia PDF Downloads 436
3460 The Magnetized Quantum Breathing in Cylindrical Dusty Plasma
Authors: A. Abdikian
Abstract:
A quantum breathing mode has been theoretically studied in a quantum dusty plasma. By using the linear quantum hydrodynamic model, not only the quantum dispersion relation of the rotation mode but also the void structure has been derived in the presence of an external magnetic field. Although the phase velocity of the magnetized quantum breathing mode is greater than that of the unmagnetized mode, the attenuation of the magnetized mode along the radial distance appears to be slower than that of the unmagnetized mode. By plotting the quantum breathing mode in the presence and absence of a magnetic field, we found that the magnetic field alters the distribution of the dust particles and changes the radial and azimuthal velocities around the axis. Because the magnetic field rotates the dust particles and collects them, it can compensate for the void structure.
Keywords: the linear quantum hydrodynamic model, the magnetized quantum breathing mode, the quantum dispersion relation of rotation mode, void structure
Procedia PDF Downloads 298
3459 Establishment of the Regression Uncertainty of the Critical Heat Flux Power Correlation for an Advanced Fuel Bundle
Authors: L. Q. Yuan, J. Yang, A. Siddiqui
Abstract:
A new regression uncertainty analysis methodology was applied to determine the uncertainties of the critical heat flux (CHF) power correlation for an advanced 43-element bundle design, which was developed by Canadian Nuclear Laboratories (CNL) to achieve improved economics, resource utilization and energy sustainability. The new methodology is considered more appropriate than the traditional methodology in the assessment of the experimental uncertainty associated with regressions. The methodology was first assessed using both the Monte Carlo Method (MCM) and the Taylor Series Method (TSM) for a simple linear regression model, and then extended successfully to a non-linear CHF power regression model (CHF power as a function of inlet temperature, outlet pressure and mass flow rate). The regression uncertainty assessed by MCM agrees well with that by TSM. An equation to evaluate the CHF power regression uncertainty was developed and expressed as a function of independent variables that determine the CHF power.
Keywords: CHF experiment, CHF correlation, regression uncertainty, Monte Carlo Method, Taylor Series Method
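The MCM-versus-TSM comparison for the simple linear regression case can be sketched as follows. The data here is synthetic (not CNL's CHF data): the TSM slope standard error is the familiar analytic formula, while the MCM re-simulates the noise and refits many times:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 10.0, 50)
sigma = 0.5                                   # assumed measurement noise
y = 2.0 + 3.0 * x + rng.normal(0, sigma, x.size)

# TSM: first-order propagation (exact here, since the estimator is
# linear in the data): se(slope) = sigma / sqrt(sum((x - xbar)^2))
se_tsm = sigma / np.sqrt(((x - x.mean()) ** 2).sum())

# MCM: repeat the simulated experiment and look at the spread of slopes
slopes = [np.polyfit(x, 2.0 + 3.0 * x + rng.normal(0, sigma, x.size), 1)[0]
          for _ in range(2000)]
se_mcm = np.std(slopes)
```

For the linear model the two agree to within Monte Carlo sampling error, which mirrors the paper's observation before the method is extended to the non-linear CHF power correlation.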
Procedia PDF Downloads 417
3458 Research on Energy Field Intervening in Lost Space Renewal Strategy
Authors: Tianyue Wan
Abstract:
Lost space, a concept proposed by Roger Trancik, is space that has gone unused for a long time and fallen into decline. In his book Finding Lost Space: Theories of Urban Design, lost spaces are defined as those anti-traditional spaces that are unpleasant, need to be redesigned, and bring no benefit to the environment or users; they have no defined boundaries and do not connect the various landscape elements in a coherent way. With the rapid development of urbanization in China, the blind spots of urban renewal have become chaotic lost spaces incompatible with that rapid development. Therefore, lost space urgently needs to be reconstructed against the background of infill development and reduction planning in China. The formation of lost space is also an invisible division of social hierarchy. This paper tries to break down that class division and the estrangement between people through the regeneration of lost space, ultimately enhancing vitality, rebuilding a sense of belonging, and creating a continuous open public space for local people. Based on the concepts of lost space and the energy field, this paper clarifies the significance of the energy field in lost-space renovation and then introduces the energy field into lost space by using the magnetic field in physics as a prototype. The construction of the energy field is supported by space theory, spatial morphology analysis theory, public communication theory, urban diversity theory, and city image theory. Taking Lingjiao Park in Wuhan, China, as an example, this paper chooses the lost space on the west side of the park as the research object. According to the current situation of this site, energy intervention strategies are proposed from four aspects: natural ecology, space rights, intangible cultural heritage, and infrastructure configuration.
Six specific lost-space renewal methods are used in this work: "riveting", "breakthrough", "radiation", "inheritance", "connection", and "intersection". After the renovation, the space will be reintroduced to active crowds. The integration of activities and space creates a sense of place, improves the walking experience, restores the vitality of the space, and provides a reference for the reconstruction of lost space in the city.
Keywords: dynamic vitality intervention, lost space, space vitality, sense of place
Procedia PDF Downloads 113
3457 Development of Tensile Stress-Strain Relationship for High-Strength Steel Fiber Reinforced Concrete
Authors: H. A. Alguhi, W. A. Elsaigh
Abstract:
This paper provides a tensile stress-strain (σ-ε) relationship for High-Strength Steel Fiber Reinforced Concrete (HSFRC). The load-deflection (P-δ) behavior of HSFRC beams tested under four-point flexural load was used with inverse analysis to calculate the tensile σ-ε relationship for the tested concrete grades (70 and 90 MPa) containing 60 kg/m3 (0.76%) of hook-end steel fibers. A first estimate of the tensile σ-ε relationship is obtained using RILEM TC 162-TDF and other methods available in the literature, frequently used for determining the tensile σ-ε relationship of Normal-Strength Concrete (NSC). The Non-Linear Finite Element Analysis (NLFEA) package ABAQUS® is used to model the beams' P-δ behavior. The results have shown that an element-size-dependent tensile σ-ε relationship for HSFRC can be successfully generated and adopted for further analyses involving HSFRC structures.
Keywords: tensile stress-strain, flexural response, high strength concrete, steel fibers, non-linear finite element analysis
Procedia PDF Downloads 360
3456 Spectrum Assignment Algorithms in Optical Networks with Protection
Authors: Qusay Alghazali, Tibor Cinkler, Abdulhalim Fayad
Abstract:
In modern optical networks, flex-grid spectrum usage is most widespread, where higher bit rate streams get larger spectrum slices while lower bit rate traffic streams get smaller spectrum slices. In current practice, under ITU-T Recommendation G.694.1, spectrum slices of 50, 75, and 100 GHz are being used, with the central frequency at 193.1 THz. However, when these spectrum slices are not sufficient, multiple spectrum slices can be used, either next to one another or anywhere in the optical spectrum. In this paper, we propose an analysis of the wavelength assignment problem and compare different algorithms for spectrum assignment with and without protection. As a reference for comparisons, we concluded that Integer Linear Programming (ILP) provides the global optimum for all cases. The most scalable algorithm is the greedy one, which yields results quickly even for larger network instances. The algorithms were benchmarked using an implementation based on the LEMON C++ optimization library, with simulation runs evaluated on the minimum number of spectrum slices assigned to lightpaths and on execution time.
Keywords: spectrum assignment, integer linear programming, greedy algorithm, international telecommunication union, library for efficient modeling and optimization in networks
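The greedy spectrum assignment can be sketched as a first-fit search for contiguous free slices along a route. The slot model below is a simplification (uniform slices, boolean occupancy) and the function name is ours, not the paper's LEMON implementation:

```python
def first_fit(link_slots, route, demand):
    """Find the lowest starting slot where `demand` contiguous slots are
    free on every link of `route`; mark them occupied and return the start,
    or return None if no such block exists (spectrum continuity enforced)."""
    n = len(next(iter(link_slots.values())))
    for start in range(n - demand + 1):
        if all(not any(link_slots[link][start:start + demand]) for link in route):
            for link in route:
                for s in range(start, start + demand):
                    link_slots[link][s] = True
            return start
    return None
```

The ILP reference formulation would instead choose all starting slots jointly to minimize the total number of slices used; first-fit trades that optimality for speed.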
Procedia PDF Downloads 170
3455 A Genetic Algorithm Based Permutation and Non-Permutation Scheduling Heuristics for Finite Capacity Material Requirement Planning Problem
Authors: Watchara Songserm, Teeradej Wuttipornpun
Abstract:
This paper presents genetic algorithm based permutation and non-permutation scheduling heuristics (GAPNP) to solve a multi-stage finite capacity material requirement planning (FCMRP) problem in an automotive assembly flow shop with unrelated parallel machines. In the algorithm, the sequences of orders are iteratively improved by the GA operators, whereas the required operations are scheduled based on the presented permutation and non-permutation heuristics. Finally, linear programming is applied to minimize the total cost. The presented GAPNP algorithm is evaluated using real datasets from automotive companies. The required parameters for GAPNP are carefully tuned to obtain a common parameter setting for all case studies. The results show that GAPNP significantly outperforms the benchmark algorithm by about 30% on average.
Keywords: capacitated MRP, genetic algorithm, linear programming, automotive industries, flow shop, application in industry
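A minimal sketch of the permutation-GA core (order crossover plus swap mutation) is given below, evolving a job sequence that minimizes total flow time on a single machine. The FCMRP scheduling stages and the LP costing are omitted, and all parameter values are illustrative, not the paper's tuned settings:

```python
import random

def flow_time(seq, p):
    # Total flow time of job sequence `seq` with processing times `p`
    t = total = 0
    for j in seq:
        t += p[j]
        total += t
    return total

def order_crossover(a, b):
    # OX: copy a random slice from parent a, fill the rest in b's order
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    rest = [g for g in b if g not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def ga(p, pop_size=30, gens=100):
    n = len(p)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda s: flow_time(s, p))   # rank by cost
        survivors = pop[:pop_size // 2]           # truncation selection
        children = [order_crossover(random.choice(survivors),
                                    random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        for c in children:                        # swap mutation
            if random.random() < 0.2:
                i, j = random.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
        pop = survivors + children
    return min(pop, key=lambda s: flow_time(s, p))
```

In GAPNP the fitness evaluation would instead run the permutation/non-permutation scheduling heuristics and the LP cost model on each candidate order; the evolutionary loop itself is the same shape.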
Procedia PDF Downloads 490
3454 Apricot Insurance Portfolio Risk
Authors: Kasirga Yildirak, Ismail Gur
Abstract:
We propose a model to measure the hail risk of an agricultural insurance portfolio. Hail is one of the major catastrophic events that cause large losses to an insurer. Moreover, it is very hard to predict due to its peculiar atmospheric characteristics. We make use of parcel-based claims data on apricot damage collected by the Turkish Agricultural Insurance Pool (TARSIM). As our ultimate aim is to compute the loadings assigned to specific parcels, we build a portfolio risk model that makes use of the PD and the severity of the exposures. The PD is computed by spherical-linear and circular-linear regression models, as the data carry coordinate information and seasonality. Severity is mapped into integer brackets so that the probability generating function can be employed. Individual regressions are run on each of the clusters estimated on different criteria. The loss distribution is constructed by the Panjer recursion technique. We also show that the one-risk-one-crop model can easily be extended to the multi-risk-multi-crop model by assuming conditional independence. Keywords: hail insurance, spherical regression, circular regression, spherical clustering
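The Panjer recursion mentioned above computes the aggregate-loss distribution directly once severities sit in integer brackets. A minimal sketch for the Poisson frequency case (the parameter values below are illustrative, not TARSIM figures):

```python
import math

def panjer_compound_poisson(lam, sev, s_max):
    """Panjer recursion for a compound Poisson distribution.

    lam    : Poisson claim-count mean
    sev    : sev[k] = probability an individual claim has integer size k
    s_max  : largest aggregate loss to evaluate
    Returns g with g[s] = P(aggregate loss == s).
    Recursion: g[0] = exp(lam*(sev[0]-1)),
               g[s] = (lam/s) * sum_k k*sev[k]*g[s-k].
    """
    g = [math.exp(lam * (sev[0] - 1.0))]
    for s in range(1, s_max + 1):
        total = 0.0
        for k in range(1, min(len(sev) - 1, s) + 1):
            total += k * sev[k] * g[s - k]
        g.append(lam / s * total)
    return g
```

With a degenerate severity (every claim of size 1) the recursion reproduces the Poisson probability mass function itself, which makes a convenient sanity check.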
Procedia PDF Downloads 251
3453 Effect of R&D Human Capital Support for SMEs: An Analysis of SMEs Support Program in South Korea
Authors: Misun Kim, Beomsoo Park
Abstract:
The Korean government has strongly supported SMEs financially and technically. It has also changed R&D manpower management so that SMEs can benefit from the knowledge of highly qualified experts. This study evaluates the impacts of this policy on SMEs and analyzes the factors affecting the growth of the firms. We then compare the characteristics of high-growth companies to those of companies in general. These factors could be used in the future for identifying firms that would significantly benefit from manpower support. Keywords: dispatch human capital, high growth, science and technology policy, SMEs
Procedia PDF Downloads 303
3452 Formation of Chemical Compound Layer at the Interface of Initial Substances A and B with Dominance of Diffusion of the A Atoms
Authors: Pavlo Selyshchev, Samuel Akintunde
Abstract:
A theoretical approach is developed to describe the formation of a chemical compound layer at the interface between initial substances A and B due to interfacial interaction and diffusion. We consider the situation in which the speed of the interfacial interaction is large enough and the diffusion of A-atoms through the AB-layer is much faster than the diffusion of B-atoms. Atoms from the A-layer diffuse toward the B-atoms and form the compound AB on the surface of the B-layer. The B-atoms are assumed to be immobile. The growth kinetics of the AB-layer is described by two differential equations with non-linear coupling, producing a good fit to the experimental data. It is shown that the growth of the thickness of the AB-layer is determined by the dependence of the chemical reaction rate on the reactant concentrations. In a special case, the thickness of the AB-layer can grow linearly or parabolically, depending on which of the processes (interaction or diffusion) controls the growth. The thickness of the AB-layer as a function of time is obtained. The moment of time (transition point) at which linear growth changes to parabolic is found. Keywords: phase formation, binary systems, interfacial reaction, diffusion, compound layers, growth kinetics
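The linear-to-parabolic transition described above can be reproduced with a standard textbook model of mixed reaction/diffusion control, dX/dt = kD/(D + kX): while kX << D the rate is approximately k (linear growth), and once kX >> D the rate is approximately D/X, giving X ~ sqrt(2Dt). This is a hedged stand-in for the paper's coupled equations, with invented values of k and D:

```python
def layer_thickness(k, D, t_end, dt=1e-4):
    """Euler integration of dX/dt = k*D/(D + k*X).

    Reaction-limited regime (k*X << D): rate ~ k, thickness grows
    linearly. Diffusion-limited regime (k*X >> D): rate ~ D/X,
    thickness grows parabolically as X ~ sqrt(2*D*t). The transition
    point sits near X ~ D/k.
    """
    x, t = 0.0, 0.0
    while t < t_end:
        x += dt * k * D / (D + k * x)
        t += dt
    return x
```

With a large D the thickness at time t stays close to k*t; with a large k, doubling sqrt(t) doubles the thickness, the parabolic signature.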
Procedia PDF Downloads 571
3451 Low-Cost Image Processing System for Evaluating Pavement Surface Distress
Authors: Keerti Kembhavi, M. R. Archana, V. Anjaneyappa
Abstract:
Most asphalt pavement condition evaluations use rating frameworks in which asphalt pavement distress is estimated by type, extent, and severity. Rating is carried out through the pavement condition rating (PCR), which is tedious and expensive. This paper presents the development of a low-cost technique for pavement distress image analysis that permits the identification of potholes and cracks. The paper explores the application of image processing tools for the detection of potholes and cracks. Longitudinal cracking and potholes are detected using fuzzy c-means (FCM) clustering followed by a spectral clustering algorithm. The framework comprises three phases: image acquisition, processing, and extraction of features. A digital camera (GoPro) with a holder is used to capture pavement distress images from a moving vehicle. The FCM classifier and spectral clustering algorithms are used to compute features and classify the longitudinal cracking and potholes. The MATLAB R2016a image processing toolkit is used for performance analysis to assess the viability of pavement distress detection on selected urban stretches of Bengaluru city, India. The outcomes of image evaluation with the semi-automated image processing framework represented the features of longitudinal cracks and potholes with an accuracy of about 80%. Further, the detected images are validated against the actual dimensions, and the dimension variability is about 0.46. The linear regression model y = 1.171x - 0.155 is obtained from the existing and experimental (image-processing) areas. The R² obtained from the best-fit line is 0.807, which is interpreted in the linear regression model as a 'large positive linear association'. Keywords: crack detection, pothole detection, spectral clustering, fuzzy-c-means
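The validation step above fits a line between actual and image-derived distress areas and reports its R². A minimal ordinary-least-squares sketch of that computation (the sample data below are invented, not the study's measurements):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a*x + b plus the coefficient of
    determination R^2 (the quantity reported as 0.807 in the study)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot
```

Feeding in paired (image-measured area, actual area) samples would yield the slope/intercept pair analogous to the reported y = 1.171x - 0.155.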
Procedia PDF Downloads 182
3450 The Development of a Cyber Violence Measurement Tool for Youths: A Multi-Reporting of Ecological Factors
Authors: Jong-Hyo Park, Eunyoung Choi, Jae-Yeon Lim, Seon-Suk Lee, Yeong-Rong Koo, Ji-Ung Kwon, Kyung-Sung Kim, Jong-Ik Lee, Juhan Park, Hyun-Kyu Lee, Won-Kyoung Oh, Jisang Lee, Jiwon Choe
Abstract:
Due to COVID-19, cyber violence among youths has soared as they spend more time online than before. In contrast to these deepening concerns, measurement tools that can assess the vulnerability of individual youths to cyber violence still need to be supplemented. Existing measurement tools lack consideration of the various factors related to cyber violence among youths. Most of the tools are self-report questionnaires, and such adolescent self-report questionnaires can underestimate harmful behavior and overestimate the experience of damage. Therefore, this study aims to develop a multi-report measurement tool for youths that can reliably measure individuals' ecological factors related to cyber violence. The literature review explored factors related to cyber violence, and the questions were constructed accordingly. The face validity of the questions was confirmed by conducting focus group interviews. Exploratory and confirmatory factor analyses (N=671) were also conducted for statistical validation. This study developed a multi-report measurement tool for cyber violence with 161 questions, consisting of six domains: online behavior, cyber violence awareness, victimization-perpetration-witness experience, coping efficacy (individuals, peers, teachers, and parents), psychological characteristics, and pro-social capabilities. In addition to the self-report from a youth respondent, this measurement tool includes reports from peers, teachers, and parents about the respondent. It is thus possible to reliably measure the ecological factors of individual youths who are vulnerable or highly resistant to cyber violence. In schools, teachers could refer to the measurement results for guiding students, better understanding their cyber violence conditions, and assessing their pro-social capabilities. With the measurement results, teachers and police officers could detect perpetrators or victims and intervene immediately. In addition, this measurement tool could be used to analyze the effects of prevention and intervention programs for cyber violence and to draw appropriate suggestions. Keywords: adolescents, cyber violence, cyber violence measurement tool, measurement tool, multi-report measurement tool, youths
Procedia PDF Downloads 101
3449 Corrosion Protection of Structural Steel by Surfactant Containing Reagents
Authors: D. Erdenechimeg, T. Bujinlkham, N. Erdenepurev
Abstract:
The anti-corrosion performance of fatty acid coated mild steel samples is studied. Samples of structural steel were coated with collector reagents deposited from a surfactant in ethanol solution and overcoated with an epoxy barrier paint. A quantitative corrosion rate was determined by the linear polarization resistance method using a bipotentiostat/galvanostat 400. Coating morphology was determined by scanning electron microscopy. A test of the hydrophobicity imparted to the steel surface by the surfactant was performed. The main component, iron, present in high content, was determined in the samples by a chemical method, and the other metal contents were determined by inductively coupled plasma optical emission spectrometry (ICP-OES). Prior to measuring the corrosion rate, mechanical and chemical treatments were performed to prepare the test specimens. By overcoating the metal samples with epoxy barrier paint after exposing them to the surfactant, the corrosion rate can be held to 34-35 µm/year. Keywords: corrosion, linear polarization resistance, coating, surfactant
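The linear polarization resistance method above converts a measured polarization resistance into a corrosion rate via the Stern-Geary relation and Faraday's law. A sketch of that standard conversion for iron-based steel; the Tafel slopes and the example resistance are illustrative assumptions, not the study's measurements:

```python
def corrosion_rate_um_per_year(rp_ohm_cm2, beta_a=0.12, beta_c=0.12,
                               eq_weight=27.92, density=7.87):
    """Corrosion rate (micrometres/year) from polarization resistance.

    Stern-Geary: i_corr = B / R_p, with B = ba*bc / (2.303*(ba+bc))
    in volts and R_p in ohm*cm^2. The Faraday-law conversion
    3.27e-3 * i_corr(uA/cm^2) * EW / rho gives mm/year, hence the
    factor 3.27 for um/year. EW and rho default to iron's values.
    """
    B = (beta_a * beta_c) / (2.303 * (beta_a + beta_c))
    i_corr_uA = B / rp_ohm_cm2 * 1e6      # A/cm^2 -> uA/cm^2
    return 3.27 * i_corr_uA * eq_weight / density
```

For example, a polarization resistance of roughly 26 kOhm*cm² with these assumed Tafel slopes corresponds to about 11.6 µm/year; larger R_p (better barrier coating) gives proportionally lower rates.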
Procedia PDF Downloads 99
3448 A Reconfigurable Microstrip Patch Antenna with Polyphase Filter for Polarization Diversity and Cross Polarization Filtering Operation
Authors: Lakhdar Zaid, Albane Sangiovanni
Abstract:
A reconfigurable microstrip patch antenna with a polyphase filter for polarization diversity and cross-polarization filtering operation is presented in this paper. In our approach, a polyphase filter is used to obtain four outputs with 90° phase shifts to feed a square microstrip patch antenna. The antenna can be switched between four polarization states in transmission as well as in receiving mode. Switches are interconnected with the polyphase filter network to produce left-hand circular polarization, right-hand circular polarization, horizontal linear polarization, and vertical linear polarization. An additional advantage of using the polyphase filter is its capability for cross-polarization filtering in right-hand and left-hand circular polarization operation. The theoretical and simulated results demonstrate that the polyphase filter is a good candidate to drive a microstrip patch antenna to accomplish polarization diversity and cross-polarization filtering operation. Keywords: active antenna, polarization diversity, patch antenna, polyphase filter
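The four polarization states above follow from the relative phase between two orthogonal field components: a 90° shift yields circular polarization, a 0° or 180° shift yields linear. A small Jones-calculus sketch of that decomposition (this illustrates the underlying polarization algebra, not the paper's feed network; the handedness sign convention is an assumption):

```python
import math

def circular_components(ex, ey):
    """Split a Jones vector (Ex, Ey) into the two circular basis
    components. One vanishing magnitude means pure circular
    polarization (single handedness); equal magnitudes mean purely
    linear polarization. Which sign is 'right-hand' depends on the
    chosen convention."""
    e_r = (ex - 1j * ey) / math.sqrt(2)
    e_l = (ex + 1j * ey) / math.sqrt(2)
    return abs(e_r), abs(e_l)
```

A 90° phase offset between equal-amplitude feeds puts all the power into one circular component (the state a cross-polarization filter would pass or reject), while an in-phase pair splits evenly between both, i.e. linear polarization.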
Procedia PDF Downloads 412
3447 Enhancing Understanding and Engagement in Linear Motion Using 7R-Based Module
Authors: Mary Joy C. Montenegro, Voltaire M. Mistades
Abstract:
This action research was implemented to enhance the teaching of linear motion and to improve students' conceptual understanding and engagement using a developed 7R-based module called the 'module on vectors and one-dimensional kinematics' (MVOK). MVOK was validated in terms of objectives, contents, format and language used, presentation, usefulness, and overall presentation. The validation process revealed a value of 4.7, interpreted as 'Very Acceptable', with substantial agreement (0.60) among the validators. One intact class of 46 Grade 12 STEM students from one of the public schools in Paranaque City served as the participants of this study. The students were taught using the module during the first semester of the academic year 2019-2020. Employing a mixed-method approach, quantitative data were gathered using a pretest/posttest, activity sheets, problem sets, and a survey form, while qualitative data were obtained from surveys, interviews, observations, and a reflection log. After the implementation, there was a significant difference of 18.4 in students' conceptual understanding, as shown in their pre-test and post-test scores on the 24-item test, with a moderate Hake gain equal to 0.45 and an effect size of 0.83. Moreover, the scores on the activity and problem sets have a 'very good' to 'excellent' rating, which signifies an increase in the level of students' conceptual understanding. There also exists a significant difference between the mean scores of students' engagement overall (t = 4.79, p = 0.000, p < 0.05) and in the dimensions of emotion (t = 2.51, p = 0.03) and participation/interaction (t = 5.75, p = 0.001). These findings were supported by the gathered qualitative data. Positive views were elicited from the students since the module is an accessible tool for learning and has well-detailed explanations and examples. The results of this study may substantiate that using MVOK will lead to better understanding of physics content and higher engagement. Keywords: conceptual understanding, engagement, linear motion, module
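The Hake gain and effect size reported above are simple standardized statistics; a minimal sketch of both (the pre-test mean and pooled standard deviation below are hypothetical placeholders chosen only to reproduce figures of the same magnitude, not the study's raw data):

```python
def hake_gain(pre, post, max_score):
    """Normalized (Hake) gain: the fraction of the possible improvement
    actually achieved between pre-test and post-test mean scores,
    g = (post - pre) / (max - pre). Values near 0.3-0.7 are
    conventionally labelled 'moderate'."""
    return (post - pre) / (max_score - pre)

def cohens_d(pre, post, pooled_sd):
    """Effect size as a standardized mean difference."""
    return (post - pre) / pooled_sd
```

For instance, a hypothetical rise from a mean of 10.0 to 16.3 on the 24-item test gives g = 6.3/14 = 0.45, matching the moderate gain reported.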
Procedia PDF Downloads 132
3446 Machine Vision System for Measuring the Quality of Bulk Sun-dried Organic Raisins
Authors: Navab Karimi, Tohid Alizadeh
Abstract:
An intelligent vision-based system was designed to measure the quality and purity of raisins. A machine vision setup was utilized to capture images of bulk raisins with 5-50% mixtures of pure and impure berries. The textural features of the bulk raisins were extracted using grey-level histograms, the grey-level co-occurrence matrix (GLCM), and local binary patterns (a total of 108 features). A genetic algorithm and neural network regression were used for selecting and ranking the best features (21 features). As a result, the GLCM feature set was found to have the highest accuracy (92.4%) among the other sets. Subsequently, multiple feature combinations from the previous stage were fed into a second regression (linear regression) to increase accuracy, wherein a combination of 16 features was found to be the optimum. Finally, a support vector machine (SVM) classifier was used to differentiate the mixtures, producing the best efficiency and accuracy of 96.2% and 97.35%, respectively. Keywords: sun-dried organic raisin, genetic algorithm, feature extraction, ANN regression, linear regression, support vector machine, South Azerbaijan
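The GLCM features that performed best above are co-occurrence statistics over pairs of neighbouring pixel intensities. A self-contained sketch of computing a GLCM for one offset and three classic texture features from it (a generic illustration of the technique; the study's exact offsets, quantization levels, and feature list are not reproduced here):

```python
def glcm_features(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for offset (dx, dy) over a
    quantized image (list of rows of integer levels), plus three
    Haralick-style features: contrast, energy, homogeneity."""
    h, w = len(img), len(img[0])
    glcm = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y][x]][img[y + dy][x + dx]] += 1
            total += 1
    glcm = [[c / total for c in row] for row in glcm]   # normalize
    pairs = [(i, j) for i in range(levels) for j in range(levels)]
    contrast = sum(glcm[i][j] * (i - j) ** 2 for i, j in pairs)
    energy = sum(glcm[i][j] ** 2 for i, j in pairs)
    homogeneity = sum(glcm[i][j] / (1 + abs(i - j)) for i, j in pairs)
    return contrast, energy, homogeneity
```

A perfectly uniform patch gives zero contrast and maximal energy/homogeneity, while a high-frequency texture (e.g. alternating levels) pushes contrast up, which is what lets such features separate pure from impure raisin mixtures downstream of the SVM.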
Procedia PDF Downloads 73
3445 Design of Identification Based Adaptive Control for Fermentation Process in Bioreactor
Authors: J. Ritonja
Abstract:
Biochemical technology has been developing extremely fast since the middle of the last century. The main reason for such development is the requirement for large-scale production of high-quality biologically manufactured products such as pharmaceuticals, foods, and beverages. The impact of the biochemical industry on the world economy is enormous. The great importance of this industry also results in intensive development in the scientific disciplines relevant to biochemical technology. In addition to developments in the fields of biology and chemistry, which make it possible to understand complex biochemical processes, development in the field of control theory and applications is also very important. In this paper, the control of a biochemical reactor for milk fermentation is studied. During the fermentation process, the biophysical quantities must be precisely controlled to obtain a high-quality product. To control these quantities, the bioreactor's stirring drive and/or heating system can be used. Available commercial biochemical reactors are equipped with open-loop or conventional linear closed-loop control systems. Due to the considerable parameter variations and the partial nonlinearity of the biochemical process, the results obtained with these control systems are not satisfactory. To improve the fermentation process, a self-tuning adaptive control system is proposed. The use of self-tuning adaptive control is suggested because the parameter variations of the studied biochemical process are very slow in most cases. To determine the linearized mathematical model of the fermentation process, the recursive least squares identification method was used. Based on the obtained mathematical model, the linear quadratic regulator was tuned. The parameter identification and the controller synthesis are executed on-line and adapt the controller's parameters to the fermentation process dynamics during operation. The use of the proposed combination represents an original solution for the control of the milk fermentation process. The purpose of the paper is to contribute to the progress of control systems for biochemical reactors. The proposed adaptive control system was tested thoroughly. From the obtained results it is obvious that the proposed adaptive control system assures much better tracking of the reference signal than a conventional linear control system with fixed controller parameters. Keywords: adaptive control, biochemical reactor, linear quadratic regulator, recursive least square identification
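The recursive least squares identification step described above can be sketched for a first-order linearized model; the process dynamics, forgetting factor, and data below are illustrative assumptions, not the paper's milk-fermentation model.

```python
def rls_identify(us, ys, lam=0.99):
    """Recursive least squares estimate of (a, b) in the first-order
    model y[t] = a*y[t-1] + b*u[t-1], with forgetting factor lam
    (lam < 1 discounts old data so slow parameter drift is tracked)."""
    theta = [0.0, 0.0]                  # parameter estimates [a, b]
    P = [[1000.0, 0.0], [0.0, 1000.0]]  # large initial covariance
    for t in range(1, len(ys)):
        phi = [ys[t - 1], us[t - 1]]    # regressor vector
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        k = [Pphi[0] / denom, Pphi[1] / denom]          # update gain
        err = ys[t] - (theta[0] * phi[0] + theta[1] * phi[1])
        theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
        # P <- (P - k * phi' * P) / lam  (P stays symmetric)
        P = [[(P[i][j] - k[i] * Pphi[j]) / lam for j in range(2)]
             for i in range(2)]
    return theta
```

In the paper's scheme the model identified this way would then be fed on-line to the linear quadratic regulator synthesis, closing the self-tuning loop.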
Procedia PDF Downloads 126
3444 Global Production of Systematic Reviews on Population Health Issues in the Middle East and North Africa: Preliminary Results of a Systematic Overview and Bibliometric Analysis, 2008-2016
Authors: Karima Chaabna, Sohaila Cheema, Amit Abraham, Hekmat Alrouh, Ravinder Mamtani, Javaid I. Sheikh
Abstract:
We aimed to assess the production of systematic reviews (SRs) that synthesize observational studies discussing population health issues in the Middle East and North Africa (MENA). Two independent reviewers systematically searched MEDLINE through PubMed. Between 2008 and 2016, 5,747 articles (reviews, systematic reviews, and meta-analyses) were identified. Following a multi-stage screening process, 387 SRs (with or without meta-analysis) on population health issues in the MENA were included in our overview. Citation numbers for each SR were retrieved from Google Scholar. The impact factor of the journal during the publication year of each included SR was retrieved from the Institute for Scientific Information's Journal Citation Reports. We conducted linear regression analyses to assess time trends in the number of publications according to the SRs' characteristics. We found a statistically significant linear increase in the annual number of SRs summarizing observational studies on MENA population health (p-value < 0.0001, R² = 0.95), from 15 in 2008 to 81 in 2016. Our analysis also reveals statistically significant linear increases in the numbers of SRs published by authors affiliated with institutions located inside MENA and/or neighboring countries (N = 113, p-value < 0.0001, R² = 0.90), by authors located outside MENA (N = 155, p-value = 0.0007, R² = 0.82), and by collaborating authors affiliated with institutions located outside MENA and inside the region and/or in MENA's neighboring countries (total number of SRs (N) = 119, p-value = 0.0004, R² = 0.85). Furthermore, these SRs were published in journals with an IF ranging from 0 to 47.8 (median = 2.1). Statistically significant linear increases in the numbers of published SRs were demonstrated within journal impact factor (IF) categories (IF = [0-2[: R² = 0.79, p-value = 0.0012; IF = [2-4[: R² = 0.86, p-value = 0.0003; and IF = [4-6[: R² = 0.53, p-value = 0.026). Additionally, the annual number of citations to the SRs varied between 0 and 471 (median = 7). While each year a couple of SRs received more than 50 annual citations, there were statistically significant linear increases in the numbers of published SRs with an annual number of citations at [0-10[ (R² = 0.89, p-value = 0.00014) and at [10-50[ (R² = 0.76, p-value = 0.0021). Between 2008 and 2016, an increasing number of SRs summarizing observational studies on population health issues in the MENA were published. The authors of these SRs were located inside and/or outside the MENA region, and an increasing number of collaborations were seen. The increases were predominantly observed in journals with an IF between zero and six. Interestingly, SRs covering MENA region countries are being increasingly cited, indicating growing interest in this region's population health issues. Keywords: bibliometric, citation, impact factor, Middle East and North Africa, population health, systematic review
Procedia PDF Downloads 155
3443 A Robust System for Foot Arch Type Classification from Static Foot Pressure Distribution Data Using Linear Discriminant Analysis
Authors: R. Periyasamy, Deepak Joshi, Sneh Anand
Abstract:
Foot posture assessment is important for evaluating foot type, which can cause gait and postural defects in all age groups. Although different methods are used for the classification of foot arch type in clinical/research examinations, there is no clear approach for selecting the most appropriate measurement system. Therefore, the aim of this study was to develop a system for the evaluation of foot type as a clinical decision-making aid for the diagnosis of flat and normal arches, based on the arch index (AI) and a foot pressure distribution parameter, the power ratio (PR). The accuracy of the system was evaluated for 27 subjects with ages ranging from 24 to 65 years. Foot area measurements (hindfoot, midfoot, and forefoot) were acquired simultaneously from foot pressure intensity images using the portable PedoPowerGraph system, and the images were analyzed in the frequency domain to obtain the foot pressure distribution parameter PR. From our results, we obtain 100% classification accuracy for normal and flat feet by using the linear discriminant analysis method. We observe no misclassification of foot types because foot pressure distribution data are incorporated instead of only the arch index (AI). We found that the midfoot pressure distribution ratio data and the arch index (AI) value are well correlated with foot arch type based on visual analysis. Therefore, this paper suggests that the proposed system is accurate and makes it easy to determine foot arch type from the arch index (AI) together with midfoot pressure distribution ratio data instead of only the physical area of contact. Hence, such a computational-tool-based system can help clinicians in the assessment of foot structure and in cross-checking their diagnosis of flat foot from the midfoot pressure distribution. Keywords: arch index, computational tool, static foot pressure intensity image, foot pressure distribution, linear discriminant analysis
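The two-feature linear discriminant classification above can be sketched with a pooled-covariance Fisher discriminant over (AI, PR) pairs. This is a generic LDA sketch under invented, clearly separable sample values, not the study's subject data:

```python
def lda_train(X0, X1):
    """Two-class linear discriminant with pooled covariance for 2-D
    feature vectors, e.g. (arch index, midfoot power ratio).
    Returns (w, b); classify x as class 0 when w.x + b > 0."""
    def mean(X):
        return [sum(c) / len(X) for c in zip(*X)]
    m0, m1 = mean(X0), mean(X1)
    S = [[0.0, 0.0], [0.0, 0.0]]                 # pooled scatter
    for X, m in ((X0, m0), (X1, m1)):
        for x in X:
            for i in range(2):
                for j in range(2):
                    S[i][j] += (x[i] - m[i]) * (x[j] - m[j])
    n = len(X0) + len(X1) - 2
    S = [[S[i][j] / n for j in range(2)] for i in range(2)]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]  # invert 2x2 covariance
    Sinv = [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]
    diff = [m0[i] - m1[i] for i in range(2)]
    w = [Sinv[i][0] * diff[0] + Sinv[i][1] * diff[1] for i in range(2)]
    b = -sum(w[i] * (m0[i] + m1[i]) / 2 for i in range(2))
    return w, b

def lda_predict(w, b, x):
    return 0 if w[0] * x[0] + w[1] * x[1] + b > 0 else 1
```

Because the discriminant weights both features through the inverse pooled covariance, adding the pressure-distribution ratio alongside AI reshapes the decision boundary, which is the mechanism behind the zero-misclassification result reported.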
Procedia PDF Downloads 500
3442 Approach on Conceptual Design and Dimensional Synthesis of the Linear Delta Robot for Additive Manufacturing
Authors: Efrain Rodriguez, Cristhian Riano, Alberto Alvares
Abstract:
In recent years, robot manipulators with parallel architectures have been used in additive manufacturing processes (3D printing). These robots have advantages, such as speed and lightness, that make them suitable for improving the efficiency and productivity of these processes. Consequently, interest in the development of parallel robots for additive manufacturing applications has increased. This article deals with the conceptual design and dimensional synthesis of the linear delta robot for additive manufacturing. Firstly, a methodology based on structured processes for the development of products through the phases of informational design, conceptual design, and detailed design is adopted: a) In the informational design phase, the Mudge diagram and the QFD matrix are used to derive a set of technical requirements and to define the form, functions, and features of the robot. b) In the conceptual design phase, the functional modeling of the system through an IDEF0 diagram is performed, and the solution principles for the requirements are formulated using a morphological matrix. This phase includes the description of the mechanical, electro-electronic, and computational subsystems that constitute the general architecture of the robot. c) In the detailed design phase, a digital model of the robot is drawn in CAD software. A list of commercial and manufactured parts is compiled. Tolerances and adjustments are defined for some parts of the robot structure. The necessary manufacturing processes and tools are also listed, including milling, turning, and 3D printing. Secondly, a dimensional synthesis method applied to the design of the linear delta robot is presented. One of the most important key factors in the design of a parallel robot is the useful workspace, which strongly depends on the joint space, the dimensions of the mechanism bodies, and the possible interferences between these bodies. The objective function is based on the verification of the kinematic model for a prescribed cylindrical workspace, considering geometric constraints that could lead to singularities of the mechanism. The aim is to determine the minimum dimensional parameters of the mechanism bodies for the proposed workspace. A method based on genetic algorithms was used to solve this problem. The method uses a cloud of points with the cylindrical shape of the workspace and checks the kinematic model for each of the points within the cloud. The evolution of the population (point cloud) provides the optimal parameters for the design of the delta robot. The development process of the linear delta robot with optimal dimensions for additive manufacturing is presented. The dimensional synthesis made it possible to design the mechanism of the delta robot as a function of the prescribed workspace. Finally, the implementation of the robotic platform, developed on the basis of a linear delta robot, in an additive manufacturing application using the Fused Deposition Modeling (FDM) technique is presented. Keywords: additive manufacturing, delta parallel robot, dimensional synthesis, genetic algorithms
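The point-cloud feasibility check at the heart of the synthesis above can be sketched with a deliberately simplified reachability condition for a linear delta: a point is (conservatively) reachable when its horizontal distance to every vertical rail does not exceed the arm length, so a real carriage height exists on each rail. This ignores joint limits, effector offsets, and singularity constraints from the full kinematic model; the tower angles, rail radius, and cylinder dimensions are illustrative assumptions.

```python
import math

# Three vertical rails at 120-degree spacing on a circle of radius R.
TOWER_ANGLES = [math.radians(a) for a in (90, 210, 330)]

def reachable(p, R, L):
    """Simplified check: horizontal distance from point p to each rail
    axis must not exceed arm length L."""
    x, y, _z = p
    for a in TOWER_ANGLES:
        tx, ty = R * math.cos(a), R * math.sin(a)
        if math.hypot(x - tx, y - ty) > L:
            return False
    return True

def workspace_feasible(cloud, R, L):
    """The prescribed workspace is covered iff every cloud point is
    reachable -- the verification done per candidate in the GA."""
    return all(reachable(p, R, L) for p in cloud)

def min_arm_length(cloud, R, l_grid):
    """Smallest feasible arm length on a grid: a one-dimensional
    stand-in for the GA search over all body dimensions."""
    for L in l_grid:
        if workspace_feasible(cloud, R, L):
            return L
    return None

# Cylindrical workspace (radius 0.1 m, height 0.3 m) sampled as a cloud.
cloud = [(0.1 * math.cos(math.radians(a)), 0.1 * math.sin(math.radians(a)), z)
         for a in range(0, 360, 30) for z in (-0.15, 0.0, 0.15)]
```

In the full method a GA would evolve all dimensional parameters jointly against such a feasibility-based objective, rather than scanning a single length.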
Procedia PDF Downloads 190