Search results for: method of multiple scales
23145 Fish Scales as a Nonlethal Screening Tool for Assessing the Effects of Surface Water Contaminants in Cyprinus carpio
Authors: Shahid Mahboob, Hafiz Muhammad Ashraf, Salma Sultana, Tayyaba Sultana, Khalid Al-Ghanim, Fahid Al-Misned, Zubair Ahmedd
Abstract:
There is an increasing need for an effective tool to estimate the risks posed by the large number of pollutants released into the environment by human activities. Typical screening procedures are highly invasive or lethal to the fish. Recent studies show that fish scales respond biochemically to a range of contaminants, including toxic metals, organic compounds, and endocrine disruptors. The present study evaluated the effects of surface water contaminants on Cyprinus carpio in the Ravi River by comparing DNA extracted non-lethally from their scales with DNA extracted from the scales of fish collected from a controlled fish farm. A single, random sampling was conducted. Fish were broadly categorised into three weight categories (W1, W2 and W3). The experimental samples in the W1, W2 and W3 categories had a lower average DNA concentration (µg/µl) than the control samples. All control samples showed a single DNA band, whereas the experimental samples showed 1 to 2 bands in W1 fish, two bands in W2 fish, and fragmentation in the form of three bands in W3 fish. These bands reflect the effects of pollution on fish in the Ravi River. On the basis of the findings of this study, we propose that fish scales can be successfully employed as a new non-lethal tool for evaluating the effect of surface water contaminants.
Keywords: fish scales, Cyprinus carpio, heavy metals, non-invasive, DNA fragmentation
Procedia PDF Downloads 414
23144 Inference for Synthetic Control Methods with Multiple Treated Units
Authors: Ziyan Zhang
Abstract:
Although the Synthetic Control Method (SCM) is now widely applied, its most commonly used inference method, the placebo test, is often problematic, especially when the treatment is not uniquely assigned. This paper discusses the problems with the placebo test when there are multiple treated units. To improve the power of inference, I further propose an Andrews-type procedure, as it potentially resolves some drawbacks of the placebo test. Simulations show that the Andrews-type test is often valid and powerful compared with the placebo test.
Keywords: Synthetic Control Method, multiple treatments, Andrews' test, placebo test
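For context, the standard single-treated-unit placebo inference ranks the treated unit's post/pre-treatment RMSPE ratio against the same ratio computed for each donor (placebo) unit. The minimal Python sketch below illustrates that rank-based p-value with hypothetical inputs (the function name placebo_p_value and the example ratios are illustrative, not the paper's), and hints at why it has no direct analogue once several units are treated.

```python
import numpy as np

def placebo_p_value(treated_ratio, placebo_ratios):
    """One-sided placebo (permutation) p-value for a single treated unit:
    the share of units whose post/pre RMSPE ratio is at least as extreme."""
    all_ratios = np.append(np.asarray(placebo_ratios, float), treated_ratio)
    return float(np.mean(all_ratios >= treated_ratio))

# Hypothetical RMSPE ratios: one treated unit and 20 donor-pool placebos
rng = np.random.default_rng(0)
placebos = rng.lognormal(mean=0.0, sigma=0.5, size=20)
p = placebo_p_value(treated_ratio=3.1, placebo_ratios=placebos)
# With multiple treated units there is no single rank to report, which is one
# reason the placebo test breaks down and an Andrews-type procedure is attractive.
```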
Procedia PDF Downloads 164
23143 A Multigranular Linguistic ARAS Model in Group Decision Making
Authors: Wiem Daoud Ben Amor, Luis Martínez López, Hela Moalla Frikha
Abstract:
Most multi-criteria group decision making (MCGDM) problems dealing with qualitative criteria require consideration of a large body of expert information. It is common for experts to have different degrees of knowledge when assessing alternatives against the criteria, so it is logical that they use different evaluation scales to express their judgments, i.e., multigranular linguistic scales. In this context, we propose an extension of the classical additive ratio assessment (ARAS) method to the case of extended hierarchical linguistic terms for managing multigranular linguistic scales in uncertain contexts, where uncertainty is modeled by means of linguistic information. The proposed approach is called the extended hierarchical linguistic ARAS method (ARAS-ELH). Within the ARAS-ELH approach, the decision maker can inspect the results (the ranking of the alternatives) in a decomposed style, i.e., not only at one level of the hierarchy but also at the intermediate ones. The developed approach also allows a feedback transformation, i.e., the collective final results of all experts can be transformed to any level of the extended linguistic hierarchy that each expert has previously used. The ARAS-ELH technique therefore makes it easier for decision makers to understand the results. Finally, an MCGDM case study is given to illustrate the proposed approach.
Keywords: additive ratio assessment, extended hierarchical linguistic, multi-criteria group decision making problems, multi granular linguistic contexts
Procedia PDF Downloads 206
23142 New Two-Dimensional Hardy Type Inequalities on Time Scales via Steklov Operator
Authors: Wedad Albalawi
Abstract:
Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood and Polya were the first significant systematic treatment of the subject; that work presented fundamental ideas, results and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated for particular operators; in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation, and these were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Several inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by means of differential operators. The Hardy inequality has been one of the tools used to study integral solutions of differential equations, and dynamic inequalities of Hardy and Copson type have been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results involving Copson and Hardy inequalities on time scales have appeared, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers, and dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics, with many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special higher-dimensional time-scale versions of the Hardy and Copson inequalities. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that will be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs can be carried out by introducing restrictions on the operator in several cases and rely on the time-scale calculus, which unifies and extends many problems from the theories of differential and difference equations, together with the chain rule, some properties of multiple integrals on time scales, Fubini-type theorems, and the Hölder inequality.
Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator
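For orientation, the classical one-dimensional Hardy inequality referred to above can be stated in its textbook form (this is the standard result, not the new time-scale version obtained in the paper): for p > 1 and a non-negative measurable function f,

\[
\int_0^\infty \left(\frac{1}{x}\int_0^x f(t)\,dt\right)^{p} dx \;\le\; \left(\frac{p}{p-1}\right)^{p} \int_0^\infty f^{p}(x)\,dx,
\]

with the discrete counterpart \(\sum_{n\ge 1}\big(\tfrac{1}{n}\sum_{k=1}^{n}a_k\big)^{p} \le \big(\tfrac{p}{p-1}\big)^{p}\sum_{n\ge 1}a_n^{p}\) for non-negative sequences. Time-scale versions unify these two cases, since the delta derivative reduces to the ordinary derivative on \(\mathbb{T}=\mathbb{R}\) and to the forward difference on \(\mathbb{T}=\mathbb{Z}\).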
Procedia PDF Downloads 95
23141 Quantifying the Aspect of ‘Imagining’ in the Map of Dialogical Inquiry
Authors: Chua Si Wen Alicia, Marcus Goh Tian Xi, Eunice Gan Ghee Wu, Helen Bound, Lee Liang Ying, Albert Lee
Abstract:
In a world full of rapid changes, people often need a set of skills to help them navigate an ever-changing workscape. These skills, often known as “future-oriented skills,” include learning to learn, critical thinking, understanding multiple perspectives, and knowledge creation. Future-oriented skills are typically assumed to be domain-general, applicable to multiple domains, and can be cultivated through a learning approach called Dialogical Inquiry. Dialogical Inquiry is known for its benefits of making sense of multiple perspectives, encouraging critical thinking, and developing learners’ capability to learn. However, it currently exists as a qualitative tool, which makes it hard to track and compare learning processes over time. With these concerns, the present research aimed to develop and validate a quantitative tool for the Map of Dialogical Inquiry, focusing on the Imagining aspect of learning. The Imagining aspect has four dimensions: 1) speculative/look for alternatives, 2) risk taking/break rules, 3) create/design, and 4) vision/imagine. To do so, an exploratory literature review was conducted to better understand the dimensions of Imagining. This included deep-diving into the history of the creation of the Map of Dialogical Inquiry and a review of how “Imagining” has been conceptually defined in the fields of social psychology, education, and beyond. We then synthesised existing validated scales measuring the dimensions of Imagining and related concepts such as creativity, divergent thinking, regulatory focus, and instrumental risk. Thereafter, items were adapted from these scales to form the preliminary version of the Imagining Scale. For scale validation, 250 participants were recruited. A Confirmatory Factor Analysis (CFA) sought to establish the dimensionality of the Imagining Scale with an iterative item-removal procedure. Reliability and validity of the scale’s dimensions were assessed through Cronbach’s alpha, convergent validity, and discriminant validity. While the CFA could not validate the distinction between Imagining’s four dimensions, the scale achieved high reliability, with a Cronbach’s alpha of .96. In addition, the convergent validity of the Imagining Scale was established. The lack of strong discriminant validity may point to overlaps with other components of the Dialogical Map as a measure of learning. Thus, a holistic approach to forming the tool, encompassing all eight components, may be preferable.
Keywords: learning, education, imagining, pedagogy, dialogical teaching
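For reference, the reliability coefficient reported above is the standard Cronbach's alpha, computed over the k scale items as

\[
\alpha \;=\; \frac{k}{k-1}\left(1-\frac{\sum_{i=1}^{k}\sigma_{i}^{2}}{\sigma_{t}^{2}}\right),
\]

where \(\sigma_i^2\) is the variance of item i and \(\sigma_t^2\) is the variance of the total score. This is the textbook definition, given here for orientation rather than quoted from the abstract.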
Procedia PDF Downloads 92
23140 Sustainable Technologies for Decommissioning of Nuclear Facilities
Authors: Ahmed Stifi, Sascha Gentes
Abstract:
The German nuclear industry, while implementing German policy, believes that the journey towards the green field, namely the phase-out of nuclear energy, should be achieved through green techniques. The most important techniques required for the wide range of decommissioning activities are decontamination techniques, cutting techniques, radioactivity measurement techniques, remote control techniques, techniques for worker and environmental protection, and techniques for treating, preconditioning and conditioning nuclear waste. Many decontamination techniques are used for removing contamination from metal, concrete or other surfaces, such as the scales inside pipes. As the pipeline system is one of the important components of nuclear power plants, the decontamination of tubing is of particular significance. The development of energy sectors such as the oil, gas and nuclear sectors since the middle of the 20th century has expanded the pipeline industry, and research on the decontamination of tubing in each sector benefits the others. The extraction of natural products and materials through pipelines can result in scale formation. These scales can become radioactively contaminated through an accumulation process, especially in the petrochemical industry when oil and gas are extracted from underground reservoirs. The radioactivity measured in these scales can be significantly high and pose a great threat to people and the environment. At present, the decontamination process involves high-pressure water jets with or without abrasive material, and this technology produces a large amount of secondary waste. In order to overcome this, the research team within Karlsruhe Institute of Technology developed a new sustainable method to carry out the decontamination of tubing without producing any secondary waste. This method is based on a vibration technique which removes scales and does not require any auxiliary materials. The outcome of the research project proves that the vibration technique used for decontamination of tubing is environmentally friendly, in other words a sustainable technique.
Keywords: sustainable technologies, decontamination, pipeline, nuclear industry
Procedia PDF Downloads 303
23139 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer
Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi
Abstract:
Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL) of nominal 100 m depth, which can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and the characteristic length of the physical structure are of the same order of magnitude. Turbulence length scales are a measure of the average size of the energy-containing eddies and are widely estimated using two-point cross-correlation analysis, converting the temporal lag to a separation distance using Taylor's hypothesis that the convection velocity is equal to the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, are largely dependent on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales show significant regional variation, whereas length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between the turbulence length scales and the aerodynamic roughness length with those calculated from the autocorrelations and cross-correlations of field-measured velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, in contrast with the relationships derived from the similarity theory correlations in the ESDU models. However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.
Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales
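As a concrete illustration of the single-point estimation route described above (autocorrelation plus Taylor's frozen-turbulence hypothesis), the following minimal Python sketch computes a longitudinal integral length scale from a velocity record; the function name, the zero-crossing integration limit, and the inputs are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np

def integral_length_scale(u, fs, U_mean=None):
    """Longitudinal integral length scale from a single-point streamwise
    velocity record u [m/s] sampled at fs [Hz], via Taylor's hypothesis."""
    u = np.asarray(u, float)
    if U_mean is None:
        U_mean = u.mean()                    # convection velocity ~ mean velocity
    up = u - u.mean()                        # turbulent fluctuations
    acf = np.correlate(up, up, mode="full")[len(up) - 1:]
    acf = acf / acf[0]                       # normalised autocorrelation
    # integrate up to the first zero crossing (one common convention)
    cut = np.argmax(acf <= 0) if np.any(acf <= 0) else len(acf)
    T = np.trapz(acf[:cut], dx=1.0 / fs)     # integral time scale [s]
    return U_mean * T                        # L_ux = U * T_ux
```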
Procedia PDF Downloads 124
23138 Fault Tolerant Control System Using a Multiple Time Scale SMC Technique and a Geometric Approach
Authors: Ghodbane Azeddine, Saad Maarouf, Boland Jean-Francois, Thibeault Claude
Abstract:
This paper proposes a new design of an active fault-tolerant flight control system against abrupt actuator faults. The overall system combines a multiple time scale sliding mode controller for fault compensation with a geometric approach for fault detection and diagnosis. The proposed control system is able to accommodate several kinds of partial and total actuator failures by using the available healthy redundant actuators. The overall system first estimates the correct fault information using the geometric approach; based on that, a new reconfigurable control law is designed using the multiple time scale sliding mode technique to compensate on-line for the effect of such faults. This approach takes advantage of the fact that there is a significant difference between the time scales of aircraft states with slow dynamics and those with fast dynamics. The closed-loop stability of the overall system is proved using the Lyapunov technique. A case study of the nonlinear model of the F16 fighter, subject to a total loss of rudder control, confirms the effectiveness of the proposed approach.
Keywords: actuator faults, fault detection and diagnosis, fault tolerant flight control, sliding mode control, multiple time scale approximation, geometric approach for fault reconstruction, Lyapunov stability
Procedia PDF Downloads 370
23137 Variable Selection in a Data Envelopment Analysis Model by Multiple Proportions Comparison
Authors: Jirawan Jitthavech, Vichit Lorchirachoonkul
Abstract:
A statistical procedure using multiple comparisons tests for proportions is proposed for variable selection in a data envelopment analysis (DEA) model. The test statistic in the multiple comparisons is the proportion of efficient decision making units (DMUs) in a DEA model. Three methods of multiple comparisons tests for proportions are used in the proposed procedure of iteratively eliminating variables in a backward manner: multiple Z tests with Bonferroni correction, multiple tests in a 2xc crosstabulation, and the Marascuilo procedure. Two simulated populations, of moderately and lowly correlated variables, are used to compare the results of the statistical procedure under the three methods of multiple comparisons with the hypothesis testing of the efficiency contribution measure. The simulation results show that the proposed procedure using multiple Z tests for proportions with Bonferroni correction clearly outperforms the procedure using the remaining two methods of multiple comparisons as well as the hypothesis testing of the efficiency contribution measure.
Keywords: Bonferroni correction, efficient DMUs, Marascuilo procedure, Pastor et al. method, 2xc crosstabulation
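A minimal sketch of the first of the three comparison methods (pairwise two-proportion Z tests with a Bonferroni-adjusted significance level) is given below; the function name and the way the efficient-DMU counts are passed in are illustrative assumptions, and the sketch does not reproduce the full backward-elimination procedure.

```python
import numpy as np
from itertools import combinations
from scipy.stats import norm

def pairwise_prop_z_tests(efficient, totals, alpha=0.05):
    """Pairwise two-proportion Z tests with Bonferroni correction.
    efficient[i] = number of efficient DMUs under candidate model i,
    totals[i]    = total number of DMUs evaluated under model i."""
    k = len(efficient)
    m = k * (k - 1) // 2                      # number of pairwise comparisons
    out = []
    for i, j in combinations(range(k), 2):
        p1, p2 = efficient[i] / totals[i], efficient[j] / totals[j]
        pool = (efficient[i] + efficient[j]) / (totals[i] + totals[j])
        se = np.sqrt(pool * (1 - pool) * (1 / totals[i] + 1 / totals[j]))
        z = (p1 - p2) / se
        p_val = 2 * norm.sf(abs(z))
        out.append((i, j, z, p_val, p_val < alpha / m))  # Bonferroni threshold
    return out
```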
Procedia PDF Downloads 310
23136 Analysis of Scientific Attitude, Computer Anxiety, Educational Internet Use, Problematic Internet Use, and Academic Achievement of Middle School Students According to Demographic Variables
Authors: Mehmet Bekmezci, Ismail Celik, Ismail Sahin, Ahmet Kiray, A. Oguz Akturk
Abstract:
In this research, students’ scientific attitude, computer anxiety, educational use of the Internet, academic achievement, and problematic use of the Internet are analyzed based on different variables (gender, parents’ educational level, and daily access to the Internet). The research group involves 361 students from two middle schools located in the center of Konya. The “general survey method” is adopted in the research. In accordance with the purpose of the study, percentage, mean, standard deviation, independent samples t-test, and ANOVA (analysis of variance) are employed. A total of four scales, comprising 13 sub-dimensions, are administered, and the scores from these scales and their subscales are studied in terms of the above variables. Significant relations are found among students’ scientific attitude, computer anxiety, educational use of the Internet, problematic Internet use, and academic achievement with respect to gender, parental educational level, and daily access to the Internet.
Keywords: scientific attitude, educational use of the internet, computer anxiety, problematic use of the internet, academic achievement
Procedia PDF Downloads 366
23135 Formation of Mg-Silicate Scales and Inhibition of Their Scale Formation at Injection Wells in Geothermal Power Plant
Authors: Samuel Abebe Ebebo
Abstract:
Scale precipitation causes a major issue for geothermal power plants because it reduces the production rate of geothermal energy. The different chemical and physical conditions at each geothermal power plant can cause scale to precipitate under a particular set of fluid-rock interactions. Depending on the mineral, scale can occur in the production well, steam separators, heat exchangers, reinjection wells, and everywhere in between. The scale consists mainly of smectite and trace amounts of chlorite, magnetite, quartz, hematite, dolomite, aragonite, and amorphous silica. The smectite scale is one of the most difficult scales at injection wells in geothermal power plants; X-ray diffraction and chemical composition identify this smectite as stevensite. The characteristics and amount of scale in each injection well line differ depending on the fluid chemistry, and the smectite scale has been widely distributed in pipelines and surface plants. Mineral-water equilibrium calculations showed that the main factors controlling the saturation indices of smectite are increased pH and dissolved Mg concentration, which cause it to precipitate on equipment surfaces. This study aims to characterize the scales and geothermal fluids collected from the Onuma geothermal power plant in Akita Prefecture, Japan. Field tests were conducted from October 30 to November 3, 2021, at Onuma to determine pH control methods for preventing magnesium silicate scaling, and, as an example, the formation of magnesium silicate hydrates (M-S-H) with an MgO to SiO2 ratio of 1.0 and a pH value of 10 was studied at 25 °C for one day. As a result, M-S-H scale formation could be suppressed, and stevensite formation could also be suppressed, when the pH of the fluid was decreased to less than 8.1, 7.4, and 8 (at 97 °C) in the fluid from O-3Rb and O-6Rb, O-10Rg, and O-12R, respectively. In this context, the scales and fluids collected from injection wells at a geothermal power plant in Japan were analyzed and characterized to understand the formation conditions of Mg-silicate scales, together with on-site synthesis experiments. From the results of the characterizations and on-site synthesis experiments, a method for inhibiting their scale formation is discussed based on geochemical modeling.
Keywords: magnesium silicate, scaling, inhibitor, geothermal power plant
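For readers unfamiliar with the terminology, the saturation index used in such mineral-water equilibrium calculations is the standard geochemical quantity

\[
\mathrm{SI} \;=\; \log_{10}\!\left(\frac{\mathrm{IAP}}{K_{sp}}\right),
\]

where IAP is the ion activity product of the dissolved species and \(K_{sp}\) the solubility product of the mineral; SI > 0 means the fluid is supersaturated and the scale can precipitate, which is why lowering the pH (and hence the saturation indices of stevensite and M-S-H) suppresses scaling. This is the general definition, not a formula quoted from the abstract.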
Procedia PDF Downloads 64
23134 Development of a Rating Scale for Elementary EFL Writing
Authors: Mohammed S. Assiri
Abstract:
In EFL programs, rating scales used in writing assessment are often constructed by intuition. Intuition-based scales tend to provide inaccurate and divisive ratings of learners’ writing performance. Hence, following an empirical approach, this study attempted to develop a rating scale for elementary-level writing at an EFL program in Saudi Arabia. Towards this goal, 98 students’ essays were scored and then coded using a comprehensive taxonomy of writing constructs and their measures. An automatic linear modeling was run to find out which measures would best predict essay scores. A nonparametric ANOVA, the Kruskal-Wallis test, was then used to determine which measures could best differentiate among scoring levels. Findings indicated that certain measures could serve as good predictors of essay scores, as good differentiators among scoring levels, or both. The main conclusion was that a rating scale can be empirically developed using predictive and discriminative statistical tests.
Keywords: analytic scoring, rating scales, writing assessment, writing constructs, writing performance
Procedia PDF Downloads 463
23133 An Investigation of the Effectiveness of Emotion Regulation Training on the Reduction of Cognitive-Emotion Regulation Problems in Patients with Multiple Sclerosis
Authors: Mahboobeh Sadeghi, Zahra Izadi Khah, Mansour Hakim Javadi, Masoud Gholamali Lavasani
Abstract:
Background: Since there is a relation between psychological and physiological factors, the aim of this study was to examine the effect of emotion regulation training on cognitive emotion regulation problems in patients with multiple sclerosis (MS). Method: In a randomized clinical trial, thirty patients diagnosed with multiple sclerosis who were referred to the state welfare organization were selected. The sample was randomized into either an experimental group or a non-intervention control group. The subjects participated in 75-minute treatment sessions held three times a week for 4 weeks (12 sessions). All 30 individuals were administered the Cognitive Emotion Regulation Questionnaire (CERQ), which participants completed at pretest and post-test. Data obtained from the questionnaire were analyzed using MANCOVA. Results: Emotion regulation training significantly decreased the cognitive emotion regulation problems of patients with multiple sclerosis (p < 0.001). Conclusions: Emotion regulation training can be used for the treatment of cognitive emotion regulation problems in multiple sclerosis.
Keywords: Multiple Sclerosis, cognitive-emotion regulation, emotion regulation, MS
Procedia PDF Downloads 459
23132 Deproteinization of Moroccan Sardine (Sardina pilchardus) Scales: A Pilot-Scale Study
Authors: F. Bellali, M. Kharroubi, Y. Rady, N. Bourhim
Abstract:
In Morocco, the fish processing industry is an important source of income and generates a large amount of by-products, including skins, bones, heads, guts, and scales. These underutilized resources, particularly scales, contain a large amount of protein and calcium. Sardina pilchardus scales resulting from the transformation operations have the potential to be used as raw material for collagen production. Taking into account this strong expectation of the regional fish industry, upgrading of sardine scales is well justified. In addition, political and societal demands for sustainability and environment-friendly industrial production systems, coupled with the depletion of fish resources, drive this trend forward. Therefore, fish scales used as a potential source of collagen have a wide range of applications in the food, cosmetic, and biomedical industries. The main aim of this study is to isolate and characterize acid-solubilized collagen from the scales of sardine, Sardina pilchardus. An experimental design methodology was adopted to optimize the collagen extraction process. The first stage of this work was to investigate the optimal conditions for sardine scale deproteinization using response surface methodology (RSM). The second part focuses on demineralization with HCl solution or EDTA, and the last part establishes the optimum conditions for the isolation of collagen from fish scales by solvent extraction. The advancement from laboratory scale to pilot scale is a critical stage in technological development. In this study, the optimal deproteinization conditions validated at laboratory scale were employed in the pilot-scale procedure. The deproteinization of fish scales was then demonstrated on a pilot scale (2 kg scales, 20 L NaOH), resulting in a protein content of 0.2 mg/ml and a hydroxyproline content of 2.11 mg/l. These results indicate that the pilot scale showed performance similar to that of the laboratory scale.
Keywords: deproteinization, pilot scale, scale, sardine pilchardus
Procedia PDF Downloads 446
23131 Multiscale Entropy Analysis of Electroencephalogram (EEG) of Alcoholic and Control Subjects
Authors: Lal Hussain, Wajid Aziz, Imtiaz Ahmed Awan, Sharjeel Saeed
Abstract:
Multiscale entropy (MSE) analysis is a useful technique recently developed to quantify the dynamics of physiological signals at different time scales. This study investigates electroencephalogram (EEG) signals to analyze the background activity of alcoholic and control subjects by inspecting the coarse-grained sequences formed at different time scales. EEG recordings of alcoholic and control subjects were taken from the publicly available machine learning repository of the University of California, Irvine (UCI), acquired using 64 electrodes. The MSE analysis was performed on the EEG data acquired from all electrodes of the alcoholic and control subjects. The Mann-Whitney rank test was used to find significant differences between the groups, and results were considered statistically significant for p-values < 0.05. The area under the receiver operating characteristic curve was computed to quantify the degree of separation between the groups. The mean ranks of MSE values at all time scales and for all electrodes were higher for control subjects than for alcoholic subjects; higher mean ranks represent higher complexity and vice versa. The findings indicate that EEG signals acquired through electrodes C3, C4, F3, F7, F8, O1, O2, P3 and T7 showed significant differences between alcoholic and control subjects at time scales 1 to 5. Moreover, all electrodes exhibited significance at some time scales. The highest accuracy and separation were obtained at the central region (C3 and C4) and at electrodes P3, O1, F3, F7, F8 and T8, while other electrodes such as Fp1, Fp2, P4 and F4 showed no significant results.
Keywords: electroencephalogram (EEG), multiscale sample entropy (MSE), Mann-Whitney test (MMT), receiver operating characteristic curve (ROC), complexity analysis
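A minimal Python sketch of the coarse-graining plus sample-entropy computation behind MSE is shown below, following the usual Costa-style procedure with embedding dimension m = 2 and tolerance r = 0.15 times the standard deviation of the original series; the parameter choices and function names are illustrative assumptions, not taken from the study.

```python
import numpy as np

def _matches(x, mm, num, r):
    """Count template pairs of length mm (first num templates) within Chebyshev distance r."""
    tpl = np.array([x[i:i + mm] for i in range(num)])
    c = 0
    for i in range(num - 1):
        d = np.max(np.abs(tpl[i + 1:] - tpl[i]), axis=1)
        c += int(np.sum(d <= r))
    return c

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, float)
    if r is None:
        r = 0.15 * np.std(x)
    num = len(x) - m                     # same number of templates for lengths m and m+1
    B = _matches(x, m, num, r)
    A = _matches(x, m + 1, num, r)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6), m=2, r=None):
    x = np.asarray(x, float)
    if r is None:
        r = 0.15 * np.std(x)             # tolerance fixed from the original series
    mse = []
    for s in scales:
        n = len(x) // s
        cg = x[:n * s].reshape(n, s).mean(axis=1)   # coarse-grained series at scale s
        mse.append(sample_entropy(cg, m=m, r=r))
    return mse
```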
Procedia PDF Downloads 376
23130 State Rescaling of the Urban Development in Hong Kong after the Reunification: A Case Study of the Planning Process of the Hong Kong Section of the Guangzhou-Shenzhen-Hong Kong Express Rail Link
Authors: Zhihua Xu
Abstract:
In the era of globalization, the urban question is increasingly being posed in the form of a scale question. Scale theory provides a new perspective for analyzing various dynamics and their influences on urban development. After the reunification, how the government of the Hong Kong Special Administrative Region (SAR) interacts with actors at various scales and carries out state rescaling are the keys to exploring urban development and governance in Hong Kong. This paper examines the entire planning process of the Hong Kong Section of the Guangzhou-Shenzhen-Hong Kong Express Rail Link, from project conception and design to consultation and funding application, to identify the actors at different scales involved in the process and to analyze the modes and consequences of their interaction. This study shows that, after the reunification, the Hong Kong SAR Government took the initiative to scale up in order to expand its hinterland. Intergovernmental institutional cooperation is an important mode of state rescaling for the Hong Kong SAR government. Meanwhile, the gradually growing civil society plays an important role in the rescaling of urban development: local actors use scalar politics to construct discourses and take actions at multiple scales to challenge the government’s proposal and trigger a discussion on the project throughout Hong Kong society. The case study of Hong Kong can deepen the understanding of state rescaling of territorial organizations in the context of institutional transformation and enrich the theoretical connotation of state rescaling. It also helps the Mainland government to better understand the case of Hong Kong and formulate appropriate policies.
Keywords: Hong Kong, state rescaling, scalar politics, Hong Kong section of the Guangzhou-Shenzhen-Hong Kong express rail link, urban governance
Procedia PDF Downloads 219
23129 Optimization of Extraction Conditions and Characteristics of Scale Collagen from Sardine: Sardina pilchardus
Authors: F. Bellali, M. Kharroubi, M. Loutfi, N.Bourhim
Abstract:
In Morocco, the fish processing industry is an important source of income and generates a large amount of by-products, including skins, bones, heads, guts and scales. These underutilized resources, particularly scales, contain a large amount of protein and calcium. Scales from Sardina pilchardus resulting from the transformation operations have the potential to be used as raw material for collagen production. Taking into account this strong expectation of the regional fish industry, upgrading of sardine scales is well justified. In addition, political and societal demands for sustainability and environment-friendly industrial production systems, coupled with the depletion of fish resources, drive this trend forward. Therefore, fish scales used as a potential source of collagen have a wide range of applications in the food, cosmetic and biomedical industries. The main aim of this study is to isolate and characterize acid-solubilized collagen from the scales of sardine, Sardina pilchardus. An experimental design methodology was adopted to optimize the collagen extraction process. The first stage of this work is to investigate the optimal conditions for sardine scale deproteinization using response surface methodology (RSM). The second part focuses on demineralization with HCl solution or EDTA, and the last part establishes the optimum conditions for the isolation of collagen from fish scales by solvent extraction. The basic principle of RSM is to determine model equations that describe the interrelations between the independent and dependent variables.
Keywords: Sardina pilchardus, scales, valorization, collagen extraction, response surface methodology
Procedia PDF Downloads 416
23128 Vendor Selection and Supply Quotas Determination by Using Revised Weighting Method and Multi-Objective Programming Methods
Authors: Tunjo Perič, Marin Fatović
Abstract:
In this paper, a new methodology for vendor selection and supply quota determination (VSSQD) is proposed. The VSSQD problem is solved by a model that combines the revised weighting method for determining the objective function coefficients with a multiple objective linear programming (MOLP) method based on cooperative game theory. The criteria used for VSSQD are: (1) purchase costs and (2) the quality of the products supplied by individual vendors. The proposed methodology is tested on the example of flour purchasing for a bakery with two decision makers.
Keywords: cooperative game theory, multiple objective linear programming, revised weighting method, vendor selection
Procedia PDF Downloads 358
23127 Multi-Criteria Decision Approach to Performance Measurement Techniques Data Envelopment Analysis: Case Study of Kerman City’s Parks
Authors: Ali A. Abdollahi
Abstract:
During the last several decades, scientists have consistently applied multiple criteria decision-making methods when making decisions about multi-faceted, complicated subjects. In making such decisions, and in order to achieve more accurate evaluations, they have regularly used a variety of criteria instead of a single optimum evaluation criterion. The approach presented here utilizes both ‘quantity’ and ‘quality’ to assess the performance of the multiple-criteria methods. Applying data envelopment analysis (DEA), weighted aggregated sum product assessment (WASPAS), the weighted sum approach (WSA), the analytic network process (ANP), and the Charnes, Cooper, Rhodes (CCR) method, we have analyzed thirteen parks in Kerman city. The results indicate that the rankings produced by WASPAS and WSA are compatible with each other but deviate considerably from DEA, and that the results of the CCR technique do not match those of the DEA technique. Our study indicates that the ANP method, with an average rate of 1.51, ranks closest to the DEA method, which has an average rate of 1.49.
Keywords: multiple criteria decision making, Data envelopment analysis (DEA), Charnes Cooper Rhodes (CCR), Weighted Sum Approach (WSA)
Procedia PDF Downloads 217
23126 Performance Comparison of Joint Diagonalization Structure (JDS) Method and Wideband MUSIC Method
Authors: Sandeep Santosh, O. P. Sahu
Abstract:
We simulate an efficient localization algorithm for multiple wideband and nonstationary sources that exploits both the non-stationarity of the signals and the array geometry. The algorithm is based on the joint diagonalization structure (JDS) of a set of short-time power spectrum matrices at different time instants in each frequency bin. JDS can be used for quick and accurate localization of multiple non-stationary sources. The JDS algorithm is a one-stage process, i.e., it directly searches for the directions of arrival (DOAs) over the continuous location parameter space, and it requires that the number of sensors is not less than the number of sources. The simulation results show that the JDS method can localize two sources when their separation is not less than 7 degrees, whereas wideband MUSIC is only able to localize two sources separated by at least 18 degrees.
Keywords: joint diagonalization structure (JDS), wideband direction of arrival (DOA), wideband MUSIC
Procedia PDF Downloads 468
23125 Nonstationary Increments and Causality in the Aluminum Market
Authors: Andrew Clark
Abstract:
McCauley, Bassler, and Gunaratne show that the integrated I(d) processes used in economics and finance do not necessarily produce stationary increments, which are required to determine causality in both the short term and the long term. This paper follows their lead and shows that I(d) aluminum cash and futures log prices at daily and weekly intervals do not have stationary increments, which means prior causality studies using I(d) processes need to be re-examined. Wavelets based on undifferenced cash and futures log prices do have stationary increments and are used along with transfer entropy (rather than cointegration) to measure causality. Wavelets exhibit causality at most daily time scales out to 1 year, and at weekly time scales out to 1 year and beyond. To determine stationarity, locally stationary wavelets are used; these have the benefit, compared with other tests for stationarity, of using multiple hypothesis tests to determine stationarity. As informational flows exist between cash and futures at daily and weekly intervals, the aluminum market is efficient. Therefore, producers and consumers of aluminum who hedge need not be greatly concerned about underestimated hedge ratios. Questions about arbitrage, given this efficiency, are addressed in the paper.
Keywords: transfer entropy, nonstationary increments, wavelets, localized stationary wavelets
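To make the causality measure concrete, a minimal binned transfer-entropy estimator (history length one, in bits) is sketched below; the binning scheme, lag, and function name are illustrative assumptions and do not reproduce the wavelet-based pipeline of the paper.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=8, lag=1):
    """Binned estimate of T(X -> Y): extra information x_t carries about
    y_{t+lag} beyond what y_t already provides (history length 1)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xt, yt, yn = x[:-lag], y[:-lag], y[lag:]
    def disc(v):
        edges = np.histogram_bin_edges(v, bins=bins)
        return np.digitize(v, edges[1:-1])
    a, b, c = disc(yn), disc(yt), disc(xt)       # future of Y, past of Y, past of X
    n = len(a)
    n_abc, n_ab = Counter(zip(a, b, c)), Counter(zip(a, b))
    n_bc, n_b = Counter(zip(b, c)), Counter(b)
    te = 0.0
    for (ai, bi, ci), cnt in n_abc.items():
        p_abc = cnt / n
        p_a_bc = cnt / n_bc[(bi, ci)]            # p(y_future | y_past, x_past)
        p_a_b = n_ab[(ai, bi)] / n_b[bi]         # p(y_future | y_past)
        te += p_abc * np.log2(p_a_bc / p_a_b)
    return te
```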
Procedia PDF Downloads 202
23124 Coherent Ku-Band Radar for Monitoring Ocean Waves
Authors: Richard Mitchell, Robert Mitchell, Thai Duong, Kyungbin Bae, Daegon Kim, Youngsub Lee, Inho Kim, Inho Park, Hyungseok Lee
Abstract:
Although X-band radar is commonly used to measure the properties of ocean waves, the use of a higher frequency has several advantages, such as an increased backscatter coefficient, better Doppler sensitivity, lower power, and a smaller package. A low-power Ku-band radar system was developed to demonstrate these advantages. It is fully coherent, and it interleaves short and long pulses to achieve a transmit duty ratio of 25%, which makes the best use of solid-state amplifiers. The range scales are 2 km, 4 km, and 8 km; the minimum range is 100 m, 200 m, and 400 m, and the range resolution is 4 m, 8 m, and 16 m for the three range scales, respectively. Measurements of the significant wave height, wavelength, wave period, and wave direction have been made using traditional 3D-FFT methods. Radar and ultrasonic sensor results collected over an extended period of time at a coastal site in South Korea are presented.
Keywords: measurement of ocean wave parameters, Ku-band radar, coherent radar, compact radar
Procedia PDF Downloads 169
23123 Analytic Hierarchy Process
Authors: Hadia Rafi
Abstract:
Making any decision in any work, task, or project involves many factors that need to be considered. The analytic hierarchy process (AHP) is based on the judgments of experts to derive the required results; the technique measures intangibles, and then, with the help of judgment and software analysis, pairwise comparisons are made which show how much one element leads another. AHP also addresses how an inconsistent judgment should be made consistent and how the judgment should be improved when possible. The priority scales are obtained by multiplying local priorities by the priority of their parent node and then adding them.
Keywords: AHP, priority scales, parent node, software analysis
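A minimal sketch of the standard eigenvector-based priority derivation (Saaty's approach) is given below; the comparison matrix is a made-up example on the 1-9 scale, and the random-index table and the 0.10 consistency threshold are the commonly quoted values, not details taken from this abstract.

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_priorities(A):
    """Priority vector and consistency ratio from a pairwise comparison matrix."""
    A = np.asarray(A, float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                  # local priority scale
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1) if n > 1 else 0.0   # consistency index
    cr = ci / RI[n] if n > 2 else 0.0                # consistency ratio (want < 0.10)
    return w, cr

# Example: three criteria compared on Saaty's 1-9 scale
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w, cr = ahp_priorities(A)
# Global priorities follow by multiplying each local priority by the priority of
# its parent node and summing over the hierarchy, as described in the abstract.
```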
Procedia PDF Downloads 406
23122 A Golay Pair Based Synchronization Algorithm for Distributed Multiple-Input Multiple-Output System
Authors: Weizhi Zhong, Xiaoyi Lu, Lei Xu
Abstract:
In order to solve the problem of inaccurate synchronization for distributed multiple-input multiple-output (MIMO) systems in multipath environments, a Golay pair aided timing synchronization method is proposed in this paper. A new synchronization training sequence based on a Golay pair is designed. By utilizing the aperiodic autocorrelation complementary property of the new training sequence, the fine timing point is obtained at the receiver. Simulation results show that, compared with traditional timing synchronization approaches, the proposed algorithm provides high synchronization accuracy, especially under multipath conditions.
Keywords: distributed MIMO system, golay pair, multipath, synchronization
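The complementary autocorrelation property that such a timing metric relies on can be checked in a few lines. The sketch below builds a binary Golay pair by the standard concatenation recursion and verifies that the two aperiodic autocorrelations cancel at every nonzero lag; the construction shown is the textbook one, not necessarily the sequence design used in the paper.

```python
import numpy as np

def golay_pair(m):
    """Binary Golay complementary pair of length 2**m via (a, b) -> (a|b, a|-b)."""
    a, b = np.array([1]), np.array([1])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def aperiodic_acf(x):
    return np.correlate(x, x, mode="full")     # all lags, zero lag at index len(x)-1

a, b = golay_pair(3)                           # length-8 pair
s = aperiodic_acf(a) + aperiodic_acf(b)
N = len(a)
assert s[N - 1] == 2 * N                       # peak of 2N at zero lag
assert np.all(np.delete(s, N - 1) == 0)        # perfect sidelobe cancellation
```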
Procedia PDF Downloads 247
23121 Vibration Imaging Method for Vibrating Objects with Translation
Authors: Kohei Shimasaki, Tomoaki Okamura, Idaku Ishii
Abstract:
We propose a vibration imaging method for high frame rate (HFR)-video-based localization of vibrating objects undergoing large translations. When the ratio of a target's translation speed to its vibration frequency is large, obtaining its frequency response in image intensities becomes difficult because one wave or none is observable at the same pixel. Our method can precisely localize moving objects with vibration by virtually translating multiple image sequences for pixel-level short-time Fourier transform, so that multiple waves are observed at the same pixel. The effectiveness of the proposed method is demonstrated by analyzing several HFR videos of flying insects in real scenarios.
Keywords: HFR video analysis, pixel-level vibration source localization, short-time Fourier transform, virtual translation
Procedia PDF Downloads 108
23120 Dynamic Stability of Axially Moving Viscoelastic Plates under Nonuniform In-Plane Edge Excitations
Authors: T. H. Young, S. J. Huang, Y. S. Chiu
Abstract:
This paper investigates the parametric stability of an axially moving web subjected to nonuniform in-plane edge excitations on two opposite, simply supported edges. The web is modeled as a viscoelastic plate whose constitutive relation obeys the Kelvin-Voigt model, and the in-plane edge excitations are expressed as the sum of a static tension and a periodic perturbation. Due to the in-plane edge excitations, the moving plate may exhibit parametric instability under certain conditions. First, the in-plane stresses of the plate due to the nonuniform edge excitations are determined by solving the in-plane forced vibration problem. Then, the dependence on the spatial coordinates in the equation of transverse motion is eliminated by the generalized Galerkin method, which results in a set of discretized system equations in time. Finally, the method of multiple scales is utilized to solve this set of system equations analytically, provided that the periodic perturbation of the in-plane edge excitations is much smaller than the static tension of the plate, from which the stability boundaries of the moving plate are obtained. Numerical results reveal that only combination resonances of the summed type appear under the in-plane edge excitations considered in this work.
Keywords: axially moving viscoelastic plate, in-plane periodic excitation, nonuniformly distributed edge tension, dynamic stability
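For readers unfamiliar with the ingredients named above, the Kelvin-Voigt constitutive relation and the assumed form of the edge loading can be written in their standard textbook forms as

\[
\sigma(t) = E\,\varepsilon(t) + \eta\,\dot{\varepsilon}(t), \qquad N(x,t) = N_0(x) + N_1(x)\cos\Omega t,
\]

where E is the elastic modulus, \(\eta\) the viscous damping coefficient, \(N_0\) the static edge tension, and \(N_1\cos\Omega t\) the small periodic perturbation; a summed-type combination resonance is the parametric instability that occurs when the excitation frequency is close to the sum of two natural frequencies, \(\Omega \approx \omega_m + \omega_n\). These expressions are implied by the abstract rather than quoted from the paper.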
Procedia PDF Downloads 322
23119 A Comparative Study of Cognitive Functions in Relapsing-Remitting Multiple Sclerosis Patients, Secondary-Progressive Multiple Sclerosis Patients and Normal People
Authors: Alireza Pirkhaefi
Abstract:
Background: Multiple sclerosis (MS) is one of the most common diseases of the central nervous system (brain and spinal cord). Given the importance of cognitive disorders in patients with multiple sclerosis, the present study compared cognitive functions (working memory, attention and concentration, and visual-spatial perception) in patients with relapsing-remitting multiple sclerosis (RRMS) and secondary progressive multiple sclerosis (SPMS). Method: The present study was performed as a retrospective study with an ex post facto design. The sample consisted of 60 patients with multiple sclerosis (30 with relapsing-remitting and 30 with secondary progressive MS), selected by convenience sampling from the Tehran Community of Supported MS Patients; 30 normal persons were also selected as a comparison group. The Montreal Cognitive Assessment (MoCA) was used to assess cognitive functions, and data were analyzed using multivariate analysis of variance. Results: The results showed significant differences in cognitive functioning among patients with RRMS, patients with SPMS, and normal individuals. There were no significant differences in working memory between the RRMS and SPMS groups, while both patient groups differed significantly from normal individuals; significant differences were also found among the three groups in attention and concentration and in visual-spatial perception. Conclusions: The results showed that the cognitive functions of RRMS and SPMS patients differ, such that RRMS patients perform better than SPMS patients. These results have a critical role in improving cognitive functions, reducing the factors causing disability due to cognitive impairment, and especially promoting the overall health of society.
Keywords: multiple sclerosis, cognitive function, secondary-progressive, normal subjects
Procedia PDF Downloads 239
23118 Correlation between Potential Intelligence and Performance Intelligence: An Explanatory Study in the Perspective of Multiple Intelligence Theory Using Dermatoglyphics and Culture Approaches
Authors: Efnie Indrianie
Abstract:
Potential Intelligence constitutes one essential factor in every individual. This intelligence can serve as a foundation for the development of Performance Intelligence if it is supported by the surrounding environment. Fingerprint analysis is a method for recognizing this Potential Intelligence. The method is grounded in the pattern and number of fingerprint ridge lines, which are assumed to be symmetrical with the number of nerves in our brain, where different areas each have their own function. These brain functions are then mapped onto intelligence components in accordance with the Multiple Intelligences theory. This research tested the correlation between Potential Intelligence and the components of its Performance Intelligence. Statistical tests using Pearson correlation showed that five components of Potential Intelligence correlated with Performance Intelligence: Logic-Math/Logic, Linguistic, Music, Kinesthetic, and Intrapersonal. This research also indicated that cultural factors play a big role in shaping intelligence.
Keywords: potential intelligence, performance intelligence, multiple intelligences, fingerprint, environment, brain
Procedia PDF Downloads 535
23117 A Variant of Newton's Method with Free Second-Order Derivative
Authors: Young Hee Geum
Abstract:
In this paper, we present an iterative method and determine the control parameters for which it converges cubically when solving nonlinear equations. In addition, we derive the asymptotic error constant.
Keywords: asymptotic error constant, iterative method, multiple root, root-finding, order of convergence
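As one well-known example of the class of cubically convergent, second-derivative-free Newton variants that such papers study, the arithmetic-mean (Weerakoon-Fernando type) iteration is sketched below in Python; this is an illustrative member of the same family, not necessarily the scheme analyzed in the paper, and the function names and example are hypothetical.

```python
def newton_variant(f, df, x0, tol=1e-12, max_iter=50):
    """Arithmetic-mean Newton iteration: third-order convergent for simple roots,
    using two first-derivative evaluations instead of the second derivative."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        z = x - fx / df(x)                        # ordinary Newton predictor
        x_new = x - 2.0 * fx / (df(x) + df(z))    # average the slopes at x and z
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: cube root of 2 as the root of t**3 - 2
root = newton_variant(lambda t: t**3 - 2, lambda t: 3 * t**2, x0=1.5)
```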
Procedia PDF Downloads 293
23116 Digital Geomatics Trends for Production and Updating Topographic Map by Using Digital Generalization Procedures
Authors: O. Z. Jasim
Abstract:
An accurate digital map must satisfy users on two main requirements: first, the map must be visually readable, and second, all the map elements must be well represented. These two requirements hold especially true for map generalization, which aims at simplifying the representation of cartographic data. Different map scales are very important for decisions based on maps of different scales, such as master plans and all the infrastructure maps in civil engineering. The cartographer cannot simply project the data onto a piece of paper; he has to worry about its readability. The map layout of any geodatabase is very important, as this layout helps the user read, analyze, or extract information from the map. Many principles and guidelines for generalization can be found in the cartographic literature. A manual reduction method for generalization depends on the experience of the map maker and therefore produces inconsistent results. Digital generalization, rooted in conventional cartography, has become an increasing concern in both the Geographic Information System (GIS) and mapping fields. This project is intended to review the state of the art of the new technology, to help understand the needs and plans for the implementation of digital generalization capability, and to increase knowledge of the production of topographic maps.
Keywords: cartography, digital generalization, mapping, GIS
Procedia PDF Downloads 304