Search results for: divergent elliptic operator
150 Forecasting Equity Premium Out-of-Sample with Sophisticated Regression Training Techniques
Authors: Jonathan Iworiso
Abstract:
Forecasting the equity premium out-of-sample is a major concern for researchers in finance and emerging markets. The quest for a superior model that can forecast the equity premium with significant economic gains has produced several controversies among scholars over the choice of variables and suitable techniques. This research focuses mainly on the application of Regression Training (RT) techniques to forecast the monthly equity premium out-of-sample recursively with an expanding window method. A broad category of sophisticated regression models involving model complexity was employed. The RT models, including Ridge, Forward-Backward (FOBA) Ridge, Least Absolute Shrinkage and Selection Operator (LASSO), Relaxed LASSO, Elastic Net, and Least Angle Regression, were trained and used to forecast the equity premium out-of-sample. In this study, the empirical investigation of the RT models demonstrates significant evidence of equity premium predictability, both statistically and economically, relative to the benchmark historical average, delivering significant utility gains. The models provide meaningful economic information on mean-variance portfolio investment for investors who are timing the market to earn future gains at minimal risk. Thus, the forecasting models appear to benefit an investor who optimally reallocates a monthly portfolio between equities and risk-free treasury bills using equity premium forecasts at minimal risk.
Keywords: regression training, out-of-sample forecasts, expanding window, statistical predictability, economic significance, utility gains
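The recursive expanding-window scheme can be sketched in a few lines: at each month, the model is re-fit on all data observed so far and a one-step-ahead forecast is issued. The sketch below uses a one-predictor ridge estimator and invented data (not the paper's dataset, models, or tuning); it only illustrates the protocol and the historical-average benchmark.

```python
# Recursive expanding-window out-of-sample forecasting with a
# one-predictor ridge estimator vs. the historical-average benchmark.
# Data and penalty are invented for illustration only.

def ridge_coef(x, y, lam):
    """Closed-form ridge slope for a single demeaned predictor."""
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    return sxy / (sxx + lam)

def expanding_window_forecasts(x, y, start, lam=0.1):
    """At each month t >= start, fit on months [0, t) and forecast y[t]."""
    model_fc, bench_fc = [], []
    for t in range(start, len(y)):
        xt, yt = x[:t], y[:t]
        xbar, ybar = sum(xt) / t, sum(yt) / t
        b = ridge_coef([xi - xbar for xi in xt],
                       [yi - ybar for yi in yt], lam)
        model_fc.append(ybar + b * (x[t] - xbar))  # shrinkage forecast
        bench_fc.append(ybar)                      # historical average
    return model_fc, bench_fc

# toy monthly series with a weak linear link y ~ 0.5 * x
x = [0.1, -0.2, 0.3, 0.0, 0.2, -0.1, 0.4, -0.3, 0.1, 0.2, -0.2, 0.3]
y = [0.06, -0.08, 0.16, 0.01, 0.09, -0.04, 0.21, -0.14, 0.06, 0.11, -0.09, 0.14]

model_fc, bench_fc = expanding_window_forecasts(x, y, start=6)
mse = lambda fc: sum((f - a) ** 2 for f, a in zip(fc, y[6:])) / len(fc)
print(mse(model_fc), mse(bench_fc))  # shrinkage forecast wins on this data
```

Statistical out-of-sample predictability is then assessed by comparing the two forecast error series, exactly as the abstract does against the historical average.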
Procedia PDF Downloads 108
149 A Two-Stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-Referenced Data
Authors: Georgiana Onicescu, Yuqian Shen
Abstract:
Due to the complex nature of geo-referenced data, multicollinearity of the risk factors in public health spatial studies is a commonly encountered issue, which leads to low parameter estimation accuracy because it inflates the variance in the regression analysis. To address this issue, we proposed a two-stage variable selection method that extends the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, investigating the impact of risk factors on health outcomes. Specifically, in stage I, we performed variable selection using Bayesian Lasso and several other variable selection approaches. Then, in stage II, we performed model selection with only the variables selected in stage I and compared the methods again. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered the cases in which all candidate risk factors are independently normally distributed or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, the Binary indicator and the combination of Binary indicator and Lasso, were considered and compared as alternative methods. The simulation results indicated that the proposed two-stage Bayesian Lasso variable selection method has the best performance for both the independent and the dependent cases considered. When compared with the one-stage approach and the two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.
Keywords: Lasso, Bayesian analysis, spatial analysis, variable selection
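As a rough frequentist stand-in for the two-stage idea (the paper's stage I is Bayesian Lasso; here a plain coordinate-descent lasso screens variables, and stage II refits ordinary least squares on the survivors), a self-contained sketch on made-up data:

```python
# Two-stage selection sketch: stage I screens variables with a lasso
# (coordinate descent, soft-thresholding); stage II refits ordinary
# least squares on the survivors. A frequentist stand-in for the
# paper's Bayesian Lasso; data and penalty are invented.

def soft_threshold(z, g):
    if z > g:  return z - g
    if z < -g: return z + g
    return 0.0

def lasso_cd(X, y, lam, sweeps=200):
    """Coordinate-descent lasso; columns of X roughly standardized."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(sweeps):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            zj = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / zj
    return beta

def ols(X, y):
    """Tiny normal-equations solve via Gaussian elimination."""
    p = len(X[0])
    A = [[sum(r[a] * r[b] for r in X) for b in range(p)] for a in range(p)]
    b = [sum(r[a] * yi for r, yi in zip(X, y)) for a in range(p)]
    for c in range(p):                       # forward elimination
        for rw in range(c + 1, p):
            f = A[rw][c] / A[c][c]
            A[rw] = [arv - f * acv for arv, acv in zip(A[rw], A[c])]
            b[rw] -= f * b[c]
    beta = [0.0] * p
    for c in reversed(range(p)):             # back substitution
        beta[c] = (b[c] - sum(A[c][k] * beta[k] for k in range(c + 1, p))) / A[c][c]
    return beta

# y depends on x0 and x1 only; x2 is an irrelevant candidate.
X = [[1.0, 0.5, -0.3], [0.8, -1.0, 0.2], [-1.2, 0.7, 0.9],
     [0.3, 1.1, -0.8], [-0.5, -0.9, 0.4], [1.1, 0.2, -0.6]]
y = [2.0 * r[0] - 1.0 * r[1] for r in X]

beta1 = lasso_cd(X, y, lam=0.5)                       # stage I: screen
keep = [j for j, b in enumerate(beta1) if abs(b) > 1e-8]
beta2 = ols([[r[j] for j in keep] for r in X], y)     # stage II: refit
print(keep, [round(b, 2) for b in beta2])  # x2 is dropped; refit is unbiased
```

The stage II refit removes the shrinkage bias that the stage I penalty introduces, which is the same motivation the abstract gives for re-selecting among the survivors.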
Procedia PDF Downloads 146
148 Cross-Cultural Collaboration Shaping Co-Creation Methodology to Enhance Disaster Risk Management Approaches
Authors: Jeannette Anniés, Panagiotis Michalis, Chrysoula Papathanasiou, Selby Knudsen
Abstract:
The RiskPACC project brings together researchers, practitioners, and first responders from nine European countries in a co-creation approach that aims to develop customised solutions meeting the needs of end-users. The co-creation workshops aim to enhance the communication pathways between local civil protection authorities (CPAs) and citizens, in an effort to close the risk perception-action gap (RPAG). The participants in the workshops include a variety of stakeholders as well as citizens, fostering the dialogue between the groups and supporting citizen participation in disaster risk management (DRM). The co-creation methodology in place implements co-design elements due to the integration of four ICT tools. These ICT tools include web-based and mobile applications at different development stages, ranging from the formulation and validation of concepts to pilot demonstrations. In total, seven case studies are foreseen in RiskPACC. The workflow of the workshops is designed to be adaptable to each of the seven case-study countries and the particular needs of their cultures. This work aims to provide an overview of the preparation and conduct of the workshops, in which researchers and practitioners focused on mapping these different needs of the end users. The latter included first responders but also volunteers and citizens who actively participated in the co-creation workshops. The strategies to improve communication between CPAs and citizens differ between the countries, and the modules of the co-creation methodology are adapted in response to such differences. Moreover, the project partners experienced how the structure of such workshops is perceived differently in the seven case studies. Therefore, the co-creation methodology itself is a design method undergoing several iterations, which are eventually shaped by cross-cultural collaboration.
For example, some case studies applied other modules according to the participatory group recruited. The participants were technical experts, teachers, citizens, first responders, or volunteers, among others. This work aspires to present the divergent approaches of the seven case studies implementing the proposed co-creation methodology, in response to different perceptions of the modules. An analysis of the adaptations and their implications will also be provided to assess where the case studies' objective of improving disaster resilience has been achieved.
Keywords: citizen participation, co-creation, disaster resilience, risk perception, ICT tools
Procedia PDF Downloads 90
147 Establishment of a Nomogram Prediction Model for Postpartum Hemorrhage during Vaginal Delivery
Authors: Yinglisong, Jingge Chen, Jingxuan Chen, Yan Wang, Hui Huang, Jing Zhang, Qianqian Zhang, Zhenzhen Zhang, Ji Zhang
Abstract:
Purpose: The study aims to establish a nomogram prediction model for postpartum hemorrhage (PPH) in vaginal delivery. Patients and Methods: Clinical data were retrospectively collected from vaginal delivery patients admitted to a hospital in Zhengzhou, China, from June 1, 2022, to October 31, 2022. Univariate and multivariate logistic regression were used to identify independent risk factors. A nomogram model for PPH in vaginal delivery was established based on the risk factor coefficients. Bootstrapping was used for internal validation. To assess discrimination and calibration, receiver operating characteristic (ROC) and calibration curves were generated in the derivation and validation groups. Results: A total of 1340 cases of vaginal delivery were enrolled, with 81 (6.04%) having PPH. Logistic regression indicated that a history of uterine surgery, induction of labor, duration of the first stage of labor, neonatal weight, WBC value (during the first stage of labor), and cervical lacerations were all independent risk factors for hemorrhage (P < 0.05). The areas under the ROC curves (AUC) of the derivation and validation groups were 0.817 and 0.821, respectively, indicating good discrimination. Two calibration curves showed that nomogram predictions and actual results were highly consistent (P = 0.105, P = 0.113). Conclusion: The developed individualized risk prediction nomogram can assist midwives in recognizing and diagnosing high-risk groups for PPH and initiating early warning to reduce PPH incidence.
Keywords: vaginal delivery, postpartum hemorrhage, risk factor, nomogram
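The way a fitted logistic model becomes a bedside score can be illustrated with invented coefficients (not the study's fitted values): each predictor contributes to the linear predictor through its coefficient, the linear predictor maps to a probability, and discrimination is summarized by a rank-based AUC.

```python
# Nomogram-style risk score from a logistic model, plus a rank-based
# AUC. Coefficients and patient records are invented for illustration;
# they are not the study's fitted values or data.
import math

# hypothetical coefficients: intercept, prior uterine surgery (0/1),
# induced labor (0/1), first-stage duration (h), neonatal weight (kg)
COEFS = {"intercept": -6.0, "uterine_surgery": 0.9,
         "induced": 0.7, "first_stage_h": 0.12, "neonatal_kg": 0.8}

def pph_probability(surgery, induced, first_stage_h, neonatal_kg):
    z = (COEFS["intercept"] + COEFS["uterine_surgery"] * surgery
         + COEFS["induced"] * induced + COEFS["first_stage_h"] * first_stage_h
         + COEFS["neonatal_kg"] * neonatal_kg)
    return 1.0 / (1.0 + math.exp(-z))       # logistic link

def auc(scores, labels):
    """Rank-based AUC: probability a positive outranks a negative."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

patients = [  # (surgery, induced, hours, kg, had_PPH)
    (1, 1, 14, 4.1, 1), (0, 1, 12, 3.9, 1), (0, 0, 6, 3.2, 0),
    (1, 0, 8, 3.4, 0), (0, 0, 5, 2.9, 0), (0, 1, 10, 3.6, 0),
]
scores = [pph_probability(*p[:4]) for p in patients]
print(round(auc(scores, [p[4] for p in patients]), 3))  # 1.0 on this toy data
```

A printed nomogram is just this linear predictor rescaled so each variable's contribution can be read off as points and summed by hand.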
Procedia PDF Downloads 79
146 Experimental Research of High Pressure Jet Interaction with Supersonic Crossflow
Authors: Bartosz Olszanski, Zbigniew Nosal, Jacek Rokicki
Abstract:
An experimental study of a cold-jet (nitrogen) reaction control jet system has been carried out to investigate the flow control efficiency for low to moderate jet pressure ratios (total jet pressure p0jet over free-stream static pressure in the wind tunnel p∞) and different angles of attack, for a free-stream Mach number equal to 2. The investigation of the jet influence was conducted on a flat plate geometry placed in the test section of the intermittent supersonic wind tunnel of the Department of Aerodynamics, WUT. Various convergent jet nozzle geometries, yielding different jet momentum ratios, were tested on the same test model geometry. Surface static pressure measurements, Schlieren flow visualizations (using continuous and photoflash light sources), and load cell measurements gave insight into the supersonic crossflow interaction for different jet pressure and jet momentum ratios and their influence on the efficiency of side jet control, as described by the amplification factor (the ratio of the actual to the theoretical net force generated by the control nozzle). Moreover, quasi-steady numerical simulations of the flow through the same wind tunnel geometry (convergent-divergent nozzle plus test section) were performed in ANSYS Fluent, based on a Reynolds-Averaged Navier-Stokes (RANS) solver with the k-ω Shear Stress Transport (SST) turbulence model, to assess the possible spurious influence of the test section walls on the jet exit near-field area of interest. The strong bow shock, barrel shock, and Mach disk, as well as the lambda separation region in front of the nozzle, were observed in images taken by a high-speed camera, which captured the interaction of the jet and the free stream. In addition, the development of large-scale vortex structures (a counter-rotating vortex pair) was detected. The history of the complex static pressure pattern on the plate was recorded and compared to the force measurement data as well as the numerical simulation data. The analysis of the obtained results, especially in the wake of the jet, revealed important features of the interaction mechanisms between the lateral jet and the flow field.
Keywords: flow visualization techniques, pressure measurements, reaction control jet, supersonic cross flow
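The amplification factor mentioned above (actual over theoretical net force) can be made concrete with a small sketch; the thrust expression is the standard momentum-plus-pressure form, and the numbers are invented:

```python
# Amplification factor K: measured side force over the theoretical
# thrust of the control nozzle alone. K > 1 indicates favorable
# jet-crossflow interaction. All numerical values below are invented
# for illustration, not measured data from the experiment.

def nozzle_thrust(m_dot, v_exit, p_exit, p_inf, a_exit):
    """Ideal nozzle thrust: momentum flux plus exit-pressure imbalance."""
    return m_dot * v_exit + (p_exit - p_inf) * a_exit

def amplification_factor(f_measured, m_dot, v_exit, p_exit, p_inf, a_exit):
    return f_measured / nozzle_thrust(m_dot, v_exit, p_exit, p_inf, a_exit)

# e.g. 0.05 kg/s of nitrogen at 600 m/s, slightly underexpanded nozzle
K = amplification_factor(38.0,          # measured net side force, N
                         0.05, 600.0,   # mass flow kg/s, exit velocity m/s
                         120e3, 100e3,  # exit / free-stream static pressure, Pa
                         2.0e-4)        # exit area, m^2
print(round(K, 2))
```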
Procedia PDF Downloads 300
145 Maximizing Profit Using Optimal Control by Exploiting the Flexibility in Thermal Power Plants
Authors: Daud Mustafa Minhas, Raja Rehan Khalid, Georg Frey
Abstract:
Next-generation power systems are equipped with abundantly available renewable energy resources (RES). During periods of low-cost RES operation, the price of electricity drops significantly and sometimes becomes negative. It is therefore tempting not to operate traditional power plants (e.g., coal power plants) in order to reduce losses. In fact, this is not a cost-effective solution, because these power plants exhibit shutdown and startup costs. Moreover, they require a certain time to shut down and a sufficient pause before starting up again, increasing inefficiency in the whole power network. Hence, there is always a trade-off between avoiding negative electricity prices and the startup costs of power plants. To exploit this trade-off and to increase the profit of a power plant, two main contributions are made: 1) introducing retrofit technology for a state-of-the-art coal power plant; 2) proposing an optimal control strategy for a power plant that exploits different flexibility features. These flexibility features include improving the ramp rate of the power plant, reducing the startup time, and lowering the minimum load. The control strategy is solved as a mixed-integer linear program (MILP), ensuring an optimal solution to the profit maximization problem. Extensive comparisons are made between pre- and post-retrofit coal power plants with the same efficiencies under different electricity price scenarios. The study concludes that if the power plant must remain in the market (providing services), greater flexibility translates into a direct economic advantage for the plant operator.
Keywords: discrete optimization, power plant flexibility, profit maximization, unit commitment model
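The startup-cost vs. negative-price trade-off that motivates the MILP can be illustrated with a toy dynamic program over on/off states (invented numbers; the paper's model additionally handles ramp rates and minimum-load constraints):

```python
# Toy unit commitment: a dynamic program decides, hour by hour,
# whether a price-taking plant stays online. Numbers are invented;
# the paper solves a richer MILP with ramp and minimum-load limits.

def commit(prices, p_out=100.0, marg_cost=30.0, startup=2000.0):
    """Max-profit on/off schedule; returns (profit, schedule)."""
    NEG = float("-inf")
    value = {0: 0.0, 1: NEG}      # before hour 0 the plant is offline
    plan = {0: [], 1: []}
    for price in prices:
        hourly = (price - marg_cost) * p_out   # revenue minus fuel if online
        nv, nplan = {}, {}
        for s in (0, 1):
            best, arg = NEG, 0
            for prev in (0, 1):
                if value[prev] == NEG:
                    continue
                v = value[prev]
                if s == 1:
                    v += hourly
                    if prev == 0:
                        v -= startup           # startup cost on a 0 -> 1 switch
                if v > best:
                    best, arg = v, prev
            nv[s], nplan[s] = best, plan[arg] + [s]
        value, plan = nv, nplan
    s = 0 if value[0] >= value[1] else 1
    return value[s], plan[s]

# Mild negative-price dip: cheaper to ride through at a loss than to
# pay the startup cost again.
profit, schedule = commit([60, 55, 20, 50, 65])
# Deep dip: shutting down and restarting beats running through it.
profit2, schedule2 = commit([60, 55, -10, 50, 65])
print(profit, schedule)
print(profit2, schedule2)
```

The MILP in the abstract encodes the same on/off and startup logic as binary variables and linear constraints, which scales to many units and constraints where this enumeration-style DP would not.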
Procedia PDF Downloads 144
144 Robotic Assisted vs Traditional Laparoscopic Partial Nephrectomy Peri-Operative Outcomes: A Comparative Single Surgeon Study
Authors: Gerard Bray, Derek Mao, Arya Bahadori, Sachinka Ranasinghe
Abstract:
The EAU currently recommends partial nephrectomy as the preferred management for localised cT1 renal tumours, irrespective of surgical approach. With the advent of robotic assisted partial nephrectomy, there is growing evidence that warm ischaemia time may be reduced compared to the traditional laparoscopic approach. There are still no clear differences between the two approaches with regard to other peri-operative and oncological outcomes. A current limitation in the field is the lack of single-surgeon series comparing the two approaches, as existing studies often include multiple operators of different experience levels. To the best of our knowledge, this study is the first single-surgeon series comparing peri-operative outcomes of robotic assisted and laparoscopic PN. The current study aims to reduce intra-operator bias while maintaining an adequate sample size to assess the differences in outcomes between the two approaches. We retrospectively compared patient demographics, peri-operative outcomes, and renal function derangements for all partial nephrectomies undertaken by a single surgeon with experience in both laparoscopic and robotic surgery. Warm ischaemia time, length of stay, and acute renal function deterioration were all significantly reduced with robotic partial nephrectomy compared to laparoscopic partial nephrectomy. This study highlights the benefits of robotic partial nephrectomy. Further prospective studies with larger sample sizes would be valuable additions to the current literature.
Keywords: partial nephrectomy, robotic assisted partial nephrectomy, warm ischaemia time, peri-operative outcomes
Procedia PDF Downloads 141
143 Radiation Protection Assessment of the Emission of a d-t Neutron Generator: Simulations with MCNP Code and Experimental Measurements in Different Operating Conditions
Authors: G. M. Contessa, L. Lepore, G. Gandolfo, C. Poggi, N. Cherubini, R. Remetti, S. Sandri
Abstract:
Practical guidelines are provided in this work for the safe use of a portable d-t Thermo Scientific MP-320 neutron generator producing pulsed 14.1 MeV neutron beams. The neutron generator's emission was tested experimentally and reproduced with the MCNPX Monte Carlo code. The simulations were particularly accurate: even the generator's internal components were reproduced on the basis of ad hoc collected X-ray radiographic images. Measurement campaigns were conducted under different standard experimental conditions using an LB 6411 neutron detector properly calibrated at three different energies, and simulated and experimental data were compared. In order to estimate the dose to the operator as a function of the operating conditions and the energy spectrum, the most appropriate value of the conversion factor between neutron fluence and ambient dose equivalent has been identified, taking into account both the direct and the scattered components. The results of the simulations show that, in real situations, when there is no information about the neutron spectrum at the point where the dose has to be evaluated, it is possible, and in any case conservative, to convert the measured count rate using the conversion factor corresponding to 14 MeV energy. This outcome has general validity for this type of generator, enabling a more accurate design of experimental activities in different setups. The increasingly widespread use of this type of device for industrial and medical applications makes the results of this work of interest in many situations, especially as a support for the definition of appropriate radiation protection procedures and, in general, for risk analysis.
Keywords: instrumentation and monitoring, management of radiological safety, measurement of individual dose, radiation protection of workers
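The conservative conversion described above, from a measured count rate to an ambient dose equivalent rate using the 14 MeV fluence-to-dose coefficient, amounts to a short calculation; the calibration and coefficient values below are illustrative placeholders, not the paper's data:

```python
# Conservative H*(10) rate from a neutron count rate: when the local
# spectrum is unknown, apply the fluence-to-dose conversion coefficient
# for 14 MeV, which bounds the lower-energy values. The detector
# response and the ICRP-74-style coefficients below are assumed
# placeholder values for illustration only.

RESPONSE_CM2 = 0.8        # counts per (n/cm^2), hypothetical calibration
H10_PSV_CM2 = {2.5e6: 416.0, 14.1e6: 520.0}   # pSv*cm^2 per unit fluence

def dose_rate_uSv_per_h(count_rate_cps, energy_eV=14.1e6):
    """counts/s -> fluence rate -> H*(10) rate, conservatively at 14 MeV."""
    fluence_rate = count_rate_cps / RESPONSE_CM2      # n / (cm^2 s)
    psv_per_s = fluence_rate * H10_PSV_CM2[energy_eV]
    return psv_per_s * 3600 * 1e-6                    # pSv/s -> uSv/h

print(round(dose_rate_uSv_per_h(50.0), 2))
```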
Procedia PDF Downloads 133
142 Clinician's Perspective of Common Factors of Change in Family Therapy: A Cross-National Exploration
Authors: Hassan Karimi, Fred Piercy, Ruoxi Chen, Ana L. Jaramillo-Sierra, Wei-Ning Chang, Manjushree Palit, Catherine Martosudarmo, Angelito Antonio
Abstract:
Background: The two psychotherapy camps, the randomized clinical trials (RCTs) and the common factors model, have competitively claimed specific explanations for therapy effectiveness. Recently, scholars called for empirical evidence to show the role of common factors in therapeutic outcome in marriage and family therapy. Purpose: This cross-national study aims to explore how clinicians, across different nations and theoretical orientations, attribute the contribution of common factors to therapy outcome. Method: A brief common factors questionnaire (CFQ-with a Cronbach’s Alpha, 0.77) was developed and administered in seven nations. A series of statistical analyses (paired-samples t-test, independent sample t-test, ANOVA) were conducted: to compare clinicians perceived contribution of total common factors versus model-specific factors, to compare each pair of common factors’ categories, and to compare clinicians from collectivistic nations versus clinicians from individualistic nation. Results: Clinicians across seven nations attributed 86% to common factors versus 14% to model-specific factors. Clinicians attributed 34% of therapeutic change to client’s factors, 26% to therapist’s factors, 26% to relationship factors, and 14% to model-specific techniques. The ANOVA test indicated each of the three categories of common factors (client 34%, therapist 26%, relationship 26%) showed higher contribution in therapeutic outcome than the category of model specific factors (techniques 14%). Clinicians with psychology degree attributed more contribution to model-specific factors than clinicians with MFT and counseling degrees who attributed more contribution to client factors. Clinicians from collectivistic nations attributed larger contributions to therapist’s factors (M=28.96, SD=12.75) than the US clinicians (M=23.22, SD=7.73). 
The US clinicians attributed a larger contribution to client's factors (M=39.02, SD=15.04) than clinicians from the collectivistic nations (M=28.71, SD=15.74). Conclusion: The findings indicate that clinicians across the globe attributed more than two-thirds of therapeutic change to CFs, which underscores the importance of training in the common factors model in the field. CFs, like model-specific factors, vary in their contribution to therapy outcome in relation to the specific client, therapist, problem, treatment model, and sociocultural context. Sociocultural expectations and norms should be considered as a context in which both CFs and model-specific factors function toward therapeutic goals. Clinicians need to foster cultural competency, specifically regarding the divergent ways that CFs can be activated due to specific sociocultural values.
Keywords: common factors, model-specific factors, cross-national survey, therapist cultural competency, enhancing therapist efficacy
Procedia PDF Downloads 288
141 Intelligent Control of Bioprocesses: A Software Application
Authors: Mihai Caramihai, Dan Vasilescu
Abstract:
The main research objective of the experimental bioprocess analyzed in this paper was to obtain large biomass quantities. The bioprocess is performed in a 100 L Bioengineering bioreactor with 42 L of cultivation medium made of peptone, meat extract, and sodium chloride. The reactor was equipped with pH, temperature, dissolved oxygen, and agitation controllers. The operating parameters were 37 °C, 1.2 atm, 250 rpm, and an air flow rate of 15 L/min. The main objective of this paper is to present a case study demonstrating that intelligent control, which describes the complexity of the biological process in the qualitative and subjective manner perceived by a human operator, is an efficient control strategy for this kind of bioprocess. In order to simulate the bioprocess evolution, an intelligent control structure based on fuzzy logic has been designed. The specific objective is to present a fuzzy control approach based on human experts' rules versus a modeling approach of cell growth based on experimental bioprocess data. Kinetic modeling may capture only a small part of overall biosystem behavior, while a fuzzy control system (FCS) can handle incomplete and uncertain information about the process, assuring high control performance, and provides an alternative to non-linear control as it is closer to the real world. Due to the high degree of non-linearity and time variance of bioprocesses, the need for such a control mechanism arises. BIOSIM, an originally developed software package, implements such a control structure. The simulation study showed that the fuzzy technique is quite appropriate for this non-linear, time-varying system compared with a classical control method based on an a priori model.
Keywords: intelligent, control, fuzzy model, bioprocess optimization
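The mechanism of a Mamdani-style fuzzy controller (fuzzify the error, fire the rules, defuzzify to a crisp action) can be sketched as follows; the membership shapes and rules are invented for illustration and are not taken from BIOSIM:

```python
# Minimal Mamdani-style fuzzy controller: the dissolved-oxygen error
# drives a stirrer-speed correction. Membership functions and rules
# are invented to illustrate the mechanism, not taken from BIOSIM.

def tri(x, a, b, c):
    """Triangular membership with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# error = setpoint - measured dissolved oxygen (% saturation)
ERROR_SETS = {"neg": (-40, -20, 0), "zero": (-10, 0, 10), "pos": (0, 20, 40)}
# rule consequents: crisp rpm corrections (singleton outputs)
RULES = {"neg": -30.0, "zero": 0.0, "pos": +30.0}

def stirrer_correction(do_error):
    """Weighted average (centroid of singletons) defuzzification."""
    num = den = 0.0
    for label, (a, b, c) in ERROR_SETS.items():
        w = tri(do_error, a, b, c)          # rule firing strength
        num += w * RULES[label]
        den += w
    return num / den if den else 0.0

print(stirrer_correction(0.0), stirrer_correction(15.0))
```

Each expert rule ("if DO error is positive, increase agitation") fires to the degree the fuzzified error belongs to its antecedent set, which is how the controller handles the incomplete, qualitative knowledge the abstract describes.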
Procedia PDF Downloads 327
140 Numerical and Sensitivity Analysis of Modeling the Newcastle Disease Dynamics
Authors: Nurudeen Oluwasola Lasisi
Abstract:
Newcastle disease is a highly contagious disease of birds caused by a paramyxovirus. In this paper, we present novel quarantine-adjusted incidence and linear incidence Newcastle disease model equations. We consider the dynamics of transmission and control of Newcastle disease. The existence and uniqueness of the solutions were obtained. The existence of disease-free equilibrium points was shown, and the model threshold parameter was examined using the next-generation operator method. A sensitivity analysis was carried out in order to identify the most sensitive parameters of disease transmission. This revealed that as the parameters β, ω, and Λ increase while other parameters are kept constant, the effective reproduction number R_ev increases, implying that these parameters increase the endemicity of the infection. Conversely, when the parameters μ, ε, γ, δ_1, and α increase while other parameters are kept constant, the effective reproduction number R_ev decreases, implying that these parameters decrease the endemicity of the infection, as they have negative indices. The analytical results were numerically verified by the Differential Transformation Method (DTM), and quantitative views of the model equations were showcased. We established that as the contact rate (β) increases, the effective reproduction number R_ev increases; as the effectiveness of drug usage increases, R_ev decreases; and as the number of quarantined individuals decreases, R_ev decreases. The results of the simulations showed that the infected population increases as the susceptible population approaches zero, and that the vaccinated population increases as the infected population decreases, with a simultaneous increase in the recovered population.
Keywords: disease-free equilibrium, effective reproduction number, endemicity, Newcastle disease model, numerical analysis, sensitivity analysis
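The qualitative role of the effective reproduction number can be reproduced with a forward-Euler integration of a much-reduced mass-action model (illustrative parameters only, not the paper's full quarantine-adjusted system): infection dies out when the ratio of transmission to removal is below 1 and causes an outbreak when it is above.

```python
# Forward-Euler sketch of a reduced SIR-type mass-action model.
# Parameters and the reduced model are illustrative stand-ins for the
# paper's full Newcastle disease system; only the threshold behaviour
# of the reproduction number is demonstrated.

def simulate(beta, gamma, days=100, dt=0.05, s0=0.99, i0=0.01):
    """Integrate s' = -beta*s*i, i' = beta*s*i - gamma*i by Euler steps."""
    s, i, peak = s0, i0, i0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i            # mass-action incidence
        s += dt * (-new_inf)
        i += dt * (new_inf - gamma * i)   # removal at rate gamma
        peak = max(peak, i)
    return s, peak                        # final susceptibles, peak infected

gamma = 0.2
s_low,  peak_low  = simulate(beta=0.1, gamma=gamma)  # R ~ beta/gamma = 0.5 < 1
s_high, peak_high = simulate(beta=0.5, gamma=gamma)  # R ~ beta/gamma = 2.5 > 1
print(peak_low, peak_high > 0.1, s_high < s_low)
```

Below threshold the infected fraction never rises above its initial value; above threshold an outbreak depletes the susceptible pool, mirroring the R_ev discussion in the abstract.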
Procedia PDF Downloads 45
139 Potential Impacts of Maternal Nutrition and Selection for Residual Feed Intake on Metabolism and Fertility Parameters in Angus Bulls
Authors: Aidin Foroutan, David S. Wishart, Leluo L. Guan, Carolyn Fitzsimmons
Abstract:
Maximizing the efficiency and growth potential of beef cattle requires not only genetic selection (i.e., for residual feed intake (RFI)) but also adequate nutrition throughout all stages of growth and development. Nutrient restriction during gestation has been shown to negatively affect post-natal growth and development as well as the fertility of the offspring. This, when combined with RFI, may affect progeny traits. This study aims to investigate the impact of selection for divergent genetic potential for RFI, and of maternal nutrition during early- to mid-gestation, on bull calf traits such as fertility and muscle development using multiple ‘omics’ approaches. Comparisons were made between High-diet vs. Low-diet and between High-RFI vs. Low-RFI animals. An epigenetics experiment on semen samples identified 891 biomarkers associated with growth and development. A gene expression study on Longissimus thoracis muscle, semimembranosus muscle, liver, and testis identified 4 genes associated with muscle development and immunity, of which Myocyte enhancer factor 2A [MEF2A], which induces myogenesis and controls muscle differentiation, was the only differentially expressed gene identified in all four tissues. An initial metabolomics experiment on serum samples using nuclear magnetic resonance (NMR) identified 4 metabolite biomarkers related to energy and protein metabolism. Once all the biomarkers are identified, bioinformatics approaches will be used to create a database covering all the ‘omics’ data collected from this project. This database will be broadened by adding other information obtained from relevant literature reviews. Association analyses with these data sets will be performed to reveal key biological pathways affected by RFI and maternal nutrition. Through these association studies between the genome and metabolome, it is expected that candidate biomarker genes and metabolites for feed efficiency, fertility, and/or muscle development will be identified.
If these gene/metabolite biomarkers are validated in a larger animal population, they could potentially be used in breeding programs to select superior animals. It is also expected that this work will lead to the development of an online tool that could be used to predict future traits of interest in an animal given its measurable ‘omics’ traits.
Keywords: biomarker, maternal nutrition, omics, residual feed intake
Procedia PDF Downloads 192
138 Information Literacy among Faculty and Students of Medical Colleges of Haryana, Punjab and Chandigarh
Authors: Sanjeev Sharma, Suman Lata
Abstract:
With the availability of diverse printed and electronic literature and web sites on medical and health-related information, it is difficult for a medical professional to find the information he or she seeks in the shortest possible time. For all these problems, information literacy is the solution. Thus, information literacy is recognized as an important aspect of medical education. In the present study, an attempt has been made to assess the information literacy skills of the faculty and students at medical colleges of Haryana, Punjab, and Chandigarh. The scope of the study was confined to 12 selected medical colleges of the three states (Haryana, Punjab, and Chandigarh). The findings of the study were based on the data collected through 1018 questionnaires filled in by the respondents of the medical colleges. It was found that online medical websites (such as WebMD, eMedicine, and Mayo Clinic) were frequently used by 63.43% of the respondents of Chandigarh, which is slightly more than Haryana (61%) and Punjab (55.65%). Similarly, 30.86% of the respondents of Chandigarh, 27.41% of Haryana, and 27.05% of Punjab were familiar with controlled vocabulary tools; 25.14% of the respondents of Chandigarh, 23.80% of Punjab, and 23.17% of Haryana were familiar with Boolean operators; 33.05% of the respondents of Punjab, 28.19% of Haryana, and 25.14% of Chandigarh were familiar with the use and importance of keywords while searching an electronic database; and 51.43% of the respondents of Chandigarh, 44.52% of Punjab, and 36.29% of Haryana were able to make effective use of the retrieved information. For accessing information in electronic format, 47.74% of the respondents rated their skills high, while the majority of respondents (76.13%) were unfamiliar with basic search techniques, i.e., the Boolean operators used for searching information in an online database.
On the basis of the findings, it was suggested that comprehensive training programs based on medical professionals' information needs should be organized frequently. Furthermore, it was also suggested that information literacy be included as a subject in the health science curriculum, so as to make medical professionals information literate and independent lifelong learners.
Keywords: information, information literacy, medical professionals, medical colleges
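The Boolean operators the survey asks about reduce to set operations on the document sets indexed under each term; a toy inverted index (hypothetical records) makes the AND / OR / NOT semantics concrete:

```python
# Boolean search operators as set operations on a toy inverted index.
# Terms and document ids are invented for illustration.

index = {  # term -> set of document ids containing it
    "diabetes":   {1, 2, 4, 7},
    "pregnancy":  {2, 3, 7, 8},
    "adolescent": {1, 7, 9},
}

def AND(a, b): return index[a] & index[b]   # both terms present
def OR(a, b):  return index[a] | index[b]   # either term present
def NOT(a, b): return index[a] - index[b]   # first term without second

print(sorted(AND("diabetes", "pregnancy")))   # [2, 7]
print(sorted(OR("diabetes", "adolescent")))   # [1, 2, 4, 7, 9]
print(sorted(NOT("diabetes", "pregnancy")))   # [1, 4]
```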
Procedia PDF Downloads 157
137 Existence of Minimal and Maximal Mild Solutions for Non-Local in Time Subdiffusion Equations of Neutral Type
Authors: Jorge Gonzalez-Camus
Abstract:
In this work, the existence of at least one minimal and one maximal mild solution to the Cauchy problem is proved for a fractional evolution equation of neutral type involving a general kernel. The equation involves an operator A generating a resolvent family and an integral resolvent family on a Banach space X, and a kernel belonging to a large class that covers many relevant cases from physics applications, in particular the important case of time-fractional evolution equations of neutral type. The main tools used in this work were the Kuratowski measure of noncompactness, fixed point theorems (specifically of Darbo type), and an iterative method of lower and upper solutions based on an order in X induced by a normal cone P. Initially, the equation is a Cauchy problem involving a fractional derivative in the Caputo sense. Then the equivalent integral version is formulated and, by defining a convenient functional using the theory of resolvent families and verifying the hypotheses of the Darbo-type fixed point theorem, the existence of a mild solution to the initial problem is obtained. Furthermore, the existence of minimal and maximal mild solutions was proved through an iterative method of lower and upper solutions, using the Ascoli-Arzelà theorem and Gronwall's inequality. Finally, we recover the case of the derivative in the Caputo sense.
Keywords: fractional evolution equations, Volterra integral equations, minimal and maximal mild solutions, neutral type equations, non-local in time equations
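Schematically, the neutral-type problem treated here has the following form (notation simplified; the Caputo case is a special instance of the general-kernel equation, and the exact mild-solution representation depends on the kernel):

```latex
% Neutral-type fractional Cauchy problem, Caputo derivative of order
% \alpha \in (0,1); schematic form, simplified from the general
% kernel setting described in the abstract.
{}^{C}D_t^{\alpha}\bigl[u(t) - g(t, u(t))\bigr] = A\,u(t) + f(t, u(t)),
\qquad t \in (0, T], \qquad u(0) = u_0 \in X .

% A mild solution is a fixed point of the associated integral operator,
% written via the resolvent family S_\alpha and the integral resolvent
% R_\alpha generated by A (one common form; details depend on the kernel):
u(t) = S_{\alpha}(t)\bigl[u_0 - g(0, u_0)\bigr] + g(t, u(t))
     + \int_0^{t} R_{\alpha}(t - s)\, f(s, u(s))\,\mathrm{d}s .
```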
Procedia PDF Downloads 178
136 Coaching for Lecturers at German Universities: An Inventory Based on a Qualitative Interview Study
Authors: Freya Willicks
Abstract:
The society of the 21st century is characterized by dynamism and complexity, developments that also shape universities and university life. The Bologna reform, for example, has led to restructuring at many European universities. Today's university teachers therefore have to meet many expectations: their tasks include not only teaching but also the general improvement of the quality of teaching, good research, the management of various projects, and the development of their own personal skills. This requires a high degree of flexibility and openness to change. The resulting pressure can often lead to exhaustion. Coaching can be a way for university teachers to cope with these pressures, because it gives them the opportunity to discuss stressful situations with a coach and self-reflect on them. As a result, more and more universities in Europe offer coaching to their teachers. An analysis of the services provided at universities in Germany, however, quickly reveals considerable disagreement with regard to the understanding of ‘coaching’. A variety of terms is used, such as coaching, counselling, or supervision. In addition, each university defines its offer individually, from process-oriented consulting to expert consulting, from group training to individual coaching. The biographical backgrounds of those who coach are also very divergent: both external and internal coaches can be suitable. These findings lead to the following questions: Which structural characteristics for coaching at universities have proven successful? What competencies should a good coach for university lecturers have? In order to answer these questions, a qualitative study was carried out. In a first step, qualitative semi-structured interviews (N = 14) were conducted, on the one hand with coaches of university teachers and on the other hand with university teachers who have been coached. In a second step, the interviews were transcribed and analyzed using Mayring's qualitative content analysis.
The study shows how great the potential of coaching can be for university teachers, who otherwise have little opportunity to talk about their teaching in a private setting. According to the study, the coach should be neither a colleague nor a superior of the coachee but should take an independent perspective, as this is the only way for the coachee to openly self-reflect. In addition, the coach should be familiar with the university system, i.e., be an academic himself/herself; otherwise, he/she cannot fully understand the complexity of the teaching situation and the role expectations. However, internal coaches do not necessarily have much coaching experience or explicit coaching competencies. They often come from the university's own didactics department and are experts in didactics but do not necessarily have a certified coaching education. Therefore, it is important to develop structures and guidelines for internal coaches to support their coaching. In further analysis, such guidelines will be developed on the basis of these interviews.
Keywords: coaching, university coaching, university didactics, qualitative interviews
Procedia PDF Downloads 112
135 Mapping Actors in Sao Paulo's Urban Development Policies: Interests at Stake in the Challenge to Sustainability
Authors: A. G. Back
Abstract:
In the context of global climate change, extreme weather events are becoming increasingly intense and frequent, challenging the adaptability of urban space. In this sense, urban planning is a relevant instrument for addressing, in a systemic manner, the various sectoral policies capable of linking the urban agenda to the reduction of social and environmental risks. The 2014 Master Plan of the Municipality of Sao Paulo presents innovations capable of promoting the transition to sustainability in urban space. Among these innovations, the following stand out: i) promotion of density along mass transport axes, involving a mixture of commercial, residential, service, and leisure uses (principles related to the compact city); ii) reduction of vulnerabilities through housing policies, including regular sources of funds for social housing and land reserves in urbanized areas; iii) reserves of green areas in the city to create parks, together with environmental regulations for new buildings focused on reducing heat island effects and improving urban drainage. However, long-term implementation involves distributive conflicts and may change across different political, economic, and social contexts over time. Thus, the central objective of this paper is to identify which factors limit or support the implementation of these policies; that is, to map the challenges and interests of converging and/or diverging urban actors in the sustainable urban development agenda and the resources they mobilize to support or limit these actions in the city of Sao Paulo. Recent proposals to amend the urban zoning law undermine the implementation of the Master Plan guidelines. In this context, three interest groups with different views of the city come into dispute: the real estate market, upper-middle-class neighborhood associations ('not in my backyard' movements), and social housing rights movements.
This paper surveys the different interests and visions of these groups, taking into account their convergence, or lack thereof, with the principles of sustainable urban development. This approach seeks to fill a gap in the international literature on the causes that underpin or hinder the continued implementation of policies aimed at the transition to urban sustainability in the medium and long term.
Keywords: adaptation, ecosystem-based adaptation, interest groups, urban planning, urban transition to sustainability
Procedia PDF Downloads 122
134 Iris Recognition Based on the Low Order Norms of Gradient Components
Authors: Iman A. Saad, Loay E. George
Abstract:
The iris pattern is an important biometric feature of the human body; it has become a very hot topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient, and fast method is introduced to extract a set of discriminatory features using a first-order gradient operator applied to grayscale images. The gradient-based features are robust, to a certain extent, against the variations that may occur in the contrast or brightness of iris image samples; such variations mostly occur due to lighting differences and camera changes. At first, the iris region is located; after that, it is remapped to a rectangular area of size 360x60 pixels. Also, a new method is proposed for detecting eyelash and eyelid points; it relies on statistical analysis of the image to mark the eyelash and eyelid as noise points. In order to cover feature localization (variation), the rectangular iris image is partitioned into N overlapped sub-images (blocks); then, from each block, a set of different average directional gradient density values is calculated to be used as a texture feature vector. The gradient operators are applied along the horizontal, vertical, and diagonal directions. The low-order norms of the gradient components were used to establish the feature vector. A Euclidean-distance-based classifier was used as a matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database; the attained recognition accuracy reached 99.92%.
Keywords: iris recognition, contrast stretching, gradient features, texture features, Euclidean metric
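The block-wise directional-gradient features and Euclidean matching described in this abstract can be sketched in a few lines of plain Python. This is an illustrative reconstruction, not the authors' code: the block size, the use of the L1 (low-order) norm as the per-block average, and the list-of-lists image representation are all assumptions.

```python
import math

def block_gradient_features(img, block_h, block_w):
    """Average absolute first-order gradients (horizontal, vertical,
    diagonal) per block: a low-order-norm (L1) texture descriptor."""
    h, w = len(img), len(img[0])
    feats = []
    for bi in range(0, h - block_h + 1, block_h):
        for bj in range(0, w - block_w + 1, block_w):
            gh = gv = gd = 0.0
            n = 0
            for i in range(bi, bi + block_h - 1):
                for j in range(bj, bj + block_w - 1):
                    gh += abs(img[i][j + 1] - img[i][j])      # horizontal
                    gv += abs(img[i + 1][j] - img[i][j])      # vertical
                    gd += abs(img[i + 1][j + 1] - img[i][j])  # diagonal
                    n += 1
            feats += [gh / n, gv / n, gd / n]
    return feats

def euclidean(u, v):
    """Matching metric between a probe feature vector and a template."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
```

For a 360x60 remapped iris, overlapped blocks would be produced by repeating `block_gradient_features` over shifted windows; matching then selects the enrolled template with the smallest Euclidean distance.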
Procedia PDF Downloads 336
133 Kýklos Dimensional Geometry: Entity Specific Core Measurement System
Authors: Steven D. P Moore
Abstract:
A novel method referred to as Kýklos (Ky) dimensional geometry is proposed as an entity-specific core geometric dimensional measurement system. Ky geometric measures can construct scaled multi-dimensional models using regular and irregular sets in IRn. This entity-specific geometric measurement system shares methods with fractals, in which a ‘fractal transformation operator’ is applied to a set S to produce a union of N copies. The Kýklos inputs use 1D geometry as a core measure. One-dimensional inputs include the radius interval of a circle/sphere or the semi-minor/semi-major axis intervals of an ellipse or spheroid. These geometric inputs have finite values that can be measured in SI distance units. The outputs for each interval are divided and subdivided 1D subcomponents whose union equals the interval geometry/length. Setting a limit on subdivision iterations creates a finite value for each 1D subcomponent. The uniqueness of this method lies in allowing the simplest 1D inputs to define entity-specific subclass geometric core measurements that can also be used to derive length measures. Current methodologies for celestial-based measurement of time, as defined within SI units, fit within this methodology, thus combining spatial and temporal features into geometric core measures. The novel Ky method discussed here offers geometric measures to construct scaled multi-dimensional structures, even models. Ky classes proposed for consideration range from celestial to subatomic. The application of this offers incredible possibilities, for example, geometric architecture that can represent scaled celestial models incorporating planets (spheroids) and celestial motion (elliptical orbits).
Keywords: Kyklos, geometry, measurement, celestial, dimension
Procedia PDF Downloads 166
132 A Novel Approach to 3D Thrust Vectoring CFD via Mesh Morphing
Authors: Umut Yıldız, Berkin Kurtuluş, Yunus Emre Muslubaş
Abstract:
Thrust vectoring, especially in military aviation, is a concept that sees much use to improve maneuverability in already agile aircraft. As this concept is fairly new and cost-intensive to design and test, computational methods are useful in easing the preliminary design process. Computational Fluid Dynamics (CFD) can be utilized in many forms to simulate nozzle flow, and various CFD studies exist on both 2D mechanical and 3D injection-based thrust vectoring; yet 3D mechanical thrust vectoring analyses, at this point in time, lack variety. Additionally, the freely available test data are constrained to limited pitch angles and geometries. In this study, based on a test case provided by NASA, both steady and unsteady 3D CFD simulations are conducted to examine the aerodynamic performance of a mechanical thrust vectoring nozzle model and to validate the utilized numerical model. Steady analyses are performed to verify the flow characteristics of the nozzle at pitch angles of 0, 10, and 20 degrees, and the results are compared with experimental data. It is observed that the pressure data obtained on the inner surface of the nozzle at each specified pitch angle and under different flow conditions with pressure ratios of 1.5, 2, and 4, as well as at azimuthal angles of 0, 45, 90, 135, and 180 degrees, exhibited a high level of agreement with the corresponding experimental results. To validate the CFD model, the insights from the steady analyses are utilized, followed by unsteady analyses covering a wide range of pitch angles from 0 to 20 degrees. Throughout the simulations, a mesh morphing method using a carefully calculated mathematical shape deformation model, which reproduces the vectored nozzle shape exactly at each point of its travel, is employed to dynamically alter the divergent part of the nozzle over time within this pitch angle range.
The mesh-morphing-based vectored nozzle shapes were compared with the drawings provided by NASA, ensuring a complete match. This computational approach allowed for the creation of a comprehensive database of results without the need to generate separate solution domains. The database contains results at every 0.01° increment of nozzle pitch angle. The unsteady analyses, generated using the morphing method, are found to be in excellent agreement with experimental data, further confirming the accuracy of the CFD model.
Keywords: thrust vectoring, computational fluid dynamics, 3d mesh morphing, mathematical shape deformation model
Procedia PDF Downloads 85
131 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and onboard processing capabilities. The space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements used in MPC design are discussed: the prediction model, the constraint formulation, and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether circular or elliptic. The constraints can be given as linear inequalities for input or output constraints, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometrical constraints (i.e., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on finding, in real time, the solution to constrained optimization problems, computational aspects are also examined.
In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements of sensors and actuators on the HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used, respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with a conjecture that the MPC paradigm is a promising framework at the crossroads of space applications and could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy
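For reference, the generic constrained MPC problem into which the three design elements (prediction model, constraints, cost) plug can be written schematically as follows; the weights $Q$, $R$, $P$ and horizon $N$ are generic placeholders, not values from the paper:

```latex
\begin{aligned}
\min_{u_0,\dots,u_{N-1}} \quad & \sum_{k=0}^{N-1}\left( \|x_k - x^{\mathrm{ref}}_k\|_Q^2 + \|u_k\|_R^2 \right) + \|x_N - x^{\mathrm{ref}}_N\|_P^2 \\
\text{s.t.} \quad & x_{k+1} = A_k x_k + B_k u_k
  \quad \text{(LTI if } A_k, B_k \text{ are constant; LTV for elliptic orbits)} \\
& u_{\min} \le u_k \le u_{\max}, \qquad C x_k \le c
  \quad \text{(linear input/output constraints)}
\end{aligned}
```

Convexification techniques for plume impingement or FOV constraints replace the non-convex geometric sets with linear or second-order-cone approximations that fit this same template.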
Procedia PDF Downloads 111
130 Analytical Solutions of Josephson Junctions Dynamics in a Resonant Cavity for Extended Dicke Model
Authors: S. I. Mukhin, S. Seidov, A. Mukherjee
Abstract:
The Dicke model is a key tool for the description of correlated states of quantum atomic systems excited by resonant photon absorption and subsequently emitting spontaneous coherent radiation in the superradiant state. The Dicke Hamiltonian (DH) is successfully used to describe the dynamics of a Josephson Junction (JJ) array in a resonant cavity under applied current. In this work, we have investigated a generalized model described by the DH with a frustrating interaction term. This frustrating interaction term is an infinitely coordinated interaction between all the spin-1/2 entities in the system. We consider an array of N superconducting islands, each divided into two sub-islands by a Josephson Junction, taken in a charge qubit / Cooper Pair Box (CPB) condition. The array is placed inside the resonant cavity. One important aspect of the problem lies in the dynamical nature of the physical observables involved in the system, such as the condensed electric field and the dipole moment. It is important to understand how these quantities behave with time in order to define the quantum phase of the system. The Dicke model without the frustrating term is solved to find the dynamical solutions of the physical observables in analytic form. Using Heisenberg’s dynamical equations for the operators and applying a newly developed rotating Holstein-Primakoff (HP) transformation to the DH, we arrive at four coupled nonlinear dynamical differential equations for the momentum and spin-component operators. It is possible to solve the system analytically using two time scales. The analytical solutions are expressed in terms of Jacobi elliptic functions for the metastable ‘bound luminosity’ dynamic state, with periodic coherent beating of the dipoles connecting the two doubly degenerate dipolar-ordered phases discovered previously. In this work, we proceed with the analysis of the extended DH with a frustrating interaction term.
Inclusion of the frustrating term adds complexity to the system of differential equations, which becomes difficult to solve analytically. We have therefore solved the semi-classical dynamic equations using perturbation theory for small values of the Josephson energy EJ. Because the Hamiltonian possesses parity symmetry, a phase transition can be found if this symmetry is broken. Introducing a spontaneous symmetry-breaking term into the DH, we have derived solutions that show the occurrence of a finite condensate, indicating a quantum phase transition. Our results match the existing results in this scientific field.
Keywords: Dicke Model, nonlinear dynamics, perturbation theory, superconductivity
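Schematically, the extended Hamiltonian studied here has the standard Dicke form plus an infinitely coordinated (frustrating) spin-spin term. The expression below is a textbook-style sketch for orientation only; the couplings $\lambda$ and $g$ are generic symbols, not the paper's parameters:

```latex
H \;=\; \hbar\omega\, a^{\dagger}a
\;+\; \hbar\omega_0 \sum_{i=1}^{N}\frac{\sigma_i^{z}}{2}
\;+\; \frac{\lambda}{\sqrt{N}}\,\bigl(a + a^{\dagger}\bigr)\sum_{i=1}^{N}\sigma_i^{x}
\;+\; \frac{g}{N}\Bigl(\sum_{i=1}^{N}\sigma_i^{x}\Bigr)^{2}
```

The last term couples every spin to every other spin with equal strength, which is what "infinitely coordinated" means here; it commutes with the parity operator, consistent with the parity-symmetry argument in the abstract.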
Procedia PDF Downloads 135
129 Innovative Technologies for Aeration and Feeding of Fish in Aquaculture with Minimal Impact on the Environment
Authors: Vasile Caunii, Andreea D. Serban, Mihaela Ivancia
Abstract:
The paper presents a new approach, in terms of the circular economy, to technologies for feeding and aerating accumulations and water basins for fish farming and aquaculture. Because fish is and will remain one of the main foods on the planet, the use of bio-eco-technologies is a priority for all producers. The technologies proposed in the paper aim to reduce by a substantial percentage the operating costs of ponds and water accumulations, using non-polluting technologies with minimal impact on the environment. The paper proposes two innovative, intelligent, fully automated systems that use a common, completely eco-friendly platform. One system is intended to aerate the water of the fish pond, and the second is intended to feed the fish by dispersing an optimal amount of fodder, depending on population size, age, and habits. Both systems use a floating platform and regenerative energy sources, are equipped with intelligent and innovative subsystems, and, in addition to fully automated operation, significantly reduce the costs of aerating water accumulations (natural or artificial) and feeding fish. The intelligent feeding system additionally reduces operating costs and optimizes the amount of food, thus preventing water pollution and the development of bacteria and microorganisms. The advantages of the systems are: they increase the yield of fish production; they are green installations with zero pollutant emissions; they can be placed anywhere on the water surface, depending on the user's needs; they can operate autonomously or be remotely controlled; if a component fails, the system provides the operator with accurate data on the issue, significantly reducing maintenance costs; and they transmit data on the water's physical and chemical parameters.
Keywords: bio-eco-technologies, economy, environment, fish
Procedia PDF Downloads 151
128 Survey Research Assessment for Renewable Energy Integration into the Mining Industry
Authors: Kateryna Zharan, Jan C. Bongaerts
Abstract:
Mining operations are energy intensive, and the share of energy costs in total costs is often quoted in the range of 40%. Saving on energy costs is, therefore, a key concern of any mine operator. With the improving reliability and security of renewable energy (RE) sources and the requirements to reduce carbon dioxide emissions, prospects for using RE in mining operations emerge. These aspects are stimulating mining companies to search for ways to substitute fossil energy with RE. The main purpose of this study is to present the outcomes of a survey conducted among mining and renewable energy experts on the feasibility of RE in mining operations and the key issues related to its integration into mining activities. The survey research was developed as follows: first, the mining and renewable energy experts were chosen based on specific criteria. Second, they were offered a questionnaire to gather their knowledge and opinions on incentives for mining operators to turn to RE, barriers and challenges to be expected, environmental effects, appropriate business models, and the overall impact of RE on mining operations. The outcomes of the survey allow for the identification of factors which favor and disfavor decision-making on the use of RE in mining operations. It concludes with a set of recommendations for further study. One of them relates to a deeper analysis of the benefits for mining operators when using RE, and another suggests that appropriate business models considering economic and environmental issues need to be studied and developed.
The results of the paper will be used to develop a hybrid optimized model which might be adopted at mines according to their operational processes as well as economic and environmental perspectives.
Keywords: carbon dioxide emissions, mining industry, photovoltaic, renewable energy, survey research, wind generation
Procedia PDF Downloads 358
127 Quantifying the Aspect of ‘Imagining’ in the Map of Dialogical Inquiry
Authors: Chua Si Wen Alicia, Marcus Goh Tian Xi, Eunice Gan Ghee Wu, Helen Bound, Lee Liang Ying, Albert Lee
Abstract:
In a world full of rapid changes, people often need a set of skills to help them navigate an ever-changing workscape. These skills, often known as “future-oriented skills,” include learning to learn, critical thinking, understanding multiple perspectives, and knowledge creation. Future-oriented skills are typically assumed to be domain-general, applicable to multiple domains, and can be cultivated through a learning approach called Dialogical Inquiry. Dialogical Inquiry is known for its benefits of making sense of multiple perspectives, encouraging critical thinking, and developing learners' capability to learn. However, it currently exists as a qualitative tool, which makes it hard to track and compare learning processes over time. With these concerns, the present research aimed to develop and validate a quantitative tool for the Map of Dialogical Inquiry, focusing on the Imagining aspect of learning. The Imagining aspect has four dimensions: 1) speculative/look for alternatives, 2) risk taking/break rules, 3) create/design, and 4) vision/imagine. To do so, an exploratory literature review was conducted to better understand the dimensions of Imagining. This included deep-diving into the history of the creation of the Map of Dialogical Inquiry and a review of how “Imagining” has been conceptually defined in the fields of social psychology, education, and beyond. Then, we synthesised and validated scales measuring the dimensions of Imagining and related concepts such as creativity, divergent thinking, regulatory focus, and instrumental risk. Thereafter, items were adapted from the aforementioned scales to form the preliminary version of the Imagining Scale. For scale validation, 250 participants were recruited. A Confirmatory Factor Analysis (CFA) sought to establish the dimensionality of the Imagining Scale with an iterative procedure of item removal.
The reliability and validity of the scale's dimensions were assessed through Cronbach's alpha, convergent validity, and discriminant validity. While the CFA could not validate the distinction between Imagining's four dimensions, the scale established high reliability, with a Cronbach's alpha of .96. In addition, the convergent validity of the Imagining Scale was established. The lack of strong discriminant validity may point to overlaps with other components of the Dialogical Map as a measure of learning. Thus, a holistic approach to forming the tool, encompassing all eight components, may be preferable.
Keywords: learning, education, imagining, pedagogy, dialogical teaching
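As an aside, the reliability statistic reported here, Cronbach's alpha, is straightforward to compute from item-level scores. The sketch below uses the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of totals), with hypothetical data rather than the study's dataset.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: one list of scores per scale item, aligned across respondents
    (items[i][j] = respondent j's score on item i).
    """
    k = len(items)
    item_var_sum = sum(variance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - item_var_sum / variance(totals))
```

Perfectly consistent items give alpha = 1, while uncorrelated items drive it toward 0; a value of .96, as reported, indicates very high (arguably redundant) internal consistency.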
Procedia PDF Downloads 93
126 Transmission Line Congestion Management Using Hybrid Fish-Bee Algorithm with Unified Power Flow Controller
Authors: P. Valsalal, S. Thangalakshmi
Abstract:
The electrical power industry worldwide is undergoing a widespread changeover from an old-style monopolistic structure towards a horizontally distributed competitive structure to meet rising consumption. When the transmission lines of a deregulated system are incapable of accommodating the entire service demand, the lines become overloaded or congested. The intermediary between customer and power producer, the Independent System Operator (ISO), is tasked with lessening the congestion without violating transmission line restrictions. Among the existing approaches for congestion management, the most frequently used are rescheduling the generation and curbing the load. There is a limit to rescheduling the generators, and further loads may not be supplied with the prevailing resources unless more private power producers are added to the system, considerably raising the cost. Hence, congestion is relieved by appropriate Flexible AC Transmission Systems (FACTS) devices, which boost the existing transfer capacity of transmission lines. The FACTS device chosen here is the Unified Power Flow Controller (UPFC); its correct placement is vital, and it should be positioned in the most congested line. Hence, the weak line is identified using a power flow performance index with a new objective function and the proposed hybrid Fish-Bee algorithm. Further, locating the UPFC at the appropriate line reduces the branch loading and minimizes the voltage deviation. The power transfer capacity of the lines is determined with and without UPFC in the identified congested line of the IEEE 30-bus system, and the simulated results are compared with prevailing algorithms. It is observed that the transfer capacity of the existing line is increased with the presented algorithm, thus alleviating the congestion.
Keywords: available line transfer capability, congestion management, FACTS device, Hybrid Fish-Bee Algorithm, ISO, UPFC
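The line-loading performance index commonly used in congestion studies to rank weak lines has the following general form. This is the standard textbook expression, shown here for context only; the weights $w_l$ and exponent $n$ are generic, and the paper's exact objective function may differ:

```latex
PI \;=\; \sum_{l=1}^{N_L} \frac{w_l}{2n} \left( \frac{P_l}{P_l^{\max}} \right)^{2n}
```

where $P_l$ is the real power flow on line $l$ and $P_l^{\max}$ its rating; lines with the largest contribution to $PI$ are candidates for UPFC placement.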
Procedia PDF Downloads 384
125 Hybrid Model of Strategic and Contextual Leadership in Pluralistic Organizations- A Qualitative Multiple Case Study
Authors: Ergham Al Bachir
Abstract:
This study adopts strategic leadership (Upper Echelons) as the core theory and contextual leadership theory as the research lens. This research asks how the external context impacts strategic leadership effectiveness in achieving outcomes in pluralistic organizations (POs). The study explores how the context influences the selection of CEOs and top management teams (TMTs) and their leadership effectiveness. POs are characterized by the multiple and divergent objectives of their top management teams, multiple strategies, and multiple governing authorities. The research question is explored by means of a qualitative multiple-case study focusing on healthcare, real estate, and financial services organizations. The data sources are semi-structured interviews, documents, and direct observations. The data analysis strategy is inductive and deploys thematic analysis and cross-case synthesis. The findings differentiate between national and international CEOs' delegation of authority and relationship with the Board of Directors. The findings identify the elements of the dynamic context that influence TMT and PO outcomes. The emergent hybrid strategic and contextual leadership framework shows how the different contextual factors influence strategic direction, the PO context, the selection of CEOs and TMTs, and the outcomes in four pluralistic organizations. The study offers seven theoretical contributions to Upper Echelons, strategic leadership, and contextual leadership research. (1) The integration of the two theories revealed how the CEO's impact on the organization is complementary to the contextual impact. (2) Conducting this study in the Middle East contributes to strategic leadership and contextual leadership research. (3) The demonstration of significant contextual effects on the selection of CEOs. (4 and 5) Two contributions revealed new links between the context, the Board's role, internal versus external CEOs, and national versus international CEOs.
(6 and 7) This study offered two definitions: what accounts for CEO leadership effectiveness and what accounts for organizational outcomes. Two methodological contributions were also identified: (1) previous strategic leadership and Upper Echelons research is mainly quantitative, while this study adopts qualitative multiple-case research with face-to-face interviews; (2) the extrication of the CEO from the TMT advanced the data analysis in strategic leadership research. Four contributions are offered to practice: (1) the CEO's leadership effectiveness inside and outside the organization; (2) the rapid turnover of predecessor CEOs signifies the need for a strategic and contextual approach to CEO succession; (3) the impact of TMT composition and education on the TMT-CEO and TMT-TMT interface; (4) a multilevel strategic contextual leadership development framework.
Keywords: strategic leadership, contextual leadership, upper echelons, pluralistic organizations, cross-cultural leadership
Procedia PDF Downloads 95
124 Developing an Automated Protocol for the Wristband Extraction Process Using Opentrons
Authors: Tei Kim, Brooklynn McNeil, Kathryn Dunn, Douglas I. Walker
Abstract:
To better characterize the relationship between complex chemical exposures and disease, our laboratory uses an approach that combines low-cost polydimethylsiloxane (silicone) wristband samplers, which absorb many of the chemicals we are exposed to, with untargeted high-resolution mass spectrometry (HRMS) to characterize thousands of chemicals at a time. In studies with human populations, these wristbands can provide an important measure of our environment; however, there is a need to use this approach in large cohorts to study exposures associated with disease. To facilitate the use of silicone samplers in large-scale population studies, the goal of this research project was to establish automated sample preparation methods that improve the throughput, robustness, and scalability of analytical methods for silicone wristbands. Using the Opentrons OT-2 automated liquid handling platform, which provides a low-cost and open-source framework for automated pipetting, we created two separate workflows that translate the manual wristband preparation method into a fully automated protocol requiring only minor intervention by the operator. These protocols include a sequence generation step, which defines the location of all plates and labware according to user-specified settings, and a transfer protocol that includes all necessary instrument parameters and instructions for automated solvent extraction of wristband samplers. The protocols were written in Python and uploaded to GitHub for use by others in the research community. Results from this project show it is possible to establish automated and open-source methods for the preparation of silicone wristband samplers to support profiling of many environmental exposures. Ongoing studies include deployment in longitudinal cohort studies to investigate the relationship between personal chemical exposure and disease.
Keywords: bioinformatics, automation, opentrons, research
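The sequence generation step can be illustrated with a small plain-Python sketch that maps each wristband sample to a destination well and a solvent transfer volume. This is a hypothetical layout helper, not the published GitHub code; the well naming, 96-well plate assumption, and volume parameter are illustrative, and the actual Opentrons protocol would consume such a sequence inside its `run()` function.

```python
def generate_sequence(n_samples, solvent_ul, wells_per_plate=96):
    """Assign each wristband sample a destination plate/well and a
    solvent transfer step (hypothetical layout; real deck positions
    depend on the labware loaded on the OT-2)."""
    rows, cols = "ABCDEFGH", range(1, 13)
    wells = [f"{r}{c}" for r in rows for c in cols]  # A1..A12, B1..H12
    steps = []
    for i in range(n_samples):
        steps.append({
            "plate": i // wells_per_plate,       # overflow to next plate
            "well": wells[i % wells_per_plate],
            "volume_ul": solvent_ul,
        })
    return steps
```

A transfer protocol would then iterate over these steps, pipetting `volume_ul` of extraction solvent into each occupied well before vortexing and extraction.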
Procedia PDF Downloads 117
123 Application of Regularized Spatio-Temporal Models to the Analysis of Remote Sensing Data
Authors: Salihah Alghamdi, Surajit Ray
Abstract:
Space-time data can be observed over irregularly shaped manifolds, which may have complex boundaries or interior gaps. Most existing methods do not consider the shape of the data, and as a result, it is difficult to model irregularly shaped data while accommodating the complex domain. We used a method that can deal with space-time data distributed over non-planar regions. The method is based on partial differential equations and finite element analysis. The model can be estimated using a penalized least squares approach with a regularization term that controls over-fitting. The model is regularized using two roughness penalties, which consider the spatial and temporal regularities separately. The integrated square of the second derivative of the basis function is used as the temporal penalty, while the spatial penalty consists of the integrated square of the Laplace operator, integrated exclusively over the domain of interest, which is determined using the finite element technique. In this paper, we applied a spatio-temporal regression model with partial differential equation regularization (ST-PDE) to analyze remote sensing data measuring the greenness of vegetation, quantified by the enhanced vegetation index (EVI). The EVI data consist of measurements taking values between -1 and 1, reflecting the level of greenness of a region over a period of time. We applied the ST-PDE approach to an irregularly shaped region of the EVI data. The approach efficiently accommodates irregularly shaped regions, taking into account the complex boundaries rather than smoothing across them. Furthermore, the approach succeeds in capturing the temporal variation in the data.
Keywords: irregularly shaped domain, partial differential equations, finite element analysis, complex boundary
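In symbols, the penalized least squares criterion described here, with separate spatial and temporal roughness penalties, takes roughly the following form; $\lambda_S$ and $\lambda_T$ denote the two smoothing parameters and $\Omega$ the irregular spatial domain (notation assumed for illustration, not taken from the paper):

```latex
\min_{f}\; \sum_{i=1}^{n} \bigl( z_i - f(\mathbf{p}_i, t_i) \bigr)^2
\;+\; \lambda_S \int_{T}\!\int_{\Omega} \bigl( \Delta f(\mathbf{p}, t) \bigr)^2 \, d\mathbf{p}\, dt
\;+\; \lambda_T \int_{T}\!\int_{\Omega} \Bigl( \tfrac{\partial^2 f}{\partial t^2}(\mathbf{p}, t) \Bigr)^2 d\mathbf{p}\, dt
```

Restricting the spatial integral to $\Omega$, discretized by finite elements, is what prevents the fit from smoothing across boundaries and interior gaps.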
Procedia PDF Downloads 142
122 Weight Estimation Using the K-Means Method in Steelmaking’s Overhead Cranes in Order to Reduce Swing Error
Authors: Seyedamir Makinejadsanij
Abstract:
One of the most important factors in the production of quality steel is knowing the exact weight of steel in the steelmaking area. In this study, a calculation method is presented to estimate the exact weight of the melt as well as of the objects transported by the overhead crane. Iran Alloy Steel Company's steelmaking area has three 90-ton cranes, which are responsible for transferring the ladles and ladle caps between 34 areas in the melt shop. Each crane is equipped with a Disomat Tersus weighing system that calculates and displays the weight in real time. A moving object has a variable reading due to swinging, and the weighing system has an error of about ±5%. This means that when a crane moves an object weighing about 80 tons, the Disomat Tersus system may read about 4 tons more or 4 tons less, and this is the biggest obstacle to calculating the real weight. The k-means algorithm, an unsupervised clustering method, was used here. The best result was obtained with 3 centers: compared with the plain average (one center) and with two, four, five, and six centers, three centers give the best answer, which is logically due to the elimination of noise above and below the real weight. Every day, a standard weight is moved by the working cranes to test and calibrate them. The results show that the error is about 40 kilograms per 60 tons (the standard weight). With this method, the accuracy of the moving weight is therefore calculated as 99.95%. K-means is used to estimate the true mean of the weight readings. The stopping criterion of the algorithm is 1000 iterations or no points moving between the clusters. As a result of implementing this system, the crane operator does not stop while moving objects and continues working regardless of weight calculations. Production speed also increased, and human error decreased.
Keywords: k-means, overhead crane, melt weight, weight estimation, swing problem
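The three-center scheme in the abstract can be sketched as follows: cluster the oscillating weight readings into three groups and take the middle center, discarding the swing noise above and below the real weight. This is a minimal illustration with quantile initialization and simulated readings; the function names and the initialization choice are assumptions, not the authors' implementation.

```python
import numpy as np

def kmeans_1d(x, k=3, iters=1000):
    # Plain 1-D k-means (Lloyd's algorithm), centers initialized at quantiles.
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):  # stop after `iters` repetitions...
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        new = np.array([x[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):  # ...or when no point changes cluster
            return new
        centers = new
    return centers

def estimate_weight(readings, k=3):
    # With 3 clusters, the middle center filters out swing readings that
    # fall above and below the true weight, as described in the abstract.
    centers = np.sort(kmeans_1d(np.asarray(readings, dtype=float), k))
    return centers[k // 2]
```

On simulated readings scattered around a 60-ton load with high and low swing excursions, the middle center lands close to the true weight while a plain average is pulled toward whichever excursion dominates.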
Procedia PDF Downloads 91
121 Design of an Automatic Bovine Feeding Machine
Authors: Huseyin A. Yavasoglu, Yusuf Ziya Tengiz, Ali Göksenli
Abstract:
In this study, an automatic feeding machine for different types and classes of bovine animals is designed. The daily nutrition of a bovine consists of grass, corn, straw, silage, oat, wheat, and different vitamins and minerals. The amount and mixture of each nutrient depend on several parameters of the bovine: age, sex, weight, and maternity, as well as the outside temperature. The problem on a farm is to prepare the correct mixture and amount of nutrition for each animal; faulty nutrition leads to insufficient feeding and an unhealthy bovine. To solve this problem, a new automatic feeding machine is designed. The machine travels on four tires and is pulled by a tractor. The carrier consists of eight bins, each of which holds one nutrient type; the capacity of each unit is 250 kg. At the bottom of each chamber is a sensor measuring the weight of the food inside, and a funnel whose open/close function is controlled by a valve. Each animal carries an RFID tag with its ID on its ear. A receiver on the feeding machine reads this ID and, using information previously entered by the operator (veterinarian), the system determines the amount of each nutrient to be given to the selected animal. Each bin opens its exit gate via its valve under the control of a PLC (Programmable Logic Controller), and the amount of each nutrient is controlled by measuring the open/close time. The exit canals of the bins lead into a reservoir; to achieve a homogeneous nutrition mixture, the collected feed is mixed by a worm gear. The mixture is then transported through a funnel to the feeding unit of the animal. The feeding process can be performed in 100 seconds. After feeding one animal, the tractor pulls the travelling machine to the next animal.
With the help of this system, animals can be fed the right amount and mixture of nutrition.
Keywords: bovine, feeding, nutrition, transportation, automatic
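The time-controlled dispensing described above can be sketched as follows: since each valve meters its nutrient purely by open/close time, the PLC converts a per-animal ration into open durations using each funnel's flow rate. The nutrient names, flow-rate values, and function names below are illustrative assumptions, not specifications from the design.

```python
# Hypothetical flow rate (kg/s) out of each bin's funnel; in the real machine
# these would be calibrated against the bins' weight sensors.
FLOW_KG_PER_S = {"grass": 3.0, "corn": 2.5, "straw": 2.0, "silage": 3.5,
                 "oat": 2.0, "wheat": 2.5, "vitamins": 0.5, "minerals": 0.5}

def valve_open_times(ration_kg):
    """Map each nutrient's required mass to a valve open duration (seconds)."""
    return {n: kg / FLOW_KG_PER_S[n] for n, kg in ration_kg.items()}

def dispense(ration_kg):
    # Open the valves in turn: amount dispensed is controlled purely by
    # open/close time, as in the abstract. Returns the dispensed masses
    # and the total valve-open time for the cycle.
    times = valve_open_times(ration_kg)
    dispensed = {n: t * FLOW_KG_PER_S[n] for n, t in times.items()}
    return dispensed, sum(times.values())
```

A ration looked up from the RFID-keyed table would be passed straight to `dispense`; the returned total time lets the controller check that the cycle fits within the 100-second feeding window.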
Procedia PDF Downloads 342