Search results for: linear complexity

3630 Productivity and Structural Design of Manufacturing Systems

Authors: Ryspek Usubamatov, Tan San Chin, Sarken Kapaeva

Abstract:

The productivity of manufacturing systems depends on the technological processes, the technical data of the machines, and the structure of the system. Technology is represented by the machining modes and data, while the technical data covers reliability parameters and auxiliary time for discrete production processes. The structure of a manufacturing system comprises the number of serial and parallel production machines and the links between them, and it depends on the complexity of the technological processes. Mathematical models of the productivity rate of manufacturing systems make it possible to select the best structure by the criterion of productivity rate. These models are an important tool in evaluating the economic efficiency of production systems.
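
The abstract does not give the models themselves, so the following is only a toy sketch of how a productivity-rate model can rank alternative structures: p parallel lines of q serial stations, with machining time split evenly across stations, a constant auxiliary time, and availability degrading with the total machine count. The formula, names, and numbers are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def productivity_rate(t_machining, t_aux, q, p, failure_rate, mttr):
    """Toy productivity model for p parallel lines of q serial stations.

    Illustrative assumptions only: machining time is split evenly across
    the q serial stations, auxiliary time is constant per part, and every
    machine fails independently at `failure_rate` (1/h) with mean repair
    time `mttr` (h), so availability falls as the machine count q*p grows.
    """
    cycle_time = t_machining / q + t_aux                 # cycle per line, h/part
    availability = 1.0 / (1.0 + q * p * failure_rate * mttr)
    return p * availability / cycle_time                 # parts/h over all lines

# Rank candidate structures for the same process (2.0 h machining, 0.05 h aux)
for q, p in [(1, 1), (2, 2), (4, 2), (8, 1)]:
    rate = productivity_rate(2.0, 0.05, q, p, failure_rate=0.01, mttr=0.5)
    print(f"q={q} serial, p={p} parallel -> {rate:6.2f} parts/h")
```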

Keywords: productivity, structure, manufacturing systems, structural design

Procedia PDF Downloads 585
3629 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition

Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman

Abstract:

Numerous models are used in prediction and decision-making, but most of them are linear, and linear models reach their limits when the data contain non-linearity, making accurate estimation difficult. Artificial Neural Networks (ANNs) have found extensive acceptance in modeling the complex real world of non-linear environments, since ANNs have more general and flexible functional forms than traditional statistical methods. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of crop response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and therefore of the expected crop yield. In recent years, remote sensing has been accepted as a robust tool for site-specific management, detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery to predict wheat chlorophyll in real time. Cloud-free LANDSAT 8 scenes were acquired (Feb-March 2016-17) at the same time as the ground-truthing campaign for chlorophyll estimation was performed using a SPAD-502 meter. Different vegetation indices were derived from the LANDSAT 8 imagery using ERDAS IMAGINE (v. 2014) software for chlorophyll determination: the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI), and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modeling, MATLAB and SPSS (ANN) tools were used; the Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results. Of the data, 61.7% was used for training the MLP, 28.3% for validation, and the remaining 10% for evaluating and validating the ANN model results. For error evaluation, the sum of squares error and the relative error were used. The ANN model summary showed a sum of squares error of 10.786 and an average overall relative error of 0.099. MCARI and NDVI were revealed to be the more sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination, R² = 0.93 and 0.90, respectively. The results suggest that the retrieval of crop chlorophyll content from high spatial resolution satellite imagery using an ANN model provides an accurate, reliable assessment of crop health status at a larger scale, which can help in managing crop nutrition requirements in real time.
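
As a rough illustration of the pipeline described above (indices derived from band reflectances feeding an MLP regressor), the sketch below uses synthetic reflectances and SPAD values; the band data, network size, and data split are illustrative stand-ins, not the study's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Illustrative stand-ins for per-plot LANDSAT 8 reflectances (red, NIR, green)
n = 200
red, nir, green = rng.uniform(0.05, 0.4, (3, n))
ndvi = (nir - red) / (nir + red)                  # Normalized Difference Vegetation Index
gndvi = (nir - green) / (nir + green)             # Green NDVI
spad = 25 + 40 * ndvi + 10 * gndvi + rng.normal(0, 1.5, n)   # synthetic SPAD readings

X = np.column_stack([ndvi, gndvi])
X_tr, X_te, y_tr, y_te = train_test_split(X, spad, test_size=0.3, random_state=0)

# Multilayer perceptron regressor, as in the study (architecture here is arbitrary)
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
mlp.fit(X_tr, y_tr)
print("R^2 on held-out plots:", round(r2_score(y_te, mlp.predict(X_te)), 3))
```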

Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat

Procedia PDF Downloads 146
3628 Interdisciplinary Method Development - A Way to Realize the Full Potential of Textile Resources

Authors: Nynne Nørup, Julie Helles Eriksen, Rikke M. Moalem, Else Skjold

Abstract:

Despite a growing focus on the high environmental impact of textiles, textile waste has only recently been considered part of the waste field. Consequently, there is a general lack of knowledge and data within this field. In particular, the lack of a common perception of textiles generates several problems, e.g., failing to recognize the full material potential the fraction contains, which is crucial if textiles are to enter the circular economy. This study aims to qualify a method for making the resources in textile waste visible in a way that makes it possible to move them as high up the waste hierarchy as possible. Textiles are complex and cover many different types of products, fibers, combinations of fibers, and production methods. In garments alone there is great variety, even when narrowing the scope to undergarments only. However, textile waste is often reduced to one fraction, assessed solely by quantity and compared to the quantities of other waste fractions. Disregarding this complexity and reducing textiles to a single fraction that covers everything made of textiles increases the risk of neglecting the value of the materials, both in terms of their properties and economically. Instead of trying to fit textile waste into the current, primarily linear waste system, where volume is a key part of the business models, this study focused on integrating textile waste as a resource in the design and production phase. The study combined interdisciplinary methods for determining replacement rates used in Life Cycle Assessment and Mass Flow Analysis with the designer's toolbox to activate the properties of textile waste in a way that can unleash its potential optimally. It was hypothesized that by activating Denmark's tradition for design and high level of craftsmanship, it is possible to find solutions that can be used today and to create circular resource models that reduce the use of virgin fibers. Through waste samples, case studies, and testing of various design approaches, this study explored how to operationalize the method so that the product, after end-use, is kept as a material and only then processed at the fiber level to obtain the best environmental utilization. The study showed that the designers' ability to decode the properties of the materials and their understanding of craftsmanship were decisive for how well the materials could be utilized today. The later in the life cycle the textiles appeared as waste, the more demanding the description of the materials had to be, especially to achieve the best possible use of the resources and thus a higher replacement rate. In addition, adaptation of current production was required because the materials often varied more. The study found good indications that part of the solution is to use geodata, i.e., information on where in the life cycle the materials were discarded. An important conclusion is that a fully developed method can help support better utilization of textile resources. However, it still requires a better understanding of materials by designers, as well as structural changes in business and society.

Keywords: circular economy, development of sustainable processes, environmental impacts, environmental management of textiles, environmental sustainability through textile recycling, interdisciplinary method development, resource optimization, recycled textile materials and the evaluation of recycling, sustainability and recycling opportunities in the textile and apparel sector

Procedia PDF Downloads 95
3627 Determinants of Investment in Vaca Muerta, Argentina

Authors: Ivan Poza Martínez

Abstract:

The international energy landscape has been significantly affected by the Covid-19 pandemic and the conflict in Ukraine. The Vaca Muerta sedimentary formation in Argentina's Neuquén province has become a crucial area for energy production, specifically in the shale gas and shale oil sectors. The massive investment required for the exploitation of this reserve makes it essential to understand the determinants of investment in the upstream sector at both local and international levels. The aim of this study is to identify the qualitative and quantitative determinants of investment in Vaca Muerta. The research methodology employs both quantitative (econometric) and qualitative approaches. A linear regression model is used to analyze the impact on non-conventional hydrocarbons. The study highlights that, in addition to quantitative factors, qualitative variables, particularly the design of a regulatory framework, significantly influence the level of investment in Vaca Muerta. The analysis reveals the importance of attracting both domestic and foreign capital investment. This research contributes to understanding the factors influencing investment in the Vaca Muerta region compared to other published studies. It emphasizes the role of qualitative variables, such as regulatory frameworks, in the development of the shale gas and oil sectors. The study uses a combination of quantitative data, such as investment figures, and qualitative data, such as regulatory frameworks. The data are collected from various reports and industry publications. The linear regression model is used to analyze the relationship between these variables and investment in Vaca Muerta. The research addresses the question of what factors drive investment in the Vaca Muerta region, from both a quantitative and a qualitative perspective. The study concludes that a combination of quantitative and qualitative factors, including the design of a regulatory framework, plays a significant role in attracting investment in Vaca Muerta. It highlights the importance of these determinants in the development of the local energy sector and the potential economic benefits for Argentina and the Southern Cone region.

Keywords: vaca muerta, FDI, shale gas, shale oil, YPF

Procedia PDF Downloads 57
3626 A Design System for Complex Profiles of Machine Members Using a Synthetic Curve

Authors: N. Sateesh, C. S. P. Rao, K. Satyanarayana, C. Rajashekar

Abstract:

This paper proposes the development of a CAD/CAM system for complex profiles of various machine members using a synthetic curve, the B-spline. Conventional methods of designing and manufacturing complex profiles are tedious and time-consuming, and even programming them on a computer numerical control (CNC) machine can be difficult because of the complexity of the profiles. The developed system provides a graphical and numerical representation of the B-spline profile for any given input. In this paper, the system is applied to represent a cam profile with a B-spline, and an attempt is made to improve the follower motion.
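
The following is a minimal sketch of how a cam displacement profile can be represented with a clamped cubic B-spline, assuming scipy; the control points and the R-D-R-D lift values are invented for illustration. Smooth higher derivatives of the spline are what improving the follower motion relies on.

```python
import numpy as np
from scipy.interpolate import BSpline

# Illustrative rise-dwell-return-dwell (R-D-R-D) lift targets, not taken from
# the paper: control polygon for follower lift (mm) over one cam revolution.
degree = 3
ctrl = np.array([0, 0, 8, 12, 12, 12, 8, 0, 0], dtype=float)
# Clamped knot vector so the curve starts/ends exactly at the end control points
n = len(ctrl)
knots = np.concatenate([np.zeros(degree),
                        np.linspace(0, 360, n - degree + 1),
                        np.full(degree, 360)])
lift = BSpline(knots, ctrl, degree)

theta = np.linspace(0, 360, 721)          # cam angle in degrees
s = lift(theta)                           # follower displacement profile
v = lift.derivative(1)(theta)             # ds/dtheta, continuous for a cubic
a = lift.derivative(2)(theta)             # d2s/dtheta2, continuity smooths the motion
print("max lift %.2f mm, peak |ds/dtheta| %.4f mm/deg" % (s.max(), abs(v).max()))
```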

Keywords: plate-cams, cam profile, b-spline, computer numerical control (CNC), computer aided design and computer aided manufacturing (CAD/CAM), R-D-R-D (rise-dwell-return-dwell)

Procedia PDF Downloads 611
3625 Integrated Mass Rapid Transit System for Smart City Project in Western India

Authors: Debasis Sarkar, Jatan Talati

Abstract:

This paper is an attempt to develop an Integrated Mass Rapid Transit System (MRTS) for a smart city project in Western India. Integrated transportation is one of the enablers of smart transportation, providing a seamless intercity as well as regional-level transportation experience. The success of a smart city project at the city level for transportation lies in properly integrating the different mass rapid transit modes by integrating information, physical infrastructure, route networks, fares, etc. The methodology adopted for this study was primary data research through a questionnaire survey. The respondents gave their perceptions of the ways and means of improving public transport services in urban cities. They were also asked to identify the factors and attributes that might motivate more people to shift towards the public mode, and the factors which they feel might restrain the integration of the various modes of MRTS. Furthermore, this study develops a utility equation for respondents with the help of multiple linear regression analysis, estimating the probability of a shift to public transport for the factors listed in the questionnaire. It was observed that the most important factors for shifting to public transport were travel time savings and comfort rating. Also, an integrated MRTS can be obtained by combining metro rail with BRTS, metro rail with monorail, monorail with BRTS, and metro rail with Indian Railways. Providing a common smart card that gives transport users access to all the available modes would be a pragmatic step towards integration of the available modes of MRTS.
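
A minimal sketch of the utility-equation step, assuming synthetic survey ratings; the factor names, scales, and coefficients are illustrative, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Illustrative survey-style data (not the study's): ratings on a 5-point scale
n = 150
time_saving = rng.integers(1, 6, n)      # perceived travel-time saving
comfort     = rng.integers(1, 6, n)      # perceived comfort of public transport
cost        = rng.integers(1, 6, n)      # perceived fare affordability
# Stated willingness to shift to public transport (0..1), synthetic
shift = np.clip(0.1 + 0.12 * time_saving + 0.10 * comfort + 0.03 * cost
                + rng.normal(0, 0.08, n), 0, 1)

X = np.column_stack([time_saving, comfort, cost])
model = LinearRegression().fit(X, shift)

# Fitted utility equation: U = b0 + b1*time_saving + b2*comfort + b3*cost
print("intercept:", round(model.intercept_, 3))
print("coefficients (time, comfort, cost):", model.coef_.round(3))
print("predicted shift for ratings (5, 4, 3):", model.predict([[5, 4, 3]]).round(3))
```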

Keywords: mass rapid transit systems, smart city, metro rail, bus rapid transit system, multiple linear regression, smart card, automated fare collection system

Procedia PDF Downloads 271
3624 Scenarios of Societal Security and Business Continuity Cycles

Authors: Jiří F. Urbánek, Jiří Barta

Abstract:

This article deals with societal security, continuity scenarios, and a methodological cycling approach. Societal security poses organizational challenges that call for the implementation of the international standard BS 25999-2 and the global ISO 22300 family of standards for business continuity management systems. An efficient global organizational system is distinguished by the high complexity, connectivity, and interoperability of its entities, whose relations are in fact not only cooperative. Competing businesses face numerous 'enemies' that play openly or covertly opposing and antagonistic roles towards prosperous organizational systems, which can lead to a crisis scene or even a battle theater. Organizational business continuity scenarios are necessary for the preparedness, planning, management, and mastery of such situations in real environments.

Keywords: business continuity, societal security, crisis scenarios cycles, interoperability

Procedia PDF Downloads 385
3623 Reconstruction and Rejection of External Disturbances in a Dynamical System

Authors: Iftikhar Ahmad, A. Benallegue, A. El Hadri

Abstract:

In this paper, we propose an observer for the reconstruction, and a control law for the rejection, of an unknown bounded external disturbance in a dynamical system. Both the observer and the controller are designed as a second-order sliding mode with a proportional-integral (PI) term. Lyapunov theory is used to prove exponential convergence and stability. Simulation results are given to show the performance of this method.
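
Since the paper's exact observer is not given in the abstract, here is a hedged sketch of a second-order sliding-mode (super-twisting) disturbance observer on a first-order plant; the plant, gains, and disturbance are illustrative assumptions, not the authors' design.

```python
import numpy as np

# Plant (illustrative): x_dot = -a*x + u + d(t), with unknown bounded d(t)
a, dt, T = 1.0, 1e-3, 10.0
t = np.arange(0, T, dt)
d = 0.8 * np.sin(2.0 * t)                 # unknown disturbance to reconstruct

# Super-twisting observer, e = x - x_hat:
#   x_hat_dot = -a*x_hat + u + k1*sqrt(|e|)*sign(e) + z
#   z_dot     = k2*sign(e)        -> z converges to d(t)
k1, k2 = 5.0, 50.0                        # gains chosen for the |d_dot| bound (illustrative)
x = x_hat = z = 0.0
d_hat = np.zeros_like(t)
for i in range(len(t)):
    u = 0.0                               # open loop suffices to show reconstruction
    e = x - x_hat
    x += dt * (-a * x + u + d[i])
    x_hat += dt * (-a * x_hat + u + k1 * np.sqrt(abs(e)) * np.sign(e) + z)
    z += dt * (k2 * np.sign(e))
    d_hat[i] = z                          # disturbance estimate

print("steady-state estimation error:", np.abs(d[-2000:] - d_hat[-2000:]).max())
```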

Keywords: non-linear systems, sliding mode observer, disturbance rejection, nonlinear control

Procedia PDF Downloads 334
3622 Lightweight Synergy IoT Framework for Smart Home Healthcare for the Elderly

Authors: Huawei Ma, Wencai Du, Shengbin Liang

Abstract:

Smart home healthcare technologies for the elderly represent a transformative paradigm that leverages emerging technologies to monitor the elderly's health indicators and daily life, and to provide emergency calls, environmental monitoring, behavior perception, and other services that ensure the health and safety of the elderly who are aging in their own homes. However, the excessive complexity of the commonly adopted frameworks has hindered acceptance and adoption by the elderly. Therefore, this paper proposes a lightweight synergy architecture of IoT data and services for the smart home health environment of the elderly. It includes the modeling of IoT applications and their workflows, together with paradigms for data interoperability, interaction, and storage, to meet the growing needs of older people so that they can lead an active, fulfilling, and quality life.

Keywords: smart home healthcare, IoT, independent living, lightweight framework

Procedia PDF Downloads 53
3621 The Effects of the Waste Plastic Modification of the Asphalt Mixture on the Permanent Deformation

Authors: Soheil Heydari, Ailar Hajimohammadi, Nasser Khalili

Abstract:

The application of plastic waste for asphalt modification is a sustainable strategy to deal with the enormous amount of plastic waste generated each year and to enhance the properties of asphalt. The modification is practiced either by the dry process or by the wet process. In the dry process, plastics are added straight into the asphalt mixture; in the wet process, they are mixed and digested into the bitumen. In this article, the effects of plastic inclusion in the asphalt mixture, through the dry process, on the permanent deformation of the asphalt are investigated. The main waste plastics usually used in asphalt modification are considered: linear low-density polyethylene, low-density polyethylene, high-density polyethylene, and polypropylene. Also, to simulate a plastic waste stream, different grades of each virgin plastic are mixed and used; for instance, four different grades of polypropylene are mixed and used as representative of polypropylene. A precisely designed mixing condition is used to dry-mix the plastics into the mixture such that the polymer melts and is modified by the binder introduced later. In this mixing process, plastics are first added to the hot aggregates and mixed three times at different time intervals; then bitumen is introduced, and the whole mixture is mixed three times at fifteen-minute intervals. Marshall specimens were manufactured, and dynamic creep tests were conducted to evaluate the effects of modification on the permanent deformation of the asphalt mixture. Dynamic creep is a common repeated-loading test conducted at different stress levels and temperatures: loading cycles are applied to the AC specimen until failure occurs, and with the amount of deformation constantly recorded, the cumulative permanent strain is determined and reported as a function of the number of cycles. The results of this study showed that the dry inclusion of waste plastics is very effective in enhancing the resistance of the mixture against permanent deformation. However, the mixing process must be precisely engineered to melt the plastics and achieve a homogeneous mixture.

Keywords: permanent deformation, waste plastics, low-density polyethylene, high-density polyethylene, polypropylene, linear low-density polyethylene, dry process

Procedia PDF Downloads 88
3620 Gender, Age, and Race Differences in Self-Reported Reading Attitudes of College Students

Authors: Jill Villarreal, Kristalyn Cooksey, Kai Lloyd, Daniel Ha

Abstract:

Little research has been conducted to examine college students' reading attitudes, including students' perceptions of their reading behaviors and reading abilities. This is problematic, as reading assigned course material is a critical component of an undergraduate student's academic success. For this study, flyers were electronically disseminated to instructors at 24 public and 10 private U.S. institutions in “Reading-Intensive Departments” including Psychology, Sociology, Education, Business, and Communications. We requested that the online survey be completed as an in-class activity during the fall 2019 and spring 2020 semesters. All participants voluntarily completed the questionnaire anonymously. Of the participants, 280 self-identified their race as Black and 280 as White; 177 self-identified their gender as male and 383 as female. Participants ranged in age from 18 to 24. Factor analysis found four dimensions in the responses to the questions regarding reading. The first, which we interpret as “Reading Proficiency”, accounted for 19% of the variability. The second dimension was “Reading Anxiety” (15%), the third was “Textbook Reading Ability” (9%), and the fourth was “Reading Enjoyment” (8%). Linear models on each of these dimensions revealed no effect of age, gender, race, or income on “Reading Proficiency”. The linear model of “Reading Anxiety” showed a significant effect of race (p = 0.02), with higher anxiety in White students, as well as higher reading anxiety in female students (p < 0.001). The model of “Textbook Reading Ability” found a significant effect of race (p < 0.001), with more textbook problems reported by White students. The model of “Reading Enjoyment” showed significant effects of race (p = 0.013), with more enjoyment for White students; gender (p = 0.001), with higher enjoyment for female students; and age (p = 0.033), with older students showing higher enjoyment. These findings suggest that gender, age, and race are important factors in many aspects of college students' reading attitudes. Further research will investigate possible causes for these differences. In addition, the effectiveness of college-level programs to reduce reading anxiety, promote the reading of textbooks, and foster a love of reading will be assessed.

Keywords: age, college, gender, race, reading

Procedia PDF Downloads 152
3619 Modbus Gateway Design Using Arm Microprocessor

Authors: Semanur Savruk, Onur Akbatı

Abstract:

The integration of various communication protocols into an automation system raises setup and maintenance costs and makes network devices difficult to control. A gateway becomes necessary to reduce the complexity of the network topology. In this study, the design and implementation of a Modbus RTU/Modbus TCP industrial Ethernet gateway are presented, based on an ARM embedded system and the FreeRTOS real-time operating system. The gateway enables communication between Modbus RTU and Modbus TCP devices, and it can be configured through a user-interface application or a messaging interface. The conducted experiments and their results are presented in the paper. Overall, the proposed system is a complete, low-cost, real-time, and user-friendly design for monitoring and configuring devices, useful for remote control purposes.
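
As an illustration of the protocol translation such a gateway performs, the sketch below converts a Modbus TCP ADU to a Modbus RTU ADU (strip the MBAP header, keep the unit id and PDU, append the CRC-16). It reflects the standard Modbus framing rules, not the paper's firmware; the example request is invented.

```python
def crc16_modbus(frame: bytes) -> bytes:
    """Modbus RTU CRC-16 (poly 0xA001, init 0xFFFF), appended little-endian."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc.to_bytes(2, "little")

def tcp_to_rtu(adu: bytes) -> bytes:
    """Convert a Modbus TCP ADU to an RTU ADU.

    TCP framing: MBAP header (transaction id, protocol id, length, unit id)
    followed by the PDU.  RTU framing: unit address + PDU + CRC-16.
    """
    unit_id, pdu = adu[6], adu[7:]
    rtu = bytes([unit_id]) + pdu
    return rtu + crc16_modbus(rtu)

# Example: "read 2 holding registers starting at 0" addressed to unit 0x11
tcp_frame = bytes.fromhex("000100000006" "11" "03" "0000" "0002")
print(tcp_to_rtu(tcp_frame).hex())   # unit + PDU + CRC, ready for the serial line
```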

Keywords: gateway, industrial communication, modbus, network

Procedia PDF Downloads 141
3618 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains

Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe

Abstract:

The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improving productivity, handling the increasing time and cost pressure, and meeting the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits built into current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies, and there is no practicable framework that guides the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists; this link is based on factors such as people, technology, and organization. The method introduced in this paper for the determination of digitally pervasive value chains takes the factors people, technology, and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step, 'target definition', describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail. The second step, 'analysis of the value chain', verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the 'digital evaluation process' ensures the usefulness of digital adaptations with regard to their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. The validation and optimization of the proposed method in a German company from the electronics industry shows that the digital transformation of current value chains based on lean production raises their inbuilt performance limits.

Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain

Procedia PDF Downloads 313
3617 A Novel Design Methodology for a 1.5 kW DC/DC Converter in EV and Hybrid EV Applications

Authors: Farhan Beg

Abstract:

This paper presents a method for the efficient implementation of a unidirectional or bidirectional DC/DC converter. The DC/DC converter is used essentially for energy exchange between the low-voltage service battery and the high-voltage battery commonly found in electric vehicles. In these applications, apart from cost, design efficiency is an important characteristic. A useful way to reduce the size of the electronic equipment in electric vehicles is proposed in this paper: the technique simplifies the mechanical complexity and maximizes energy usage using the latest converter control techniques. Moreover, a bidirectional battery charger for hybrid electric vehicles is also implemented. Several simulations of the test system have been carried out in the Matlab/Simulink environment. The results exemplify the robustness of the proposed design methodology in the case of a 1.5 kW DC/DC converter.

Keywords: DC-DC converters, electric vehicles, power electronics, direct current control

Procedia PDF Downloads 727
3616 Supply Chain Fit and Firm Performance: The Role of the Environment

Authors: David Gligor

Abstract:

The purpose of this study was to build on Fisher's (1997) seminal article. First, it sought to determine how companies can achieve supply chain fit (i.e., match between the products' characteristics and the underlying supply chain design). Second, it attempted to develop a better understanding of how environmental conditions impact the relationship between supply chain fit and performance. The findings indicate that firm supply chain agility allows organizations to quickly adjust the structure of their supply chains and therefore, achieve supply chain fit. In addition, archival and survey data were used to explore the moderating effects of six environmental uncertainty dimensions: munificence, market dynamism, technological dynamism, technical complexity, product diversity, and geographic dispersion. All environmental variables, except technological dynamism, were found to impact the relationship between supply chain fit and firm performance.

Keywords: supply chain fit, environmental uncertainty, supply chain agility, management engineering

Procedia PDF Downloads 599
3615 Development of the Family Capacity of Management of Patients with Autism Spectrum Disorder Diagnosis

Authors: Marcio Emilio Dos Santos, Kelly C. F. Dos Santos

Abstract:

Caregivers of patients diagnosed with ASD are subjected to high-stress situations due to the complexity and multiple levels of daily activities, which require the organization of events, behaviors, and socioemotional situations, including immediate decision-making in public spaces. The cognitive and emotional capacity needed to fulfill this caregiving role exceeds the regular cultural preparation adults receive for conjugal and parental life. Therefore, in many cases, caregivers present a high level of overload and a poor capacity to organize and mediate the development process of the child or patient in their care. Aims: Improvement in the cognitive and emotional capacities related to the caregiver function, allowing a reduction of the overload, the feeling of incompetence, and the characteristic level of stress, and developing more organized conduct and decision-making oriented towards the objectives and procedural gains necessary for the integral development of the patient with a diagnosis of ASD. Method: The study was performed with 20 relatives, randomly selected from the families of a total of 140 patients attended. The family members were given the Wechsler Adult Intelligence Scale III intelligence test and the Family Assessment Management Measure (FaMM) questionnaire as a previous evaluation. Therapeutic activity took place in small groups of family members or caregivers, weekly, with a minimum workload of two hours, using the Feuerstein Instrumental Enrichment cognitive development program for ten months. The previous tests were then reapplied to verify the gains obtained. Results and Discussion: There was a change in the level of caregiver overload, an improvement in the results of the Family Assessment Management Measure, and, notably, an increase in performance in the cognitive aspects related to problem solving, planned behavior, and the management of behavioral crises. These results point to the need to invest in the integrated care of patients and their caregivers, mainly by equipping caregivers cognitively to deal with the complexity of autism. This goes beyond simple therapeutic orientation about adjustments in family and school routines. The study showed that when caregivers improve their capacity for management, the results of the treatment are potentiated and the caregiver's level of overload is reduced. Importantly, the study ran for only ten months, and the number of family members attended in the study (n = 20) needs to be expanded to obtain statistical strength.

Keywords: caregiver overload, cognitive development program ASD caregivers, feuerstein instrumental enrichment, family assessment management measure

Procedia PDF Downloads 129
3614 Dosimetric Application of α-Al2O3:C for Food Irradiation Using TA-OSL

Authors: A. Soni, D. R. Mishra, D. K. Koul

Abstract:

α-Al2O3:C has been reported to have deep traps at 600°C and 900°C. These traps have been reported to be accessed at relatively lower temperatures (122°C and 322°C, respectively) using thermally assisted OSL (TA-OSL). In this work, the dose response of α-Al2O3:C was studied in the range of 10 Gy to 10 kGy for its application to food irradiation in the low (up to 1 kGy) and medium (1 to 10 kGy) dose ranges. The TOL (thermo-optically stimulated luminescence) measurements were carried out on a Risø TL/OSL system (TL-DA-15) with blue light-emitting diodes (λ = 470 ± 30 nm) as the stimulation source, with the power level set at 90% of the maximum stimulation intensity of the blue LEDs (40 mW/cm²). The observations were carried out on a commercial α-Al2O3:C phosphor. The TOL experiments were carried out with 300 active channels and one inactive channel. Using these settings, the sample is subjected to linear thermal heating and constant optical stimulation. The detection filter used in all observations was a Hoya U-340 (peak ~340 nm, FWHM ~80 nm). Irradiation of the samples was carried out using the 90Sr/90Y β-source housed in the system. A heating rate of 2°C/s was preferred in the TL measurements so as to reduce the temperature lag between the heater plate and the samples. To study the dose response of the deep traps of α-Al2O3:C, samples were irradiated with various doses ranging from 10 Gy to 10 kGy; for each dose, three samples were irradiated. In order to record the TA-OSL, TL was initially recorded up to a temperature of 400°C to deplete the signal due to the 185°C main dosimetric TL peak in α-Al2O3:C, which is also associated with the basic OSL traps. After the TL readout, the sample was subsequently subjected to the TOL measurement. As a result, two well-defined TA-OSL peaks at 121°C and 232°C occur in both the time and temperature domains; these are distinct from the main dosimetric TL peak, which occurs at ~185°C. The integrated TOL signal was measured as a function of absorbed dose and found to be linear up to 10 kGy. Thus, it can be used in the low and intermediate dose ranges for food irradiation applications. The deep energy-level defects of the α-Al2O3:C phosphor can be accessed using the TOL facility of the Risø reader system.

Keywords: α-Al2O3:C, deep traps, food irradiation, TA-OSL

Procedia PDF Downloads 300
3613 Algorithmic Fault Location in Complex Gas Networks

Authors: Soban Najam, S. M. Jahanzeb, Ahmed Sohail, Faraz Idris Khan

Abstract:

With the recent increase in reliance on gas as a primary source of energy across the world, a great deal of research has been conducted on gas distribution networks. As the complexity and size of these networks grow, so does the leakage of gas in the distribution network. One of the most crucial factors in the production and distribution of gas is UFG, or Unaccounted-for Gas: the presence of UFG signifies a difference between the amount of gas distributed and the amount of gas billed. Our approach is to use information acquired from several specified points in the network. This information is used to calculate the loss occurring in the network using the developed algorithm. The algorithm can also identify leakage at any point in the pipeline, so faults can be detected and rectified with minimal time, effort, and resources.
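
The abstract does not disclose the algorithm itself, so the sketch below only illustrates the underlying idea: compute UFG as the injection-billing difference and flag pipeline segments whose metered flow drop exceeds normal tolerance. Meter positions, readings, and the tolerance are invented for illustration.

```python
# Hedged sketch: locating loss between metering points on a single pipeline.
# Meter positions (km) and flow readings (m^3/h) are invented; the paper's
# actual algorithm and data are not given in the abstract.
meters = [
    (0.0, 1000.0),   # injection point
    (5.0, 998.0),
    (12.0, 940.0),   # large drop after km 5 -> suspect segment
    (20.0, 938.0),
]

def unaccounted_for_gas(injected: float, billed: float) -> float:
    """UFG: difference between gas put into the network and gas billed."""
    return injected - billed

def suspect_segments(readings, tol=5.0):
    """Flag segments whose flow drop exceeds normal metering tolerance."""
    flags = []
    for (x0, q0), (x1, q1) in zip(readings, readings[1:]):
        loss = q0 - q1
        if loss > tol:
            flags.append((x0, x1, loss))
    return flags

print("UFG:", unaccounted_for_gas(injected=1000.0, billed=910.0), "m^3/h")
for x0, x1, loss in suspect_segments(meters):
    print(f"possible leak between km {x0} and km {x1}: {loss} m^3/h lost")
```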

Keywords: FLA, fault location analysis, GDN, gas distribution network, GIS, geographic information system, NMS, network management system, OMS, outage management system, SSGC, Sui Southern Gas Company, UFG, unaccounted-for gas

Procedia PDF Downloads 627
3612 Growing Pains and Organizational Development in Growing Enterprises: Conceptual Model and Its Empirical Examination

Authors: Maciej Czarnecki

Abstract:

Even though growth is one of the most important strategic objectives for many enterprises, we know relatively little about this phenomenon. This research contributes to broadening our knowledge of the managerial consequences of growth. Scales for measuring organizational development and growing pains were developed, and a conceptual model of the connections among growth, organizational development, growing pains, selected development factors, and financial performance was examined. The research process comprised a literature review, 20 interviews with managers, an examination of 12 raters' opinions, pilot research, and a 7-point Likert-scale questionnaire survey of 138 Polish enterprises employing 50-249 people which had increased their employment by at least 50% within the last three years. Factor analysis, the Pearson product-moment correlation coefficient, Student's t-test, and the chi-squared test were used to develop the scales; high Cronbach's alpha coefficients were obtained. The verification of correlations among the constructs was carried out with factor correlations, multiple regressions, and path analysis. When an enterprise grows, it is necessary to implement changes in its structure, management practices, etc. (organizational development) to meet the challenges of growing complexity. In this paper, organizational development was defined as internal changes aiming to improve the quality of existing elements, or to introduce new ones, in the areas of processes, organizational structure and culture, and operational and management systems. Thus: H1: Growth has positive effects on organizational development. The main thesis of the research is that if organizational development does not catch up with the growing complexity of a growing enterprise, growing pains will arise (lower work comfort, conflicts, lack of control, etc.). They will exert a negative influence on financial performance and may result in a serious organizational crisis or even bankruptcy. Thus: H2: Growth has positive effects on growing pains; H3: Organizational development has negative effects on growing pains; H4: Growing pains have negative effects on financial performance; H5: Organizational development has positive effects on financial performance. Scholars have considered long lists of factors with a potential influence on organizational development; developing a comprehensive model taking into account all possible variables may be beyond the capacity of any researcher or even of the statistical software used. After the literature review, it was decided to increase the level of abstraction and to include the following constructs in the conceptual model: organizational learning (OL), positive organization (PO), and high performance factors (HPF). H1a/b/c: OL/PO/HPF has positive effects on organizational development; H2a/b/c: OL/PO/HPF has negative effects on growing pains. The results of the hypothesis testing: H1: partly supported; H1a/b/c: supported/not supported/supported; H2: not supported; H2a/b/c: not supported/partly supported/not supported; H3: supported; H4: partly supported; H5: supported. The research seems to be of great value for both scholars and practitioners. It proved that OL and HPF matter for organizational development, and scales for measuring organizational development and growing pains were developed. Its main finding, though, is that organizational development is a good way of improving financial performance.
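
As a small illustration of the scale-development step mentioned above, the sketch below computes Cronbach's alpha for a synthetic 7-point Likert item matrix; the item data are invented, not the study's.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) Likert matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Invented 7-point Likert responses for a 4-item "growing pains" scale
rng = np.random.default_rng(2)
latent = rng.normal(4, 1, size=(138, 1))                   # shared construct
items = np.clip(np.rint(latent + rng.normal(0, 0.7, (138, 4))), 1, 7)
print("Cronbach's alpha:", round(cronbach_alpha(items), 3))
```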

Keywords: organizational development, growth, growing pains, financial performance

Procedia PDF Downloads 219
3611 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice (once for model estimation and once for testing), a bias correction which penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS), and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to exact LOO-CV; they utilise the existing MCMC results and thereby avoid expensive computation. The reciprocals of the predictive densities calculated over the posterior draws for each observation are treated as the raw importance weights, which are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly; in contrast, the larger weights are replaced by modified truncated weights when calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect the goodness-of-fit in an absolute sense, their differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered approximations of exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of the pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles for the models, conditional on equal posterior variances in the lppds, were observed. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, together with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
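
A compact sketch of WAIC and importance-sampling LOO computed from a matrix of pointwise log-likelihood draws; the simulated matrix and the truncation rule shown (TIS at sqrt(S) times the mean weight) are illustrative, and PSIS's Pareto smoothing is omitted for brevity.

```python
import numpy as np

# `loglik` is an (S draws, N observations) matrix of pointwise log-likelihoods
# from MCMC; here it is simulated, since the forensic stutter data are not public.
rng = np.random.default_rng(3)
S, N = 4000, 100
loglik = rng.normal(-1.0, 0.3, (S, N))

def log_mean_exp(a, axis=0):
    m = a.max(axis=axis)
    return m + np.log(np.mean(np.exp(a - m), axis=axis))

# WAIC: lppd minus the effective-parameters penalty p_waic
lppd = log_mean_exp(loglik, axis=0).sum()
p_waic = loglik.var(axis=0, ddof=1).sum()
waic = -2 * (lppd - p_waic)

# IS-LOO: raw importance weights are reciprocals of the predictive densities
logw = -loglik                                   # log of 1/p(y_i | theta_s)
w = np.exp(logw - logw.max(axis=0))              # stabilised per observation
# TIS-LOO: truncate the largest raw weights (here at sqrt(S) * mean weight)
w_tis = np.minimum(w, np.sqrt(S) * w.mean(axis=0))
elpd_loo_tis = np.log((w_tis * np.exp(loglik)).sum(0) / w_tis.sum(0)).sum()

print(f"WAIC {waic:.1f}  (lppd {lppd:.1f}, p_waic {p_waic:.1f})")
print(f"TIS-LOO elpd {elpd_loo_tis:.1f}")
```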

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 392
3610 Chemometric Regression Analysis of Radical Scavenging Ability of Kombucha Fermented Kefir-Like Products

Authors: Strahinja Kovacevic, Milica Karadzic Banjac, Jasmina Vitas, Stefan Vukmanovic, Radomir Malbasa, Lidija Jevric, Sanja Podunavac-Kuzmanovic

Abstract:

The present study deals with chemometric regression analysis of the quality parameters and the radical scavenging ability of kombucha-fermented kefir-like products obtained with winter savory (WS), peppermint (P), stinging nettle (SN), and wild thyme (WT) tea kombucha inoculums. Each analyzed sample was described by its milk fat content (MF, %), total unsaturated fatty acids content (TUFA, %), monounsaturated fatty acids content (MUFA, %), polyunsaturated fatty acids content (PUFA, %), free-radical scavenging ability against DPPH and hydroxyl radicals (RSA-DPPH, % and RSA-OH, %), and pH values measured every hour from the start until the end of fermentation. The aim of the regression analysis was to establish chemometric models which can predict the radical scavenging ability (RSA-DPPH, % and RSA-OH, %) of the samples by correlating it with the MF, TUFA, MUFA, and PUFA contents and the pH value at the beginning, in the middle, and at the end of the fermentation process, which lasted between 11 and 17 hours, until a pH value of 4.5 was reached. The analysis was carried out by applying univariate linear regression (ULR) and multiple linear regression (MLR) methods to the raw data and to data standardized by the min-max normalization method. The obtained models were characterized by very limited prediction power (poor cross-validation parameters) and weak statistical characteristics. Based on the conducted analysis, it can be concluded that the resulting radical scavenging ability cannot be precisely predicted only on the basis of the MF, TUFA, MUFA, and PUFA contents and pH values; other quality parameters should be considered and included in further modeling. This study is based upon work from the project 'Kombucha beverages production using alternative substrates from the territory of the Autonomous Province of Vojvodina', 142-451-2400/2019-03, supported by the Provincial Secretariat for Higher Education and Scientific Research of AP Vojvodina.
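
A minimal sketch of the modeling route described above (min-max normalization followed by MLR, scored by cross-validation), using invented sample data since the study's measurements are not reproduced in the abstract.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Invented stand-in data: 20 samples described by MF, TUFA, MUFA, PUFA and pH,
# with RSA-DPPH (%) as the response.
rng = np.random.default_rng(4)
X = rng.uniform([1.0, 20, 15, 2, 4.4], [3.5, 40, 30, 8, 6.5], size=(20, 5))
y = rng.uniform(20, 80, size=20)

# Min-max normalization followed by multiple linear regression (MLR)
model = make_pipeline(MinMaxScaler(), LinearRegression())
scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                         scoring="neg_mean_squared_error")
press = -scores.sum()                      # predictive residual sum of squares
ss_tot = ((y - y.mean()) ** 2).sum()
print("cross-validated Q^2:", round(1 - press / ss_tot, 3))
```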

Keywords: chemometrics, regression analysis, kombucha, quality control

Procedia PDF Downloads 142
3609 Challenges in Environmental Governance: A Case Study of Risk Perceptions of Environmental Agencies Involved in Flood Management in the Hawkesbury-Nepean Region, Australia

Authors: S. Masud, J. Merson, D. F. Robinson

Abstract:

The management of environmental resources requires the engagement of a range of stakeholders, including public and private agencies and different community groups, to implement sustainable conservation practices. A challenge which is often ignored is the analysis of the agencies involved and their power relations. One of the barriers identified is the difference in risk perceptions among the agencies involved, which leads to disjointed efforts in assessing and managing risks. Wood et al. (2012) explain that it is important to have an integrated approach to risk management in which decision-makers address stakeholder perspectives; this is critical for an effective risk management policy. This abstract is part of PhD research that looks into barriers to flood management under a changing climate and intends to identify bottlenecks that create maladaptation. Experiences are drawn from international practices in the UK and examined in the context of Australia by exploring flood governance in a highly flood-prone region, the Hawkesbury-Nepean catchment, as a case study. In this research, several aspects of governance and management are explored: (i) the complexities created by the way different agencies are involved in assessing flood risks; (ii) different perceptions of the acceptable flood risk level; (iii) perceptions of community engagement in defining the acceptable flood risk level; (iv) views on a holistic flood risk management approach; and (v) challenges of a centralised information system. The study concludes that the complexity of managing a large catchment is exacerbated by differences in the way professionals perceive the problem. This has led to: (a) different standards for acceptable risks; (b) inconsistent attempts to set up a regional-scale flood management plan beyond jurisdictional boundaries; (c) the absence of a regional-scale agency with a licence to share and update information; and (d) a lack of forums for dialogue with insurance companies to ensure an integrated approach to flood management. The research takes the Hawkesbury-Nepean catchment as a case example and draws on literary evidence from around the world. In addition, conclusions were extrapolated from eighteen semi-structured interviews with agencies involved in flood risk management in the Hawkesbury-Nepean catchment of NSW, Australia. The outcome of this research is a better understanding of the complexity of assessing risks against a rapidly changing climate, contributing towards the development of effective risk communication strategies and thus enabling better management of floods and an increased level of support from insurance companies, real-estate agencies, state and regional risk managers, and the affected communities.

Keywords: adaptive governance, flood management, flood risk communication, stakeholder risk perceptions

Procedia PDF Downloads 286
3608 Simultaneous Determination of Methotrexate and Aspirin Using Fourier Transform Convolution Emission Data under Non-Parametric Linear Regression Method

Authors: Marwa A. A. Ragab, Hadir M. Maher, Eman I. El-Kimary

Abstract:

Co-administration of methotrexate (MTX) and aspirin (ASP) can cause a pharmacokinetic interaction and a subsequent increase in blood MTX concentrations, which may increase the risk of MTX toxicity. Therefore, it is important to develop a sensitive, selective, accurate, and precise method for their simultaneous determination in urine. A new hybrid chemometric method has been applied to the emission response data of the two drugs. A spectrofluorimetric method for the determination of MTX through measurement of its acid-degradation product, 4-amino-4-deoxy-10-methylpteroic acid (4-AMP), was developed. Moreover, the acid-catalyzed degradation reaction enables the spectrofluorimetric determination of ASP through the formation of its active metabolite, salicylic acid (SA). The proposed chemometric method deals with the convolution of emission data using 8-point sin xi polynomials (discrete Fourier functions) after derivative treatment of these emission data. The first and second derivative curves (D1 and D2) were obtained first, and these curves were then convolved to obtain the first and second derivatives under Fourier functions (D1/FF and D2/FF). This new application was used to resolve the overlapping emission bands of the degradation products of both drugs, allowing their simultaneous indirect determination in human urine. Not only was this chemometric approach applied to the emission data, but the obtained data were also subjected to non-parametric linear regression analysis (Theil's method). The proposed method was fully validated according to the ICH guidelines, and it yielded linearity ranges of 0.05-0.75 and 0.5-2.5 µg/mL for MTX and ASP, respectively. It was found that the non-parametric method was superior to the parametric one in the simultaneous determination of MTX and ASP after the chemometric treatment of the emission spectra of their degradation products. The work combines the advantages of derivatives and convolution using discrete Fourier functions with the reliability and efficacy of non-parametric data analysis. The achieved sensitivity, along with the low values of the LOD (0.01 and 0.06 µg/mL) and LOQ (0.04 and 0.2 µg/mL) for MTX and ASP, respectively, obtained with the second derivative under Fourier functions (D2/FF), is promising and supports the method's application to monitoring the two drugs in patients' urine samples.
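
The calibration step with Theil's method can be sketched as follows, assuming scipy's theilslopes; the concentrations and processed D2/FF responses are invented for illustration.

```python
import numpy as np
from scipy.stats import theilslopes

# Invented calibration points within the reported MTX linearity range
conc = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.75])   # MTX, ug/mL
signal = np.array([4.1, 12.2, 24.8, 36.5, 49.3, 60.2])  # convolved D2/FF amplitude

# Theil's estimator: the median of all pairwise slopes; robust to outliers,
# unlike ordinary least squares
slope, intercept, lo, hi = theilslopes(signal, conc, alpha=0.95)
print(f"calibration: signal = {slope:.1f} * conc + {intercept:.2f}")
print(f"95% confidence band for the slope: [{lo:.1f}, {hi:.1f}]")

# Predict an unknown sample's concentration from its measured signal
unknown = 30.0
print("estimated concentration:", round((unknown - intercept) / slope, 3), "ug/mL")
```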

Keywords: chemometrics, emission curves, derivative, convolution, Fourier transform, human urine, non-parametric regression, Theil’s method

Procedia PDF Downloads 430
3607 Analysis of Energy Flows as An Approach for The Formation of Monitoring System in the Sustainable Regional Development

Authors: Inese Trusina, Elita Jermolajeva

Abstract:

Global challenges require a transition from the existing linear economic model to a model that considers nature as a life support system for development on the way to social well-being, within the frame of the ecological economics paradigm. The article presents basic definitions for the development of a formalized description of sustainable development monitoring. It provides examples of calculating the monitoring parameters for the countries of the Baltic Sea region and their primary interpretation.

Keywords: sustainability, development, power, ecological economics, regional economics, monitoring

Procedia PDF Downloads 120
3606 Energy Saving Techniques for MIMO Decoders

Authors: Zhuofan Cheng, Qiongda Hu, Mohammed El-Hajjar, Basel Halak

Abstract:

Multiple-input multiple-output (MIMO) systems allow significantly higher data rates than single-antenna-aided systems and are expected to be a prominent part of the 5G communication standard. However, their decoders suffer from high power consumption. This work presents a design technique to improve the energy efficiency of MIMO systems, facilitating their use in the next generation of battery-operated communication devices such as mobile phones and tablets. The proposed optimization approach consists of the use of a low-complexity lattice reduction algorithm in combination with an adaptive VLSI implementation. The proposed design has been realized and verified in 65 nm technology. The results show that the proposed design is significantly more energy-efficient than conventional K-best MIMO decoders.
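
The abstract does not specify which low-complexity variant is used, so the sketch below shows textbook LLL lattice reduction and how a reduced basis aids zero-forcing detection; the sizes, gains, and the naive Gram-Schmidt recomputation are illustrative, not the paper's hardware algorithm.

```python
import numpy as np

def lll(H, delta=0.75):
    """Textbook LLL reduction of the columns of H; returns (B, T) with
    B = H @ T and T unimodular. The O(n^2) GSO recomputation is deliberately
    naive; practical decoders use incremental updates."""
    B = H.astype(float).copy()
    n = B.shape[1]
    T = np.eye(n)
    def gso(B):
        Q = np.zeros_like(B); mu = np.zeros((n, n))
        for i in range(n):
            Q[:, i] = B[:, i]
            for j in range(i):
                mu[i, j] = B[:, i] @ Q[:, j] / (Q[:, j] @ Q[:, j])
                Q[:, i] -= mu[i, j] * Q[:, j]
        return Q, mu
    Q, mu = gso(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):                   # size reduction
            q = round(mu[k, j])
            if q:
                B[:, k] -= q * B[:, j]; T[:, k] -= q * T[:, j]
                Q, mu = gso(B)
        if Q[:, k] @ Q[:, k] >= (delta - mu[k, k - 1] ** 2) * (Q[:, k - 1] @ Q[:, k - 1]):
            k += 1                                       # Lovász condition holds
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]; T[:, [k - 1, k]] = T[:, [k, k - 1]]
            Q, mu = gso(B)
            k = max(k - 1, 1)
    return B, T

# LR-aided zero-forcing on a real-valued 4x4 channel with integer symbols
rng = np.random.default_rng(5)
H = rng.normal(size=(4, 4))
x = rng.integers(-3, 4, size=4).astype(float)    # transmitted lattice point
y = H @ x + rng.normal(scale=0.05, size=4)       # received vector
B, T = lll(H)
z = np.rint(np.linalg.solve(B, y))               # quantize in the reduced basis
print("detected:", (T @ z).astype(int), " sent:", x.astype(int))
```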

Keywords: energy, lattice reduction, MIMO, VLSI

Procedia PDF Downloads 330
3605 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models

Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti

Abstract:

In the realm of cricket, particularly within the context of the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study on IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-nearest neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experimentation involved the analysis of historical IPL match data encompassing diverse match and player statistics. Leveraging these data, we employed state-of-the-art machine learning techniques to train and evaluate the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling techniques to provide valuable insights for cricket enthusiasts, team management, and betting agencies. Additionally, this study serves as a benchmark for future research endeavors aimed at enhancing the accuracy and interpretability of IPL score prediction models.
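
A hedged sketch of the model comparison on synthetic innings features (XGBoost is omitted to keep to scikit-learn alone); the feature names, data, and the +/- 10-run scoring rule mirror the spirit, not the substance, of the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

# Invented stand-ins for per-innings features; real IPL data would replace these
rng = np.random.default_rng(6)
n = 500
X = np.column_stack([
    rng.integers(40, 110, n),        # runs after 10 overs
    rng.integers(0, 6, n),           # wickets down
    rng.uniform(6, 11, n),           # current run rate
    rng.integers(150, 190, n),       # venue average first-innings score
])
final = 0.9 * X[:, 0] + 9 * X[:, 2] - 6 * X[:, 1] + 0.2 * X[:, 3] + rng.normal(0, 8, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, final, test_size=0.25, random_state=0)
models = {
    "Multiple/Linear Regression": LinearRegression(),
    "KNN": KNeighborsRegressor(n_neighbors=7),
    "SVM": SVR(C=100),
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    hit = np.mean(np.abs(pred - y_te) <= 10)     # "accuracy" within +/- 10 runs
    print(f"{name:28s} within-10-runs rate: {hit:.2%}")
```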

Keywords: indian premier league (IPL), cricket, score prediction, machine learning, support vector machines (SVM), xgboost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics

Procedia PDF Downloads 53
3604 Evaluating Performance of an Anomaly Detection Module with Artificial Neural Network Implementation

Authors: Edward Guillén, Jhordany Rodriguez, Rafael Páez

Abstract:

Anomaly detection techniques focus on two main components: the extraction and selection of data, and the analysis performed on the obtained data. The goal of this paper is to analyze the influence that each of these components has on system performance by evaluating detection over network scenarios with different setups. The independent variables are as follows: the number of system inputs, the way the inputs are codified, and the complexity of the analysis techniques. For the analysis, several artificial neural network approaches are implemented with different numbers of layers. The obtained results show the influence that each of these variables has on system performance.
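
The following sketch varies the complexity of the analysis stage by training MLP classifiers with one, two, and three hidden layers on synthetic codified inputs; the data, layer sizes, and features are illustrative assumptions, not the paper's scenarios.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for codified connection records: 8 features, binary label
rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.3, 2000) > 1).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Vary analysis complexity: one, two, and three hidden layers
for layers in [(16,), (16, 8), (16, 8, 4)]:
    clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=2000, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"{len(layers)} hidden layer(s): detection accuracy "
          f"{clf.score(X_te, y_te):.3f}")
```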

Keywords: network intrusion detection, machine learning, artificial neural network, anomaly detection module

Procedia PDF Downloads 343
3603 Applying Systems Thinking and a System of Systems Approach to Facilitate Sustainable Grid Integration of Variable Renewable Energy

Authors: Edward B. Ssekulima, Amir Etemadi

Abstract:

This paper presents a Systems Thinking (ST) and System of Systems (SoS) viewpoint for managing requirements complexity in the grid integration of Variable Renewable Energy (VRE). To achieve an SoS approach, it is often necessary to inculcate an ST perspective in the planning and design of the attendant system. We show how this approach can support the enhanced integration of VRE sources (wind, solar, small hydro), for which intermittency is a key factor inhibiting sustainable grid integration. The results indicate that ST and SoS approaches are a critical tool for decision-makers in the planning, design, and deployment of VRE sources for sustainable grid integration in accordance with the relevant techno-economic, social, and environmental requirements.

Keywords: sustainable grid-integration, system of systems, systems thinking, variable energy resources

Procedia PDF Downloads 126
3602 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification

Authors: Hung-Sheng Lin, Cheng-Hsuan Li

Abstract:

Over the past few years, kernel-based algorithms have been widely used to extend linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions: kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function, and they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and performs well in hyperspectral image classification. The DNP structure is an extension of the k-nearest-neighbor technique: for each sample, there are two corresponding nearest proportions of samples, the self-class nearest proportion and the other-class nearest proportion. The term 'nearest proportion' used here considers both local information and more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small-sample situations; hence, an improved estimator based on shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to retaining the advantages of DNP, KDNP surpasses DNP in the experimental results. According to the experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, and NWFE, and of their kernel versions, KPCA, GDA, and KNWFE.
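
As an illustration of the kernelization route the paper applies to DNP, the sketch below implements plain kernel PCA from scratch (RBF kernel, double-centering, leading eigenvectors); it is a stand-in for KDNP, whose exact formulation is not given in the abstract, and the two-ring data are invented.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Minimal kernel PCA with an RBF kernel, the same kernelization route
    applied to linear methods like PCA/LDA. Returns projected training data."""
    # Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    # Double-center K in feature space
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Leading eigenvectors give the nonlinear components
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(vals[idx])   # unit norm in feature space
    return Kc @ alphas

# Two concentric rings: not linearly separable, a classic kernel-method test
rng = np.random.default_rng(8)
t = rng.uniform(0, 2 * np.pi, 200)
r = np.r_[np.full(100, 1.0), np.full(100, 3.0)] + rng.normal(0, 0.05, 200)
X = np.column_stack([r * np.cos(t), r * np.sin(t)])
Z = kernel_pca(X, n_components=2, gamma=0.5)
print("mean gap on component 1 (inner vs outer ring):",
      (Z[:100, 0].mean() - Z[100:, 0].mean()).round(2))
```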

Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest proportion feature extraction

Procedia PDF Downloads 344
3601 Analysis of Various Copy Move Image Forgery Techniques for Better Detection Accuracy

Authors: Grishma D. Solanki, Karshan Kandoriya

Abstract:

In the modern information age, digitalization has brought revolutionary change. Powerful computers, advanced photo-editing software packages, and high-resolution capturing devices have made the manipulation of digital images incredibly easy. As far as image forensics is concerned, one of the most actively researched areas is the detection of copy-move forgeries, and high computational complexity is a major shortcoming of existing detection techniques. Copy-move forgery is usually performed in three steps: first, a region of an image is copied; then it is pasted into the same image; and finally, some post-processing such as rotation, scaling, shifting, or noise addition is applied. Consequently, pseudo-Zernike moments are used as a feature extraction method for matching image blocks and are a primary factor on which the performance of detection algorithms depends.
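
A hedged sketch of the generic block-matching pipeline for copy-move detection; it substitutes a crude zero-mean block vector for the pseudo-Zernike moments the paper names, and the forged image is invented.

```python
import numpy as np

def detect_copy_move(img: np.ndarray, block=8, thresh=1e-6):
    """Hedged sketch of block-based copy-move detection.

    Real pipelines extract pseudo-Zernike moments per block; here each block
    is summarized by its zero-mean pixel vector (an intensity-offset-invariant
    stand-in). Blocks are sorted lexicographically so duplicates become
    neighbours, then matching pairs are reported with their shift vector.
    """
    h, w = img.shape
    feats, pos = [], []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            b = img[y:y + block, x:x + block].astype(float).ravel()
            feats.append(b - b.mean())
            pos.append((y, x))
    order = np.lexsort(np.array(feats).T[::-1])   # lexicographic block sort
    pairs = []
    for i, j in zip(order, order[1:]):            # duplicates end up adjacent
        if np.sum((np.asarray(feats[i]) - feats[j]) ** 2) < thresh:
            dy, dx = np.array(pos[i]) - pos[j]
            if abs(dy) + abs(dx) > block:         # ignore trivially close blocks
                pairs.append((pos[i], pos[j], (dy, dx)))
    return pairs

# Forge a tiny image: copy an 8x8 patch from (2, 2) to (20, 24)
rng = np.random.default_rng(9)
img = rng.integers(0, 256, (32, 40)).astype(float)
img[20:28, 24:32] = img[2:10, 2:10]
for src, dst, shift in detect_copy_move(img)[:3]:
    print("match:", src, "<->", dst, "shift:", shift)
```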

Keywords: copy-move image forgery, digital forensics, image forensics, image forgery

Procedia PDF Downloads 288