Search results for: Systems Approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8475

285 Information Security Risk Management in IT-Based Process Virtualization: A Methodological Design Based on Action Research

Authors: Jefferson Camacho Mejía, Jenny Paola Forero Pachón, Luis Carlos Gómez Flórez

Abstract:

Action research is a qualitative research methodology that leads the researcher to delve into the problems of a community in order to understand its needs in depth and, finally, to propose actions that lead to a change of social paradigm. Although this methodology had its beginnings in the human sciences, it has attracted increasing interest and acceptance in the field of information systems research since the 1990s. The countless possibilities offered nowadays by the use of Information Technologies (IT) in the development of different socio-economic activities have meant a change of social paradigm and the emergence of the so-called information and knowledge society. Accordingly, governments, large corporations, small entrepreneurs and, in general, organizations of all kinds are using IT to virtualize their processes, taking them from the physical environment to the digital environment. However, there is a potential risk for organizations related to exposing valuable information without an appropriate framework for protecting it. This paper shows progress in the development of a methodological design to manage the information security risks associated with IT-based process virtualization by applying the principles of the action research methodology; it is the result of a systematic review of the scientific literature. The design consists of seven fundamental stages, distributed across the three stages described in the action research methodology: 1) observe, 2) analyze and 3) take actions. Finally, this paper aims to offer an alternative to traditional information security management methodologies, intended to be applied specifically in the planning stage of IT-based process virtualization in order to foresee risks and to establish security controls before formulating IT solutions in any type of organization.

Keywords: Action research, information security, information technology, methodological design, process virtualization, risk management.

284 Military Court’s Jurisdiction over Military Members Who Commit General Crimes under Indonesian Military Judiciary System in Comparison with Other Countries

Authors: Dini Dewi Heniarti

Abstract:

This study examines how the Indonesian military court asserts its jurisdiction over military members who commit general crimes within the Indonesian military judiciary system, in comparison with other countries. This research employs a normative-juridical approach in combination with historical and comparative-juridical approaches. The research specification is analytical-descriptive in nature, i.e. describing or outlining the principles, basic concepts, and norms related to the military judiciary system, which are further analyzed within the context of implementation and as inputs for military justice regulation under the Indonesian legal system. The main data used in this research are secondary data, including primary, secondary and tertiary legal sources. The research focuses on secondary data, while primary data are supplementary in nature. The validity of the data is checked using multiple methods, commonly known as triangulation, reflecting the effort to gain an in-depth understanding of the phenomena being studied. Here, the military element is kept intact in the judiciary process with due observance of the Military Criminal Justice System and the Military Command Development Principle. The Indonesian military judiciary jurisdiction over military members committing general crimes is based on the national legal system and global developments, while taking into account the structure, composition and position of military forces within the state structure. Jurisdiction is formulated by setting forth the substantive norm of crimes that are military in nature. At the level of adjudication jurisdiction, the military court has jurisdiction to adjudicate military personnel who commit general offences. At the level of execution jurisdiction, the military court has jurisdiction to execute the sentence against military members who have been convicted with a final and binding judgement. The military court's jurisdiction needs to be expanded when the country is in a state of war.

Keywords: Military courts, Jurisdiction, Military members, Military justice system.

283 Liability Aspects Related to Genetically Modified Food under the Food Safety Legislation in India

Authors: S. K. Balashanmugam, Padmavati Manchikanti, S. R. Subramanian

Abstract:

The question of legal liability for injury arising out of the import and introduction of GM food emerges as a crucial issue confronting efforts to promote GM food and its derivatives. There is a strong possibility that commercialized GM food from an exporting country will enter an importing country where the approval status is not the same. This underlines the importance of establishing a liability mechanism to address any damage that occurs during transboundary movement or at the market. At the international level, there was widespread consensus to develop the Cartagena Protocol on Biosafety and to provide a dedicated regime on liability and redress in the form of the Nagoya-Kuala Lumpur Supplementary Protocol on Liability and Redress ('N-KL Protocol'). However, national legal frameworks based on this protocol are not adequately established in the prevailing food legislation of developing countries. A developing economy like India has been willing to import GM food and its derivatives since the successful commercialization of Bt cotton in 2002. As a party to the N-KL Protocol, it is indispensable for India to formulate a legal framework and to address safety, liability, and regulatory issues surrounding GM foods in conformity with the provisions of the Protocol. A liability mechanism is also important where risk assessment and risk management are still at the implementation stage. Moreover, the country is facing GM infiltration issues with its neighbor Bangladesh. As a precautionary approach, there is a need to formulate rules and procedures of legal liability to address any kind of damage that occurs in transboundary trade. In this context, the proposed work attempts to analyze the liability regime in the existing Food Safety and Standards Act, 2006 in terms of applicability and domestic compliance, and to suggest legal and policy options for regulatory authorities.

Keywords: Commercialisation, food safety, FSSAI, genetically modified foods, India, liability.

282 Appropriate Technology: Revisiting the Movement in Developing Countries for Sustainability

Authors: Jayshree Patnaik, Bhaskar Bhowmick

Abstract:

The economic growth of any nation is steered by, and dependent on, innovation in technology. It can reasonably be argued that technology has enhanced the quality of life. Technology is linked to both economic and social structures. But there are some parts of the world, or communities, which are yet to reap the benefits of technological innovation. Businesses and organizations are now well equipped with cutting-edge innovations that improve firm performance and provide them with a competitive edge, but these rarely have a positive impact on communities which are weak and marginalized. In recent times, it has been observed that communities are actively handling social or ecological issues with the help of indigenous technologies. Thus, "Appropriate Technology" comes into the discussion, and it is quite prevalent in the rural third world. Appropriate technology grew as a movement in the mid-1970s during the energy crisis, but it lost its standing in the following years when people started to describe it as an inferior or dead technology. In principle, no technology is inherently inferior to, or too sophisticated for, a particular region. The relevance of appropriate technology lies in bringing technology to the larger and weaker sections of the community, where the "bottom of the pyramid" can pay for technology if they find the price affordable. This is a theoretical paper which primarily examines how appropriate technology has faded and evolved again in both developed and developing countries. The paper focuses on the various concepts, the history and the challenges faced by appropriate technology over the years. Appropriate technology follows a documented approach but lags in overall design and diffusion. How to diffuse technology into the poorer sections of the community remains an open question to the present time. Appropriate technology is multi-disciplinary in nature; this openness therefore allows varied working models for different problems. Appropriate technology is a friendly technology that seeks to improve the lives of people in a constrained environment by providing affordable and sustainable solutions. Appropriate technology needs to be defined in the era of modern technological advancement for sustainability.

Keywords: Appropriate technology, community, developing country, sustainability.

281 Carbamazepine Co-crystal Screening with Dicarboxylic Acids Co-Crystal Formers

Authors: Syarifah Abd Rahim, Fatinah Ab Rahman, Engku N. E. M. Nasir, Noor A. Ramle

Abstract:

Co-crystals are believed to improve the solubility and dissolution rates, and thus enhance the bioavailability, of poorly water-soluble drugs, particularly for the oral route of administration. Given the prevalence of poorly soluble drugs in the pharmaceutical industry, the screening of co-crystal formation using carbamazepine (CBZ) as a model drug compound with the dicarboxylic acid co-crystal formers (CCF) fumaric acid (FA) and succinic acid (SA) in ethanol has been studied. Co-crystal formation was studied by varying the mol ratio of CCF to CBZ to assess the effect of CCF concentration on the formation of the co-crystal. Solvent evaporation, slurry and cooling crystallization, which represent solution-based co-crystal screening methods, were used. Based on the differential scanning calorimetry (DSC) analysis, the melting point of CBZ-SA at the different ratios was in the range of 188 °C-189 °C. For CBZ-FA form A and CBZ-FA form B, the melting points at the different ratios were in the ranges of 174 °C-175 °C and 185 °C-186 °C, respectively. The product crystals from the screening were also characterized using X-ray powder diffraction (XRPD). The XRPD pattern profile analysis showed that CBZ co-crystals with FA and SA were successfully formed for all ratios studied. The findings revealed that CBZ-FA co-crystals were formed as two different polymorphs. It was found that CBZ-FA form A and form B were formed from the evaporation and slurry crystallization methods, respectively. In the cooling crystallization method, on the other hand, CBZ-FA form A was formed at lower mol ratios of CCF to CBZ and vice versa. This study shows that different methods and mol ratios during co-crystal screening can affect the outcome, such as the polymorphic form of the co-crystal produced. Thus, careful attention is needed during screening, since co-crystal formation is currently one of the promising approaches considered in pharmaceutical research and development to improve poorly soluble drugs.

Keywords: Carbamazepine, co-crystal, co-crystal former, dicarboxylic acid.

280 The Genesis of the Anomalous Sernio Fan, Valtellina, Northern Italy

Authors: E. De Finis, P. Gattinoni, L. Scesi

Abstract:

Massive rock avalanches have formed some of the largest landslide deposits on Earth, and they represent one of the major geohazards in high-relief mountains. This paper interprets a very large sedimentary fan (the Sernio fan, Valtellina, Northern Italy), located 20 km SW of the Val Pola rock avalanche (1987), as the deposit of a partial collapse of a Deep Seated Gravitational Slope Deformation (DSGSD), afterwards eroded and buried by debris flows. The proposed emplacement sequence has been reconstructed based on geomorphological, structural and mechanical evidence. The Sernio fan is considered anomalous with reference to the very high ratio between the fan area (≈ 4.5 km²) and the basin area (≈ 3 km²). The morphology of the fan area is characterised by steep slopes (dip ≈ 20%), and the fan apex extends for 1.8 km inside the small catchment basin. This sedimentary fan originated from a landslide that involved part of a large deep-seated gravitational slope deformation covering a wide area of about 55 km². The main controlling factor is tectonic and is related to the proximity of regional fault systems and the consequent occurrence of weak fault rocks (GSI locally lower than 10, with compressive strength lower than 20 MPa). Moreover, the fan deposit shows sedimentary evidence of recent debris flow events. The best current explanation of the Sernio fan involves an initial failure of some hundreds of Mm³. The run-out was quite limited because of the morphology of Valtellina's valley floor, and the deposit filled the main valley forming a landslide dam, as confirmed by the lacustrine deposits detected upstream of the fan. Nowadays, debris flow events represent the main hazard in the study area.

Keywords: Anomalous sedimentary fans, debris flow, deep seated gravitational slope deformation, Italy, rock avalanche.

279 Effect of Feeding Systems on Meat Goat CLA

Authors: P. Paengkoum, A. Lukkananukool, S. Bureenok, Y. Kawamoto, Y. Imura, J. Mitchaothai, S. Paengkoum, S. Traiyakun

Abstract:

The objective of this study was to investigate the effect of tropical forage source and feeding system on fatty acid composition and antioxidant activity in meat goats. Twenty male crossbred goats (Boer x Saanen) were included in the current study, and the study design was a 2 x 3 factorial arrangement in a completely randomized design. All goats were slaughtered after the 120-day experimental period. The dietary tropical roughage sources were a grass (Mulato II) and a legume (Verano stylo). Both types of roughage were offered to the experimental meat goats under three feeding regimes: cut-and-carry, silage and grazing. All goats were fed a basal concentrate diet at 1.5% of body weight, and they were fed the roughages ad libitum. The chemical composition, fatty acid profile and antioxidant activity of the dietary treatments in all feeding systems and of the longissimus dorsi (LD) muscles in all groups were quantified. The results showed that the fat content in both roughage sources ranged from about 2.0% to 4.0% of DM, and their fatty acid composition was mainly C16:0, C18:2n6 and C18:3n3, with a smaller proportion of C18:1n9. The free-radical scavenging activity of the Mulato II was lower than that of the Verano stylo. For the LD muscle, the fatty acid composition was mainly C16:0, C18:0 and C18:1n9, with a smaller proportion of C18:2n6. The LD muscle of the goats fed the Mulato II and Verano stylo by grazing had the highest free-radical scavenging activity compared to those fed under the cut-and-carry and silage regimes, although the unsaturated fatty acid content of the LD muscle was rather high. Thus, feeding meat goats the Mulato II and Verano stylo by grazing would benefit consumers through a higher intake of unsaturated fatty acids and a lower risk of oxidation from goat meat.

Keywords: Feeding system, goat, CLA, meat.

278 Fuzzy Power Controller Design for Purdue University Research Reactor-1

Authors: Oktavian Muhammad Rizki, Appiah Rita, Lastres Oscar, Miller True, Chapman Alec, Tsoukalas Lefteri H.

Abstract:

The Purdue University Research Reactor-1 (PUR-1) is a 10 kWth pool-type research reactor located at Purdue University’s West Lafayette campus. The reactor was recently upgraded to use entirely digital instrumentation and control systems. However, currently, there is no automated control system to regulate the power in the reactor. We propose a fuzzy logic controller as a form of digital twin to complement the existing digital instrumentation system to monitor and stabilize power control using existing experimental data. This work assesses the feasibility of a power controller based on a Fuzzy Rule-Based System (FRBS) by modelling and simulation with a MATLAB algorithm. The controller uses power error and reactor period as inputs and generates reactivity insertion as output. The reactivity insertion is then converted to control rod height using a logistic function based on information from the recorded experimental reactor control rod data. To test the capability of the proposed fuzzy controller, a point-kinetic reactor model is utilized based on the actual PUR-1 operation conditions and a Monte Carlo N-Particle simulation result of the core to numerically compute the neutronics parameters of reactor behavior. The Point Kinetic Equation (PKE) was employed to model dynamic characteristics of the research reactor since it explains the interactions between the spatial and time varying input and output variables efficiently. The controller is demonstrated computationally using various cases: startup, power maneuver, and shutdown. From the test results, it can be proved that the implemented fuzzy controller can satisfactorily regulate the reactor power to follow demand power without compromising nuclear safety measures.
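
As an illustration of the kind of rule-based mapping described above, here is a minimal Python sketch of a fuzzy controller with triangular membership functions, a small rule base, weighted-average defuzzification, and a logistic conversion from requested reactivity to rod height. All membership breakpoints, rules, and logistic constants are illustrative assumptions, not the PUR-1 values or the paper's FRBS.

    import math

    def tri(x, a, b, c):
        """Triangular membership function peaking at b with support [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_reactivity(power_error_kw, period_s):
        """Map power error (kW) and reactor period (s) to a reactivity request (pcm).
        Breakpoints and rule consequents are hypothetical."""
        neg_err = tri(power_error_kw, -2.0, -1.0, 0.0)   # power above demand
        zero_err = tri(power_error_kw, -0.5, 0.0, 0.5)
        pos_err = tri(power_error_kw, 0.0, 1.0, 2.0)     # power below demand
        short_period = tri(period_s, 0.0, 10.0, 30.0)    # fast transient
        # Each rule contributes firing strength x crisp consequent (pcm);
        # the output is the weighted average (zero-order Sugeno-style defuzzification).
        rules = [
            (pos_err * (1.0 - short_period), +15.0),     # raise power cautiously
            (zero_err, 0.0),                             # hold
            (max(neg_err, short_period), -15.0),         # reduce reactivity
        ]
        num = sum(w * c for w, c in rules)
        den = sum(w for w, _ in rules) or 1.0
        return num / den

    def rod_height_cm(reactivity_pcm, full_out_cm=60.0, k=0.1):
        """Logistic mapping from requested reactivity to rod height (hypothetical constants)."""
        return full_out_cm / (1.0 + math.exp(-k * reactivity_pcm))

    rho = fuzzy_reactivity(power_error_kw=0.8, period_s=45.0)
    print(f"reactivity request: {rho:.1f} pcm -> rod height {rod_height_cm(rho):.1f} cm")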

Keywords: Fuzzy logic controller, power controller, reactivity, research reactor.

277 A Continuous Real-Time Analytic for Predicting Instability in Acute Care Rapid Response Team Activations

Authors: Ashwin Belle, Bryce Benson, Mark Salamango, Fadi Islim, Rodney Daniels, Kevin Ward

Abstract:

A reliable, real-time, and non-invasive system that can identify patients at risk for hemodynamic instability is needed to aid clinicians in their efforts to anticipate patient deterioration and initiate early interventions. The purpose of this pilot study was to explore the clinical capabilities of a real-time analytic from a single lead of an electrocardiograph to correctly distinguish between rapid response team (RRT) activations due to hemodynamic (H-RRT) and non-hemodynamic (NH-RRT) causes, as well as predict H-RRT cases with actionable lead times. The study consisted of a single center, retrospective cohort of 21 patients with RRT activations from step-down and telemetry units. Through electronic health record review and blinded to the analytic’s output, each patient was categorized by clinicians into H-RRT and NH-RRT cases. The analytic output and the categorization were compared. The prediction lead time prior to the RRT call was calculated. The analytic correctly distinguished between H-RRT and NH-RRT cases with 100% accuracy, demonstrating 100% positive and negative predictive values, and 100% sensitivity and specificity. In H-RRT cases, the analytic detected hemodynamic deterioration with a median lead time of 9.5 hours prior to the RRT call (range 14 minutes to 52 hours). The study demonstrates that an electrocardiogram (ECG) based analytic has the potential for providing clinical decision and monitoring support for caregivers to identify at risk patients within a clinically relevant timeframe allowing for increased vigilance and early interventional support to reduce the chances of continued patient deterioration.
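
For readers who want to relate the reported figures to a confusion matrix, the short Python sketch below shows how sensitivity, specificity, PPV and NPV are computed; the counts are a hypothetical split of the 21-patient cohort, not data from the study (with zero misclassifications all four metrics equal 100%, as reported).

    def diagnostic_metrics(tp, fp, tn, fn):
        """Standard confusion-matrix metrics for a binary classifier."""
        sensitivity = tp / (tp + fn)   # true positive rate
        specificity = tn / (tn + fp)   # true negative rate
        ppv = tp / (tp + fp)           # positive predictive value
        npv = tn / (tn + fn)           # negative predictive value
        return sensitivity, specificity, ppv, npv

    # Hypothetical split of the 21 RRT activations into H-RRT (positives) and NH-RRT (negatives)
    print(diagnostic_metrics(tp=10, fp=0, tn=11, fn=0))   # -> (1.0, 1.0, 1.0, 1.0)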

Keywords: Critical care, early warning systems, emergency medicine, heart rate variability, hemodynamic instability, rapid response team.

276 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range

Authors: A. Mínguez-Martínez, J. de Vicente

Abstract:

Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts at the micro and nanoscale are produced. This trend seems set to become increasingly important in the near future. Besides, as a requirement of Industry 4.0, the digitalization of production models and processes makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. This makes it possible to reduce scrap and the cost of non-conformities while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales below one millimeter. Providing adequate traceability to the SI unit of length (the meter) for 2D and 3D measurements at this scale is a problem that does not have a unique solution in industrial environments. Researchers in the field of dimensional metrology all around the world are working on this issue. A solution for industrial environments, even if it is not complete, will enable working with some traceability. At this point, we believe that the study of surfaces could provide a first approximation to a solution. In this paper, we propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in metrology and quality laboratories in industrial environments. Confocal microscopes are measuring instruments capable of filtering out-of-focus reflected light so that, when the light reaches the detector, it is possible to take pictures of the part of the surface that is in focus. By taking pictures at different Z levels of focus, specialized software interpolates between the different planes and reconstructs the surface geometry as a 3D model. As is easy to deduce, it is necessary to provide traceability for each axis. As a complementary result, the roughness parameter Ra will be traced to the reference. Although the solution is designed for a confocal microscope, it may be used for the calibration of other optical measuring instruments by applying minor changes.
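
Since the roughness parameter Ra is to be traced to the reference, it may help to recall its standard definition, the arithmetic mean of the absolute deviations z(x) of the profile from the mean line over the evaluation length l (this is the conventional formula, not something specific to the proposed procedure):

    \[ R_a = \frac{1}{l}\int_{0}^{l} \lvert z(x) \rvert \, dx \;\approx\; \frac{1}{n}\sum_{i=1}^{n} \lvert z_i \rvert \]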

Keywords: Industrial environment, confocal microscope, optical measuring instrument, traceability.

275 Comparing Occupants’ Satisfaction in LEED Certified Office Buildings and Non LEED Certified Office Buildings - A Case Study of Office Buildings in Egypt and Turkey

Authors: Amgad A. Farghal, Dina I. El Desouki

Abstract:

Energy consumption and users' satisfaction were compared in three LEED certified office buildings in Turkey and an office building in Egypt. The field studies were conducted in summer 2012. The environmental parameters measured in the four buildings were indoor air temperature, relative humidity, CO2 percentage and light intensity. The traditional building is located in Smart Village in Abu Rawash, Cairo, Egypt. The building was studied for 7 days, resulting in 84 responses. The three rated buildings are in Istanbul, Turkey. A Platinum LEED certified office building is owned by BASF and gained a platinum certificate for new construction and major renovation. The building was studied for 3 days, resulting in 13 responses. A Gold LEED certified office building is owned by BASF and gained a gold certificate for new construction and major renovation. The building was studied for 2 days, resulting in 10 responses. A Silver LEED certified office building is owned by Unilever and gained a silver certificate for commercial interiors. The building was studied for 7 days, resulting in 84 responses. The results showed that the buildings had no significant difference regarding occupants' satisfaction with the amount of lighting, noise level, odor and access to the outdoor view. There was a significant difference between occupants' satisfaction in the LEED certified buildings and the traditional building regarding the thermal environment and the perception of the general environment (colors, carpet and decoration). The findings suggest that careful design could lead to a certified building that enhances the thermal environment and the perception of the indoor environment, leading to lower energy consumption without sacrificing occupants' satisfaction.

Keywords: Energy consumption, occupants’ satisfaction, rating systems.

274 Designing for Inclusion within the Learning Management System: Social Justice, Identities, and Online Design for Digital Spaces in Higher Education

Authors: Christina Van Wingerden

Abstract:

The aim of this paper is to propose pedagogical design for learning management systems (LMS) that offers greater inclusion for students based on a number of theoretical perspectives and delineated through an example. Considering the impact of COVID-19, including on student mental health, the research suggesting the importance of student sense of belonging on retention, success, and student well-being, the author describes intentional LMS design incorporating theoretically based practices informed by critical theory, feminist theory, indigenous theory and practices, and new materiality. This article considers important aspects of these theories and practices which attend to inclusion, identities, and socially just learning environments. Additionally, increasing student sense of belonging and mental health through LMS design influenced by adult learning theory and the community of inquiry model are described.  The process of thinking through LMS pedagogical design with inclusion intentionally in mind affords the opportunity to allow LMS to go beyond course use as a repository of documents, to an intentional community of practice that facilitates belonging and connection, something much needed in our times. In virtual learning environments it has been harder to discern how students are doing, especially in feeling connected to their courses, their faculty, and their student peers. Increasingly at the forefront of public universities is addressing the needs of students with multiple and intersecting identities and the multiplicity of needs and accommodations. Education in 2020, and moving forward, calls for embedding critical theories and inclusive ideals and pedagogies to the ways instructors design and teach in online platforms. Through utilization of critical theoretical frameworks and instructional practices, students may experience the LMS as a welcoming place with intentional plans for welcoming diversity in identities.

Keywords: Belonging, critical pedagogy, instructional design, Learning Management System, LMS.

273 A Decade of Creating an Alternative Banking System in Tanzania: The Current State of Affairs of Islamic Banks

Authors: Pradeep Kulshrestha, Maulana Ayoub Ali

Abstract:

The concept of financial inclusion has been taken up around the world, where practitioners, academicians, policy makers and economists are working hard to identify the best possible opportunities to enable the whole of society to be part of the banking cycle. The Islamic banking system is considered to be one of those opportunities. Countries like the United Kingdom, the United States of America, Malaysia, Saudi Arabia, the whole of the United Arab Emirates and many African countries have accommodated Islamic banking within the conventional banking system as one of their financial inclusion strategies. This paper analyses the current state of affairs of the Islamic banking system in Tanzania in order to understand the development of the provision of Islamic banking products and services in that country. The paper discusses the historical background of the banking system in Tanzania, the level of penetration of banking products and services, and the arrival of the Islamic banking system in the country. Furthermore, the paper discusses the banking regulatory bodies and the legal instruments governing banking operations, as well as a number of legal challenges facing Islamic banking operations in the country. Following a critical literature review, the paper found that there is no legal instrument which addresses the introduction and provision of the Islamic banking system in Tanzania. Furthermore, Islamic banking has been treated as a banking product, which is incorrect, because Islamic banking is a banking system in its own right. In addition, it was found that the lack of a proper regulatory system and of legal instruments to harmonize the conventional and Islamic banking systems has resulted in the closure of one Islamic window in the country, which in the end affects the credibility of the newly introduced banking system. In its concluding remarks, the paper suggests that Tanzania should address all the legal challenges affecting the smooth operation of the Islamic banking system. This can be done by adopting the Islamic banking legal models used in countries like Malaysia and others, or by borrowing the legal harmonization process adopted by the UK, Uganda, Nigeria and Kenya.

Keywords: Islamic banking, Islamic Windows, regulations, banks.

272 A Face-to-Face Education Support System Capable of Lecture Adaptation and Q&A Assistance Based On Probabilistic Inference

Authors: Yoshitaka Fujiwara, Jun-ichirou Fukushima, Yasunari Maeda

Abstract:

Keys to high-quality face-to-face education are ensuring flexibility in the way lectures are given, and providing care and responsiveness to learners. This paper describes a face-to-face education support system that is designed to raise the satisfaction of learners and reduce the workload on instructors. This system consists of a lecture adaptation assistance part, which assists instructors in adapting teaching content and strategy, and a Q&A assistance part, which provides learners with answers to their questions. The core component of the former part is a "learning achievement map", which is composed of a Bayesian network (BN). From learners' performance in exercises on relevant past lectures, the lecture adaptation assistance part obtains the information required to adapt appropriately the presentation of the next lecture. The core component of the Q&A assistance part is a case base, which accumulates cases consisting of questions expected from learners and answers to them. The Q&A assistance part is a case-based search system equipped with a search index which performs probabilistic inference. A prototype face-to-face education support system has been built, intended for the teaching of Java programming, and the approach was evaluated using this system. The expected degree of understanding of each learner for a future lecture was derived from his or her performance in exercises on past lectures, and this expected degree of understanding was used to select one of three adaptation levels. A model for determining the adaptation level most suitable for the individual learner has been identified. An experimental case base was built to examine the search performance of the Q&A assistance part, and it was found that the rate of successfully finding an appropriate case was 56%.
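
As a toy illustration of the probabilistic inference behind a "learning achievement map", the Python sketch below updates the belief that a learner has understood a prerequisite topic from a single exercise result using Bayes' rule, and picks one of three adaptation levels from that belief. The conditional probabilities and thresholds are hypothetical placeholders; the actual map is a larger Bayesian network over many topics and exercises.

    def posterior_understanding(p_prior, p_correct_given_u, p_correct_given_not_u, exercise_correct):
        """Bayes' rule for one binary topic node and one binary exercise node.
        All probabilities here are hypothetical placeholders."""
        if exercise_correct:
            num = p_correct_given_u * p_prior
            den = num + p_correct_given_not_u * (1.0 - p_prior)
        else:
            num = (1.0 - p_correct_given_u) * p_prior
            den = num + (1.0 - p_correct_given_not_u) * (1.0 - p_prior)
        return num / den

    # A learner answers a past-lecture exercise correctly: belief in understanding rises,
    # and an adaptation level can be chosen from that belief (thresholds assumed).
    p = posterior_understanding(p_prior=0.5, p_correct_given_u=0.9,
                                p_correct_given_not_u=0.3, exercise_correct=True)
    level = 1 if p > 0.8 else 2 if p > 0.4 else 3
    print(f"P(understood) = {p:.2f}, adaptation level {level}")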

Keywords: Bayesian network, face-to-face education, lecture adaptation, Q&A assistance.

271 The Growth of E-Commerce and Online Dispute Resolution in Developing Nations: An Analysis

Authors: Robin V. Cupido

Abstract:

Online dispute resolution has been identified in many countries as a viable alternative for resolving conflicts which have arisen in the so-called digital age. This system of dispute resolution is developing alongside the Internet, and as new types of transactions are made possible by our increased connectivity, new ways of resolving disputes must be explored. Developed nations, such as the United States of America and the European Union, have been involved in creating these online dispute resolution mechanisms from the outset, and currently have sophisticated systems in place to deal with conflicts arising in a number of different fields, such as e-commerce, domain name disputes, labour disputes and conflicts arising from family law. Specifically, in the field of e-commerce, the Internet’s borderless nature has served as a way to promote cross-border trade, and has created a global marketplace. Participation in this marketplace boosts a country’s economy, as new markets are now available, and consumers can transact from anywhere in the world. It would be especially advantageous for developing nations to be a part of this global marketplace, as it could stimulate much-needed investment in these nations, and encourage international co-operation and trade. However, for these types of transactions to proliferate, an effective system for resolving the inevitable disputes arising from such an increase in e-commerce is needed. Online dispute resolution scholarship and practice is flourishing in developed nations, and it is clear that the gap is widening between developed and developing nations in this regard. The potential for implementing online dispute resolution in developing countries has been discussed, but there are a number of obstacles that have thus far prevented its continued development. This paper aims to evaluate the various political, infrastructural and socio-economic challenges faced in developing nations, and to question how these have impacted the acceptance and development of online dispute resolution, scholarship and training of online dispute resolution practitioners and, ultimately, developing nations’ readiness to participate in cross-border e-commerce.

Keywords: Developing countries, feasibility, online dispute resolution, progress.

270 A Robust Visual SLAM for Indoor Dynamic Environment

Authors: Xiang Zhang, Daohong Yang, Ziyuan Wu, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) uses cameras to gather information in unknown environments to achieve simultaneous localization and mapping of the environment. This technology has a wide range of applications in autonomous driving, virtual reality, and other related fields. Current VSLAM research can maintain high accuracy in static environments. But in dynamic environments, the presence of moving objects in the scene can reduce the stability of the VSLAM system, leading to inaccurate localization and mapping, or even system failure. In this paper, a robust VSLAM method is proposed to effectively address the challenges of dynamic environments. We propose a dynamic region removal scheme based on a semantic segmentation neural network and geometric constraints. Firstly, a semantic segmentation neural network is used to extract the prior active motion region, prior static region, and prior passive motion region in the environment. Then, a lightweight frame tracking module initializes the transform pose between the previous frame and the current frame on the prior static region. A motion consistency detection module based on multi-view geometry and scene flow is used to divide the environment into static regions and dynamic regions, and the dynamic object regions are thus eliminated. Finally, only the static region is used by the tracking thread. Our research is based on the ORB-SLAM3 system, which is one of the most effective VSLAM systems available. We evaluated our method on the TUM RGB-D benchmark, and the results demonstrate that the proposed VSLAM method improves the accuracy of the original ORB-SLAM3 by 70%-98.5% in highly dynamic environments.
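
The motion consistency check described above combines multi-view geometry and scene flow; the Python/NumPy sketch below illustrates only the epipolar-geometry part, flagging a matched feature as dynamic when its distance to the epipolar line induced by a given fundamental matrix exceeds a threshold. The fundamental matrix, the point pair and the threshold are hypothetical placeholders, not values from the system.

    import numpy as np

    def epipolar_distance(F, p1, p2):
        """Distance (pixels) of point p2 (homogeneous) to the epipolar line F @ p1."""
        l = F @ p1                        # epipolar line in image 2: (a, b, c)
        return abs(p2 @ l) / np.hypot(l[0], l[1])

    def is_dynamic(F, p1, p2, thresh_px=1.0):
        """A static 3D point should satisfy the epipolar constraint up to noise;
        a large residual suggests independent motion (dynamic point)."""
        return epipolar_distance(F, p1, p2) > thresh_px

    # Hypothetical fundamental matrix and one feature correspondence (homogeneous pixels)
    F = np.array([[0.0, -1e-5, 0.01],
                  [1e-5,  0.0, -0.02],
                  [-0.01, 0.02, 1.0]])
    p1 = np.array([320.0, 240.0, 1.0])
    p2 = np.array([330.0, 260.0, 1.0])
    print("dynamic" if is_dynamic(F, p1, p2) else "static")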

Keywords: Dynamic scene, dynamic visual SLAM, semantic segmentation, scene flow, VSLAM.

269 A Post Keynesian Environmental Macroeconomic Model for Agricultural Water Sustainability under Climate Change in the Murray-Darling Basin, Australia

Authors: Ke Zhao, Ballarat Colin Richardson, Jerry Courvisanos, John Crawford

Abstract:

Climate change has profound consequences for the agriculture of south-eastern Australia and its climate-induced water shortage in the Murray-Darling Basin. Post Keynesian Economics (PKE) macro-dynamics, along with Kaleckian investment and growth theory, are used to develop an ecological-economic system dynamics model of this complex nonlinear river basin system. The Murray-Darling Basin Simulation Model (MDB-SM) uses the principles of PKE to incorporate the fundamental uncertainty of the economic behaviour of farmers regarding the investments they make and the climate change they face, particularly as regards water ecosystem services. MDB-SM provides a framework for macroeconomic policies, especially for long-term fiscal policy and for policy directed at the sustainability of agricultural water, as measured by socio-economic well-being considerations, which include sustainable consumption and investment in the river basin. The model can also reproduce other ecological and economic aspects and, for certain parameters and initial values, exhibit endogenous business cycles and ecological sustainability with realistic characteristics. Most importantly, MDB-SM provides a platform for the analysis of alternative economic policy scenarios. These results reveal the importance of understanding water ecosystem adaptation under climate change by integrating a PKE macroeconomic analytical framework with the system dynamics modelling approach. Once parameterised and supplied with historical initial values, MDB-SM should prove to be a practical tool for providing alternative long-term policy simulations of agricultural water and socio-economic well-being.
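
To give a flavour of the system dynamics approach with a Kaleckian investment function, the Python sketch below Euler-integrates a generic two-equation utilization/accumulation toy model. It is not the MDB-SM, which couples many more economic, ecological and hydrological stocks; every coefficient here is an illustrative assumption.

    # Generic neo-Kaleckian toy dynamic: desired accumulation g_i responds to capacity
    # utilization u; saving comes out of profits; utilization adjusts to excess demand.
    def simulate(steps=200, dt=0.1, u0=0.7,
                 gamma0=0.02, gamma_u=0.05,    # investment function g_i = gamma0 + gamma_u * u
                 s_pi=0.26, profit_share=0.3):  # saving rate out of profits, profit share
        u = u0
        path = []
        for _ in range(steps):
            g_i = gamma0 + gamma_u * u            # Kaleckian investment (accumulation) rate
            g_s = s_pi * profit_share * u         # Cambridge-style saving rate
            u += dt * (g_i - g_s)                 # utilization rises with excess demand
            path.append(u)
        return path

    print(f"long-run utilization ≈ {simulate()[-1]:.3f}")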

Keywords: Agricultural water, Macroeconomic dynamics, Modeling, Investment dynamics, Sustainability, Unemployment, Economics, Keynesian, Kaleckian.

268 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru

Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar

Abstract:

Nowadays, Heritage Building Information Modeling (HBIM) is considered an efficient tool to represent and manage information of Cultural Heritage (CH). The basis of this tool relies on a 3D model generally obtained from a Cloud-to-BIM procedure. There are different methods to create an HBIM model that goes from manual modeling based on the point cloud to the automatic detection of shapes and the creation of objects. The selection of these methods depends on the desired Level of Development (LOD), Level of Information (LOI), Grade of Generation (GOG) as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using Recap Pro, Revit and Dynamo interface following a three-step methodology. The first step consists of the manual modeling of simple structural (e.g., regular walls, columns, floors, wall openings, etc.) and architectural (e.g., cornices, moldings and other minor details) elements using the point cloud as reference. Then, Dynamo is used for generative modeling of complex structural elements such as vaults, infills and domes. Finally, semantic information (e.g., materials, typology, state of conservation, etc.) and pathologies are added within the HBIM model as text parameters and generic models’ families respectively. The application of this methodology allows the documentation of CH following a relatively simple to apply process that ensures adequate LOD, LOI and GOG levels. In addition, the easy implementation of the method as well as the fact of using only one BIM software with its respective plugin for the scan-to-BIM modeling process means that this methodology can be adopted by a larger number of users with intermediate knowledge and limited resources, since the BIM software used has a free student license.
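
As a minimal illustration of the generative step (written here in plain Python/NumPy rather than as a Dynamo graph, and with made-up dimensions), the sketch below generates the point grid of a semicircular barrel vault from its span, wall height and length; in the workflow above, parametric rules of this kind would drive Dynamo to build the corresponding Revit geometry.

    import numpy as np

    def barrel_vault_points(span=4.0, wall_height=3.0, length=8.0, n_arc=21, n_len=17):
        """Return an (n_len, n_arc, 3) grid of XYZ points on a semicircular barrel vault.
        Dimensions are illustrative, not the chapel's."""
        r = span / 2.0
        theta = np.linspace(0.0, np.pi, n_arc)      # 0..180 degrees across the span
        y = np.linspace(0.0, length, n_len)         # along the nave axis
        x = r * np.cos(theta)                       # horizontal offset from the centerline
        z = wall_height + r * np.sin(theta)         # vault springs at the top of the wall
        Y, X = np.meshgrid(y, x, indexing="ij")
        Z = np.broadcast_to(z, X.shape)
        return np.stack([X, Y, Z], axis=-1)

    pts = barrel_vault_points()
    print(pts.shape, "crown height:", pts[..., 2].max())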

Keywords: Cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit.

267 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach

Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar

Abstract:

Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life expectancy of five years provided diagnosis, detection and prediction are made early, which spares many patients treatment options carrying the risk of invasive surgery and increases the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) for image enhancement, which gives the best results without requiring further opinions. The lung cavities are extracted, and the background other than the two lung cavities is completely removed, with the right and left lungs segmented separately. Region property measurements (area, perimeter, diameter, centroid and eccentricity) are taken for the tumor-segmented image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; the extracted features of the Region of Interest (ROI) are given as input to the classifier. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used for determining the patient condition as normal or abnormal, while Artificial Neural Networks (ANN) are used for identifying the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technique shows encouraging results for real-time information and online detection in future research.
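
A condensed sketch of the feature-extraction and first-level classification chain is given below in Python. It assumes recent versions of the PyWavelets, scikit-image and scikit-learn packages; the wavelet, the GLCM settings and the tiny synthetic patches are placeholders rather than the values and data used in the paper, and the second-level ANN stage is omitted.

    import numpy as np
    import pywt
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.neighbors import KNeighborsClassifier

    def dwt_glcm_features(img_uint8):
        """DWT approximation sub-band followed by GLCM texture descriptors."""
        cA, (cH, cV, cD) = pywt.dwt2(img_uint8.astype(float), "haar")
        cA8 = np.uint8(255 * (cA - cA.min()) / (np.ptp(cA) + 1e-9))   # rescale for GLCM
        glcm = graycomatrix(cA8, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.array([graycoprops(glcm, p)[0, 0] for p in props])

    # Tiny synthetic "normal" (smooth) vs "abnormal" (speckled) patches, for illustration only
    rng = np.random.default_rng(0)
    normal = [np.full((64, 64), 100, np.uint8) + rng.integers(0, 5, (64, 64), dtype=np.uint8)
              for _ in range(5)]
    abnormal = [rng.integers(0, 255, (64, 64), dtype=np.uint8) for _ in range(5)]
    X = np.array([dwt_glcm_features(im) for im in normal + abnormal])
    y = np.array([0] * 5 + [1] * 5)

    knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)   # level-1: normal vs abnormal
    print("predicted class:", knn.predict([dwt_glcm_features(abnormal[0])])[0])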

Keywords: Artificial Neural Networks (ANN), Discrete Wavelet Transform (DWT), Gray-Level Co-occurrence Matrix (GLCM), K-Nearest Neighbor (KNN), Region of Interest (ROI).

266 Design and Development of Constant Stress Composite Cantilever Beam

Authors: Vinod B. Suryawanshi, Ajit D. Kelkar

Abstract:

Composite materials, due to their unique properties such as high strength-to-weight ratio, corrosion resistance, and impact resistance, have huge potential as structural materials in automotive, construction and transportation applications. However, these properties often come at higher cost owing to complex design methods, difficult manufacturing processes and raw material cost. Traditionally, tapered laminated composite structures are manufactured using the autoclave manufacturing process with the ply drop-off technique. Autoclave manufacturing, though very powerful, suffers from high capital investment and higher energy consumption. As per current trends in composite manufacturing, Out of Autoclave (OoA) processes are regarded as emerging technologies for manufacturing structural composite components for aerospace and defense applications. However, these processes need improvement to make them reliable and consistent. In this paper, the feasibility of using an out-of-autoclave process to manufacture a variable-thickness cantilever beam is discussed. The minimum-weight design for the composite beam is obtained using the constant stress beam concept by tailoring the thickness of the beam. The ply drop-off technique was used to fabricate the variable-thickness beam from glass/epoxy prepregs. Experiments were conducted to measure the bending stresses along the span of the cantilever beam at different intervals by applying a concentrated load at the free end. The experimental results showed that the stresses in the beam at the different intervals were constant. This proves the ability of the OoA process to manufacture a constant stress beam. A finite element model for the constant stress beam was developed using commercial finite element simulation software. The simulation results agreed very well with the experimental results and thus validated the design and manufacturing approach used.
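
The constant-stress profile behind the thickness tailoring can be recalled from elementary beam theory: for a cantilever of width b carrying a tip load P, with x measured from the free end, holding the bending stress in a rectangular section of height h(x) equal to the allowable stress gives a square-root thickness law. This is a simplification of the laminated ply drop-off design, shown only to indicate the governing relation:

    \[ \sigma(x) = \frac{M(x)\,c}{I(x)} = \frac{(P x)\,(h(x)/2)}{b\,h(x)^{3}/12} = \frac{6 P x}{b\,h(x)^{2}} = \sigma_{\mathrm{allow}} \quad\Longrightarrow\quad h(x) = \sqrt{\frac{6 P x}{b\,\sigma_{\mathrm{allow}}}} \]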

Keywords: Beams, Composites, Constant Stress, Structures.

265 Jatropha curcas L. Oil Selectivity in Froth Flotation

Authors: André C. Silva, Izabela L. A. Moraes, Elenice M. S. Silva, Carlos M. Silva Filho

Abstract:

In Brazil, most soils are acidic and low in the essential nutrients required for the growth and development of plants, making fertilizers essential for agriculture. As the biggest producer of soy in the world and a major producer of coffee, sugar cane and citrus fruits, Brazil is a large consumer of phosphate. Brazilian phosphate ores are predominantly from igneous rocks and show a complex mineralogy, associated with carbonatites and oxides, typically of iron, silicon and barium. The industrial concentration circuit adopted for this type of ore is a combination of magnetic separation (both low and high field) to remove the magnetic fraction and a froth flotation circuit composed of a reverse flotation of apatite (barite flotation) followed by a direct flotation circuit (rougher, cleaner and scavenger). Since the 1970s, fatty acids obtained from vegetable oils have been widely used as lower-cost collectors in apatite froth flotation. This is a very effective approach for the apatite family of minerals, as this type of collector is both selective and efficient (high recovery). This paper presents Jatropha curcas L. oil (JCO) as a renewable and sustainable source of fatty acids with high selectivity in the froth flotation of apatite. JCO is considerably rich in fatty acids such as linoleic, oleic and palmitic acids. The experimental campaign involved 216 tests using a modified Hallimond tube and two different minerals (apatite and quartz). In order to be used as a collector, the oil was saponified. The results were compared with those of the synthetic collector Flotigam 5806, produced by Clariant, which is composed mainly of soy oil. JCO showed the highest selectivity for apatite flotation with cold saponification at pH 8 and a concentration of 2.5 mg/L. In this case, the mineral recovery was around 95%.

Keywords: Froth flotation, Jatropha curcas L., microflotation, selectivity.

264 Knowledge Management Strategies within a Corporate Environment of Papers

Authors: Daniel J. Glauber

Abstract:

Knowledge transfer between personnel could benefit an organization's competitive advantage in the marketplace through a strategic approach to knowledge management. A lack of information sharing between personnel could create knowledge transfer gaps while restricting decision-making processes. Knowledge transfer between personnel can potentially improve information sharing based on an implemented knowledge management strategy. An organization's capacity to gain more knowledge is aligned with the organization's prior or existing captured knowledge. This case study attempted to understand the overall influence of a KMS within the corporate environment and of knowledge exchange between personnel. The significance of this study was to help understand how organizations can improve the return on investment (ROI) of a knowledge management strategy within a knowledge-centric organization. A qualitative descriptive case study was the research design selected for this study. Developing a knowledge management strategy acceptable at all levels of the organization requires cooperation in support of a common organizational goal, working with management and executive members to develop a protocol in which knowledge transfer becomes a standard practice in multiple tiers of the organization. The knowledge transfer process could be made measurable by focusing on specific elements of the organizational process, including personnel transitions, to help reduce the time required to understand a job. The organization studied in this research acknowledged the need for improved knowledge management activities within the organization to help organize, retain, and distribute information throughout the workforce. Data produced from the study indicate three main themes identified by the participants: information management, organizational culture, and knowledge sharing within the workforce. These themes indicate a possible connection between an organization's KMS, the organization's culture, knowledge sharing, and knowledge transfer.

Keywords: Knowledge management strategies, knowledge transfer, knowledge management, knowledge capacity.

263 Applicability of Linearized Model of Synchronous Generator for Power System Stability Analysis

Authors: J. Ritonja, B. Grcar

Abstract:

For synchronous generator simulation and analysis, and for power system stabilizer design and synthesis, a mathematical model of the synchronous generator is needed. The model has to describe the dynamics of the oscillations accurately, while at the same time being transparent enough for analysis and sufficiently simplified for control system design. To study the oscillations of the synchronous generator against the rest of the power system, a model of the synchronous machine connected to an infinite bus through a transmission line having resistance and inductance is needed. In this paper, the linearized reduced-order dynamic model of the synchronous generator connected to the infinite bus is presented and analysed in detail. This model accurately describes the dynamics of the synchronous generator only in a small vicinity of an equilibrium state. As operation digresses from the selected equilibrium point, the accuracy of this model decreases considerably. In this paper, the equations and the parameter determination for the linearized reduced-order mathematical model of the synchronous generator are explained and summarized, and represent a useful starting point for work in the areas of synchronous generator dynamic behaviour analysis and synchronous generator control system design and synthesis. The main contribution of this paper is the detailed analysis of the accuracy of the linearized reduced-order dynamic model over the entire operating range of the synchronous generator. The borders of the areas where the linearized reduced-order mathematical model provides an accurate description of the synchronous generator's dynamics are determined through systematic numerical analysis. A thorough eigenvalue analysis of the linearized models over the entire operating range is performed. In the paper, the parameters of the linearized reduced-order dynamic model of a laboratory salient-pole synchronous generator were determined and used for the analysis. The theoretical conclusions were confirmed by the agreement between experimental and simulation results.
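
As a reminder of the structure of such a model, the classical small-signal (swing-equation) form for a machine on an infinite bus is shown below; the reduced-order model analysed in the paper additionally includes the field-flux (voltage) dynamics, and the coefficients K_s, D and H depend on the chosen equilibrium point, which is exactly why accuracy degrades away from it:

    \[ \Delta\dot{\delta} = \omega_{0}\,\Delta\omega, \qquad 2H\,\Delta\dot{\omega} = \Delta T_{m} - K_{s}\,\Delta\delta - D\,\Delta\omega, \qquad \lambda_{1,2} = \frac{-D \pm \sqrt{D^{2} - 8 H K_{s}\,\omega_{0}}}{4H} \]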

Keywords: Eigenvalue analysis, mathematical model, power system stability, synchronous generator.

262 Personnel Selection Based on Step-Wise Weight Assessment Ratio Analysis and Multi-Objective Optimization on the Basis of Ratio Analysis Methods

Authors: Emre Ipekci Cetin, Ebru Tarcan Icigen

Abstract:

The personnel selection process is considered one of the most important and most difficult issues in human resources management. At the personnel selection stage, applicants are evaluated according to certain criteria and efforts are made to select the most appropriate candidate. However, this process can be complicated for the managers who carry out the staff selection. Candidates should be evaluated according to different criteria such as work experience, education, foreign language level, etc. It is crucial that a rational selection process is carried out by considering all the criteria in an integrated structure. In this study, the problem of choosing the front office manager of a 5-star accommodation enterprise operating in Antalya is addressed by using multi-criteria decision-making methods. In this context, the SWARA (Step-wise Weight Assessment Ratio Analysis) and MOORA (Multi-Objective Optimization on the basis of Ratio Analysis) methods, which have relatively few applications compared with other methods, were used together. Firstly, the SWARA method was used to calculate the weights of the criteria and subcriteria that were determined by the business. After the weights of the criteria were obtained, the MOORA method was used to rank the candidates using the ratio system and the reference point approach. Recruitment processes differ from sector to sector and from operation to operation. There are a number of criteria that must be taken into consideration by businesses in accordance with the structure of each sector. It is of utmost importance that, once these criteria have been carefully selected, all candidates are evaluated objectively within this framework when selecting suitable candidates for employment. In this study, the staff selection process was handled by using the SWARA and MOORA methods together.
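
A compact Python sketch of the two methods is given below: SWARA converts the decision makers' comparative-importance values s_j into weights, and the MOORA ratio system ranks candidates on vector-normalized, weighted scores. The criteria, s_j values and candidate scores are hypothetical, not the ones elicited from the hotel.

    import numpy as np

    def swara_weights(s):
        """SWARA: s[j] is the comparative importance of criterion j versus the one ranked
        above it (s[0] is ignored). k_j = s_j + 1, q_j = q_{j-1} / k_j, w_j = q_j / sum(q)."""
        q = [1.0]
        for sj in s[1:]:
            q.append(q[-1] / (sj + 1.0))
        q = np.array(q)
        return q / q.sum()

    def moora_ratio_system(X, w, benefit):
        """MOORA ratio system: vector-normalize each criterion, then sum the weighted
        normalized scores of benefit criteria minus those of cost criteria."""
        Xn = X / np.sqrt((X ** 2).sum(axis=0))
        sign = np.where(benefit, 1.0, -1.0)
        return (Xn * w * sign).sum(axis=1)

    # Criteria ranked by importance: experience, education, foreign language, salary expectation
    s = [0.0, 0.20, 0.30, 0.35]                   # hypothetical comparative-importance values
    w = swara_weights(s)
    X = np.array([[8.0, 7.0, 9.0, 3000.0],        # candidates (rows) x criteria (columns)
                  [6.0, 9.0, 8.0, 2500.0],
                  [9.0, 6.0, 7.0, 3500.0]])
    benefit = np.array([True, True, True, False])  # salary expectation treated as a cost
    scores = moora_ratio_system(X, w, benefit)
    print("weights:", np.round(w, 3), "ranking (best first):", np.argsort(-scores))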

Keywords: Accommodation establishments, human resource management, MOORA, multi criteria decision making, SWARA.

261 Meta Model Based EA for Complex Optimization

Authors: Maumita Bhattacharya

Abstract:

Evolutionary Algorithms are population-based, stochastic search techniques, widely used as efficient global optimizers. However, many real-life optimization problems require finding optimal solutions to complex, high-dimensional, multimodal problems involving computationally very expensive fitness function evaluations. Use of evolutionary algorithms in such problem domains is thus practically prohibitive. An attractive alternative is to build meta models, or approximations of the actual fitness functions, to be evaluated. These meta models are orders of magnitude cheaper to evaluate compared to the actual function evaluation. Many regression and interpolation tools are available to build such meta models. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context. We further present two evolutionary algorithm frameworks which involve the use of meta models for fitness function evaluation. The first framework, namely the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time by controlled use of meta-models (in this case an approximate model generated by Support Vector Machine regression) to partially replace the actual function evaluation by approximate function evaluation. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model. This does not take into account uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original DAFHEA framework, incorporates a multiple-model based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating the frameworks on several benchmark functions demonstrate their efficiency.
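
The general idea of meta-model assisted evaluation can be sketched in a few lines of Python with scikit-learn's SVR as the surrogate: offspring are pre-screened by the cheap approximate model and only the most promising ones are sent to the expensive true fitness function. This is a generic sketch, not DAFHEA itself, whose evolution control and multiple-model handling of noise are more elaborate; the benchmark, population sizes and SVR settings are arbitrary.

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)

    def expensive_fitness(x):
        """Stand-in for a costly simulation: the Sphere benchmark (minimization)."""
        return float(np.sum(x ** 2))

    dim, pop_size, generations, true_evals_per_gen = 5, 20, 30, 4
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([expensive_fitness(x) for x in pop])
    archive_X, archive_y = list(pop), list(fit)              # all truly evaluated samples

    for _ in range(generations):
        surrogate = SVR(kernel="rbf", C=10.0).fit(np.array(archive_X), np.array(archive_y))
        offspring = pop + rng.normal(0.0, 0.3, pop.shape)    # Gaussian mutation
        pred = surrogate.predict(offspring)                  # cheap approximate fitness
        best_idx = np.argsort(pred)[:true_evals_per_gen]     # pre-screen with the surrogate
        for i in best_idx:                                   # few expensive evaluations per generation
            y = expensive_fitness(offspring[i])
            archive_X.append(offspring[i]); archive_y.append(y)
            worst = int(np.argmax(fit))
            if y < fit[worst]:                               # replace the worst parent
                pop[worst], fit[worst] = offspring[i], y

    print("best true fitness found:", fit.min())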

Keywords: Meta-model, evolutionary algorithm, stochastic technique, fitness function, optimization, support vector machine.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2070
260 Radioactivity Assessment of Sediments in Negombo Lagoon Sri Lanka

Authors: H. M. N. L. Handagiripathira

Abstract:

The distributions of naturally occurring and anthropogenic radioactive materials were determined in surface sediments taken at 27 different locations along the bank of Negombo Lagoon in Sri Lanka. Hydrographic parameters of the lagoon water and grain size analyses of the sediment samples were also carried out for this study. The conductivity of the adjacent water varied from 13.6 mS/cm near the southern end to 55.4 mS/cm near the northern end of the lagoon, while salinity levels varied similarly from 7.2 psu to 32.1 psu. The average pH of the water was 7.6 and the average water temperature was 28.7 °C. The grain size analysis gave mass fractions of sand (60.9%), fine sand (30.6%) and fine silt+clay (1.3%) across the sampling locations. The surface sediment samples, 1 kg wet weight each taken from the upper 5-10 cm layer, were oven-dried at 105 °C for 24 hours to constant weight, homogenized and sieved through a 2 mm sieve (IAEA technical series no. 295). The radioactivity concentrations were determined using the gamma spectrometry technique. An Ultra Low Background Broad Energy High Purity Ge detector, BEGe (Model BE5030, Canberra), was used for the radioactivity measurements, with Canberra Industries' Laboratory Source-less Calibration Software (LabSOCS) mathematical efficiency calibration approach and the Geometry Composer software. The mean activity concentrations were found to be 24 ± 4, 67 ± 9, 181 ± 10, 59 ± 8, 3.5 ± 0.4 and 0.47 ± 0.08 Bq/kg for 238U, 232Th, 40K, 210Pb, 235U and 137Cs, respectively. The mean absorbed dose rate in air, radium equivalent activity, external hazard index, annual gonadal dose equivalent and annual effective dose equivalent were 60.8 nGy/h, 137.3 Bq/kg, 0.4, 425.3 mSv/year and 74.6 mSv/year, respectively. The results of this study provide baseline information on the natural and artificial radioactive isotopes in the lagoon sediments and on the associated radiological risk.
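The abstract does not state which expressions were used for the radiological indices; the sketch below assumes the standard UNSCEAR-style formulas for radium equivalent activity, absorbed dose rate in air and external hazard index, evaluated with the mean activity concentrations reported above (226Ra approximated by the 238U value).

```python
# Standard radiological indices from mean activity concentrations (Bq/kg).
# Assumed UNSCEAR-style coefficients; 226Ra is approximated by the 238U value.
A_Ra, A_Th, A_K = 24.0, 67.0, 181.0

ra_eq = A_Ra + 1.43 * A_Th + 0.077 * A_K              # radium equivalent activity, Bq/kg
dose  = 0.462 * A_Ra + 0.604 * A_Th + 0.0417 * A_K    # absorbed dose rate in air, nGy/h
h_ex  = A_Ra / 370.0 + A_Th / 259.0 + A_K / 4810.0    # external hazard index (<= 1 is safe)

print(f"Ra_eq = {ra_eq:.1f} Bq/kg, D = {dose:.1f} nGy/h, H_ex = {h_ex:.2f}")
# With these inputs the values land close to the reported 137.3 Bq/kg,
# 60.8 nGy/h and 0.4, which is consistent with the assumed formulas.
```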

Keywords: Gamma spectrometry, lagoon, radioactivity, sediments.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 530
259 Preparing Data for Calibration of Mechanistic-Empirical Pavement Design Guide in Central Saudi Arabia

Authors: Abdulraaof H. Alqaili, Hamad A. Alsoliman

Abstract:

Progress in pavement design has produced a design method titled the Mechanistic-Empirical Pavement Design Guide (MEPDG). Saudi Arabia's road and highway network is currently evolving as a result of increasing traffic volumes, and the MEPDG is therefore being implemented for flexible pavement design by the Saudi Ministry of Transportation. Implementation of the MEPDG for local pavement design requires the calibration of its distress models to local conditions (traffic, climate, and materials). This paper aims to prepare data for the calibration of the MEPDG in central Saudi Arabia. The first goal is thus the collection of flexible pavement design data reflecting the local conditions of the Riyadh region. Since the collected data must be converted into MEPDG input data, the main goal of this paper is the analysis of the collected data. The analysis covers: truck classification, traffic growth factor, annual average daily truck traffic (AADTT), monthly adjustment factors (MAFi), vehicle class distribution (VCD), truck hourly distribution factors, axle load distribution factors (ALDF), the number of axles of each type (single, tandem, and tridem) per truck class, cloud cover percentage, and the road sections selected for the local calibration. Detailed descriptions of these input parameters are given in the paper, providing an approach for the successful implementation of the MEPDG. Local calibration of the MEPDG to the conditions of the Riyadh region can be performed based on the findings in this paper.
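As a brief illustration of how some of these traffic inputs are derived from classified counts, the sketch below computes a vehicle class distribution, monthly adjustment factors and AADTT; the count values are hypothetical and are not taken from the Riyadh data set.

```python
# Illustrative derivation of VCD, MAF and AADTT from classified truck counts.
# All counts are hypothetical placeholders.
import numpy as np

# Annual truck counts by FHWA class 4..13 (10 classes) at one station.
class_counts = np.array([1200, 800, 15000, 600, 2500, 9000, 400, 300, 700, 150])
vcd = 100.0 * class_counts / class_counts.sum()      # vehicle class distribution, percent

# Monthly truck volumes for one class; MAF_i = 12 * (month i) / (annual total),
# so the twelve factors sum to 12.
monthly = np.array([310, 295, 330, 320, 340, 300, 280, 290, 305, 325, 335, 315])
maf = 12.0 * monthly / monthly.sum()

aadtt = class_counts.sum() / 365.0                   # annual average daily truck traffic

print("VCD (%):", np.round(vcd, 1))
print("MAF:", np.round(maf, 2))
print("AADTT:", round(aadtt, 1))
```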

Keywords: Mechanistic-empirical pavement design guide, traffic characteristics, materials properties, climate, Riyadh.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1227
258 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

Authors: Amit Kr. Happy

Abstract:

This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visible image (VI) fusion for applications including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. The source images can come from different modalities, such as a visible camera and an IR thermal imager: visible images are captured from reflected radiation in the visible spectrum, while thermal images are formed from thermal (IR) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image and a thermal IR camera acquires the thermal source image. In this paper, image fusion algorithms based on the Multi-Scale Transform (MST) and a region-based selection rule with consistency verification are proposed and presented. The work includes an implementation of the proposed image fusion algorithm in MATLAB, together with a comparative analysis to decide the optimum number of MST levels and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are applied to assess the validity of the suggested method. Experiments show that the proposed approach is capable of producing good fusion results. In deploying image fusion approaches, several challenges arise with popular methods: the high computational cost and complex processing steps that make fusion accurate also make the algorithms hard to deploy in systems and applications that require real-time operation, high flexibility and low computational capability. The methods presented in this paper therefore offer good results with minimal time complexity.
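The sketch below illustrates the MST idea with a simple Laplacian-stack decomposition and a max-absolute coefficient rule; it is a generic example only and does not reproduce the specific transform, level count or region-based consistency verification used by the authors. The input arrays are random stand-ins for registered VI and IR images.

```python
# Generic multi-scale (Laplacian-stack) fusion with a max-absolute coefficient rule.
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_stack(img, levels=4, sigma=2.0):
    stack, current = [], img.astype(float)
    for _ in range(levels):
        blurred = gaussian_filter(current, sigma)
        stack.append(current - blurred)      # detail (band-pass) layer
        current = blurred
    stack.append(current)                    # residual low-pass layer
    return stack

def fuse(visible, infrared, levels=4):
    s_vi = laplacian_stack(visible, levels)
    s_ir = laplacian_stack(infrared, levels)
    fused = []
    for a, b in zip(s_vi[:-1], s_ir[:-1]):
        fused.append(np.where(np.abs(a) >= np.abs(b), a, b))  # keep the stronger detail
    fused.append(0.5 * (s_vi[-1] + s_ir[-1]))                 # average the base layers
    return np.clip(sum(fused), 0, 255)                        # recombine all layers

vi = np.random.randint(0, 256, (256, 256)).astype(float)      # stand-in visible image
ir = np.random.randint(0, 256, (256, 256)).astype(float)      # stand-in IR image
result = fuse(vi, ir)
print(result.shape, result.min(), result.max())
```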

Keywords: Image fusion, IR thermal imager, multi-sensor, Multi-Scale Transform.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 435
257 Minimizing the Drilling-Induced Damage in Fiber Reinforced Polymeric Composites

Authors: S. D. El Wakil, M. Pladsen

Abstract:

Fiber reinforced polymeric (FRP) composites are finding widespread industrial applications because of their exceptionally high specific strength and specific modulus of elasticity. Nevertheless, ready-for-use components or products made of FRP composites are seldom obtained directly: secondary processing by machining, particularly drilling, is almost always required to make holes for fastening components together into assemblies. This creates problems, since FRP composites are neither homogeneous nor isotropic. The problems encountered include damage in the region around the drilled hole and drilling-induced delamination of the plies, which occurs at both the entrance and exit planes of the workpiece; the functionality of the workpiece is detrimentally affected as a result. The current work was carried out with the aim of eliminating, or at least minimizing, the workpiece damage associated with drilling of FRP composites. Each test specimen was a woven graphite fiber reinforced/epoxy composite with a thickness of 12.5 mm (0.5 inch). A large number of test specimens were subjected to drilling operations with different combinations of feed rates and cutting speeds. The drilling-induced damage was taken as the absolute value of the difference between the drilled hole diameter and the nominal diameter, expressed as a percentage of the nominal diameter. This measure was determined for each combination of feed rate and cutting speed, and a matrix of the resulting values was established, in which the columns correspond to varying feed rates and the rows to varying cutting speeds. Next, the analysis of variance (ANOVA) approach was employed using Minitab software in order to identify the combination that minimizes the drilling-induced damage. Experimental results show that low feed rates coupled with low cutting speeds yielded the best results.
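The sketch below illustrates the damage metric and a two-factor ANOVA of the kind run in Minitab, here expressed with statsmodels; the feed rates, cutting speeds, measured hole diameters and the nominal diameter are made-up values for illustration only.

```python
# Damage metric (|measured - nominal| as % of nominal) and a two-factor ANOVA.
# All experimental values below are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

NOMINAL = 6.35  # assumed nominal hole diameter, mm

data = pd.DataFrame({
    "feed":     [0.05, 0.05, 0.10, 0.10, 0.05, 0.05, 0.10, 0.10],   # mm/rev
    "speed":    [1000, 2000, 1000, 2000, 1000, 2000, 1000, 2000],   # rpm
    "diameter": [6.37, 6.41, 6.43, 6.48, 6.36, 6.42, 6.44, 6.47],   # measured, mm
})
data["damage"] = 100.0 * (data["diameter"] - NOMINAL).abs() / NOMINAL

# Two-factor ANOVA: does feed rate and/or cutting speed affect the damage?
model = ols("damage ~ C(feed) + C(speed)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```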

Keywords: Drilling of composites, dimensional accuracy of holes drilled in composites, delamination and charring, graphite-epoxy composites.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 808
256 Hyperspectral Imaging and Nonlinear Fukunaga-Koontz Transform Based Food Inspection

Authors: Hamidullah Binol, Abdullah Bal

Abstract:

Nowadays, food safety is a great public concern; therefore, robust and effective techniques are required for assessing the safety of goods. Hyperspectral imaging (HSI) is an attractive technique for researchers inspecting food quality and safety, with applications such as meat quality assessment, automated poultry carcass inspection, quality evaluation of fish, bruise detection in apples, quality analysis and grading of citrus fruits, bruise detection in strawberries, visualization of sugar distribution in melons, measuring the ripening of tomatoes, defect detection in pickling cucumbers, and classification of wheat kernels. HSI can concurrently collect large amounts of spatial and spectral data on the observed objects, yielding detection capabilities that cannot be achieved with either imaging or spectroscopy alone. This paper presents a nonlinear technique based on the kernel Fukunaga-Koontz transform (KFKT) for detecting the fat content of ground meat using HSI. The KFKT, the nonlinear version of the FKT, is one of the most effective techniques for solving two-class (two-pattern) problems. The conventional FKT method is improved with kernel machines to increase the nonlinear discrimination ability and capture higher-order statistics of the data. The approach proposed in this paper aims to segment the fat content of the ground meat by treating fat as the target class to be separated from the remaining classes (regarded as clutter). We applied the KFKT to visible and near-infrared (VNIR) hyperspectral images of ground meat to determine the fat percentage. The experimental studies indicate that the proposed technique produces high detection performance for the fat ratio in ground meat.
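For reference, the sketch below shows the linear Fukunaga-Koontz transform for a two-class (target vs. clutter) problem; the kernelized variant used in the paper adds a kernel Gram-matrix formulation on top of this idea. The spectra are random placeholders standing in for fat (target) and non-fat (clutter) pixels.

```python
# Linear Fukunaga-Koontz transform: whiten the summed class covariances, then
# find the shared eigenbasis in which the two classes' eigenvalues sum to one.
import numpy as np

rng = np.random.default_rng(1)
target  = rng.normal(0.0, 1.0, (200, 50))    # target-class spectra (pixels x bands)
clutter = rng.normal(0.5, 1.5, (300, 50))    # clutter-class spectra

S1 = np.cov(target, rowvar=False)
S0 = np.cov(clutter, rowvar=False)

# Whitening transform P = V D^{-1/2}, so that P^T (S1 + S0) P = I.
vals, vecs = np.linalg.eigh(S1 + S0)
P = vecs @ np.diag(1.0 / np.sqrt(vals))

S1_t = P.T @ S1 @ P                          # eigenvalues of S1_t and S0_t sum to 1
e_vals, e_vecs = np.linalg.eigh(S1_t)
target_basis = e_vecs[:, ::-1][:, :5]        # top eigenvectors favour the target class

def target_score(x):
    # Energy of a whitened spectrum in the target-dominant subspace.
    return np.sum((target_basis.T @ (P.T @ x)) ** 2)

print(target_score(target[0]), target_score(clutter[0]))
```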

Keywords: Food (Ground meat) inspection, Fukunaga-Koontz transform, hyperspectral imaging, kernel methods.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1501