Search results for: sensor registration problem
7191 Combating Domestic Violence in Malaysia: Issues and Challenges
Authors: Aspalella A. Rahman
Abstract:
Domestic violence is not an uncommon phenomenon throughout the world. Malaysia is no exception. However, the exact prevalence of domestic violence in Malaysia is difficult to capture due to cultural understanding and sensitivity of the issues existing in the society. This paper aims to examine the issues and problems with the law of domestic violence in Malaysia. As such, it will mainly rely on statutes as its primary sources of information. It will analyse the scope and provisions of the Penal Code as well as the Domestic Violence Act 1994. Any shortcomings and gaps in the laws will be highlighted. It is submitted that domestic violence remains a problem in Malaysia. Although many strategies and plans have been implemented in attempting to combat this social problem, it remains unresolved. This is due to the inefficient implementation of the law. Although much has been done, there is still more to be done by the Malaysian government to combat domestic violence more effectively. For this reason, significant cooperation between the law enforcement authorities, NGOs, and the community must be established.
Keywords: challenges, domestic violence, issues, Malaysia
Procedia PDF Downloads 300
7190 Addressing the Oracle Problem: Decentralized Authentication in Blockchain-Based Green Hydrogen Certification
Authors: Volker Wannack
Abstract:
The aim of this paper is to present a concept for addressing the Oracle Problem in the context of hydrogen production using renewable energy sources. The proposed approach relies on the authentication of the electricity used for hydrogen production by multiple surrounding actors with similar electricity generation facilities, which attest to the authenticity of the electricity production. The concept introduces an Authenticity Score assigned to each certificate, as well as a Trust Score assigned to each witness. Each certificate must be attested by different actors with a sufficient Trust Score to achieve an Authenticity Score above a predefined threshold, thereby demonstrating that the produced hydrogen is indeed "green."
Keywords: hydrogen, blockchain, sustainability, structural change
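The attestation logic described above can be sketched in a few lines. The function names, the additive aggregation of Trust Scores, and the threshold value are illustrative assumptions; the paper does not fix a concrete scheme.

```python
# A sketch of the certificate attestation logic: a certificate is "green"
# once enough distinct witnesses with sufficient combined trust attest it.
# All names and numeric values here are illustrative assumptions.

AUTH_THRESHOLD = 0.8  # minimum Authenticity Score for a "green" label

def authenticity_score(attestations):
    """Aggregate the witnesses' Trust Scores into an Authenticity Score
    (here: a capped sum; the paper leaves the exact aggregation open)."""
    return min(1.0, sum(trust for _, trust in attestations))

def is_green(attestations, min_witnesses=3):
    """A certificate counts as green once enough distinct witnesses with
    sufficient combined trust have attested to it."""
    distinct_actors = {actor for actor, _ in attestations}
    return (len(distinct_actors) >= min_witnesses
            and authenticity_score(attestations) >= AUTH_THRESHOLD)

attestations = [("plant_A", 0.4), ("plant_B", 0.3), ("plant_C", 0.2)]
print(is_green(attestations))  # → True: three witnesses, combined trust 0.9
```

In a blockchain deployment, each attestation would be an on-chain signature from a neighboring generation facility; here they are plain tuples for clarity.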
Procedia PDF Downloads 62
7189 Seat Assignment Model for Student Admissions Process at Saudi Higher Education Institutions
Authors: Mohammed Salem Alzahrani
Abstract:
In this paper, the student admission process is studied to optimize the assignment of vacant seats with three main objectives: utilizing all vacant seats, satisfying all program-of-study admission requirements, and maintaining fairness among all candidates. The Seat Assignment Method (SAM) is used to build the model and solve the optimization problem with the help of the Northwest Corner Method and the Least Cost Method. A closed formula is derived for the priority of assigning a seat to a candidate based on SAM.
Keywords: admission process model, assignment problem, Hungarian Method, Least Cost Method, Northwest Corner Method, SAM
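The Northwest Corner Method mentioned above admits a compact sketch. The seat supplies and candidate demands below are illustrative, not data from the paper.

```python
# A minimal sketch of the Northwest Corner Method, which produces an
# initial basic feasible solution of a balanced transportation problem.
# The seat counts below are made up for illustration.

def northwest_corner(supply, demand):
    """Fill the allocation matrix starting from the top-left cell,
    moving right or down as supplies and demands are exhausted."""
    supply, demand = supply[:], demand[:]   # work on copies
    rows, cols = len(supply), len(demand)
    alloc = [[0] * cols for _ in range(rows)]
    i = j = 0
    while i < rows and j < cols:
        qty = min(supply[i], demand[j])
        alloc[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            i += 1   # this program's seats are exhausted
        else:
            j += 1   # this candidate group is fully assigned
    return alloc

# Programs (rows) offer seats; candidate groups (columns) request them.
seats = [30, 40, 30]
requests = [20, 30, 50]
print(northwest_corner(seats, requests))  # → [[20, 10, 0], [0, 20, 20], [0, 0, 30]]
```

The result is only a starting point; the Least Cost Method (or stepping-stone/MODI improvement) would then refine it toward an optimal assignment.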
Procedia PDF Downloads 494
7188 An Output Oriented Super-Efficiency Model for Considering Time Lag Effect
Authors: Yanshuang Zhang, Byungho Jeong
Abstract:
There exists some time lag between the consumption of inputs and the production of outputs. This time lag effect should be considered in calculating the efficiency of decision making units (DMUs). Recently, a couple of DEA models were developed for considering the time lag effect in the efficiency evaluation of research activities. However, these models cannot discriminate among efficient DMUs because of the nature of the basic DEA model, in which efficiency scores are limited to ‘1’. This problem can be resolved by a super-efficiency model. However, a super-efficiency model sometimes causes an infeasibility problem. This paper suggests an output oriented super-efficiency model for efficiency evaluation under the consideration of the time lag effect. A case example using a long-term research project is given to compare the suggested model with the MpO model.
Keywords: DEA, super-efficiency, time lag, research activities
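Under standard DEA notation (inputs x_ij, outputs y_rj, intensity variables λ_j for DMUs j = 1, …, n), an output-oriented super-efficiency program of the kind discussed above can be written as follows. This is the textbook Andersen–Petersen form under constant returns to scale; the paper's suggested model, which additionally handles the time lag, may differ in detail.

```latex
\begin{aligned}
\max_{\varphi,\,\lambda}\quad & \varphi \\
\text{s.t.}\quad
& \sum_{j \neq o} \lambda_j x_{ij} \le x_{io}, && i = 1,\dots,m,\\
& \sum_{j \neq o} \lambda_j y_{rj} \ge \varphi\, y_{ro}, && r = 1,\dots,s,\\
& \lambda_j \ge 0, && j \neq o.
\end{aligned}
```

Excluding the evaluated DMU o from the reference set is what lets efficient DMUs receive scores other than 1, restoring discrimination; infeasibility arises when no combination of the remaining DMUs satisfies the constraints, which is the difficulty the output orientation is meant to mitigate.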
Procedia PDF Downloads 655
7187 Modifying Assessment Modes in the Science Classroom as a Solution to Examination Malpractice
Authors: Catherine Omole
Abstract:
Examination malpractice includes acts that tamper with the collection of accurate results during the conduct of an examination, thereby giving undue advantage to a student over his colleagues. Even though examination malpractice has been a lingering problem, examinations may not be easy to do away with completely, as they remain an important feedback tool in the learning process with several other functions, e.g., for the purposes of selection, placement, certification, and promotion. Examination malpractice has created many problems, such as reliance on a weak workforce based on false assessment results. The question is: why does this problem persist, despite the measures that have been taken to curb this ugly trend over the years? This opinion paper identifies modifications that could relieve students of examination stress, thus increasing their effort towards effective learning and discouraging examination malpractice in the long run.
Keywords: assessment, examination malpractice, learning, science classroom
Procedia PDF Downloads 257
7186 Scale-Up Study of Gas-Liquid Two Phase Flow in Downcomer
Authors: Jayanth Abishek Subramanian, Ramin Dabirian, Ilias Gavrielatos, Ram Mohan, Ovadia Shoham
Abstract:
Downcomers are important conduits for multiphase flow transfer from offshore platforms to the seabed. Uncertainty in the predictions of the pressure drop of multiphase flow between platforms is often dominated by the uncertainty associated with the prediction of holdup and pressure drop in the downcomer. The objectives of this study are to conduct experimental and theoretical scale-up study of the downcomer. A 4-in. diameter vertical test section was designed and constructed to study two-phase flow in downcomer. The facility is equipped with baffles for flow area restriction, enabling interchangeable annular slot openings between 30% and 61.7%. Also, state-of-the-art instrumentation, the capacitance Wire-Mesh Sensor (WMS) was utilized to acquire the experimental data. A total of 76 experimental data points were acquired, including falling film under 30% and 61.7% annular slot opening for air-water and air-Conosol C200 oil cases as well as gas carry-under for 30% and 61.7% opening utilizing air-Conosol C200 oil. For all experiments, the parameters such as falling film thickness and velocity, entrained liquid holdup in the core, gas void fraction profiles at the cross-sectional area of the liquid column, the void fraction and the gas carry under were measured. The experimental results indicated that the film thickness and film velocity increase as the flow area reduces. Also, the increase in film velocity increases the gas entrainment process. Furthermore, the results confirmed that the increase of gas entrainment for the same liquid flow rate leads to an increase in the gas carry-under. A power comparison method was developed to enable evaluation of the Lopez (2011) model, which was created for full bore downcomer, with the novel scale-up experiment data acquired from the downcomer with the restricted area for flow. 
Comparison between the experimental data and the model predictions shows a maximum absolute average discrepancy of 22.9% and 21.8% for the falling film thickness and velocity, respectively; and a maximum absolute average discrepancy of 22.2% for the fraction of gas carried with the liquid (oil).
Keywords: two phase flow, falling film, downcomer, wire-mesh sensor
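The absolute average discrepancy statistic quoted above can be sketched as follows; the sample film-thickness values are made up for illustration and are not the experimental data.

```python
# A sketch of the "absolute average discrepancy" used to compare model
# predictions with experimental data. The film-thickness values below
# are made-up illustrations, not the measured data from the paper.

def abs_avg_discrepancy(measured, predicted):
    """Mean of |predicted - measured| / measured, expressed in percent."""
    terms = [abs(p - m) / m for m, p in zip(measured, predicted)]
    return 100.0 * sum(terms) / len(terms)

film_thickness_exp = [2.0, 2.5, 3.1]  # mm, illustrative measurements
film_thickness_mod = [2.3, 2.9, 3.6]  # mm, illustrative model predictions
print(round(abs_avg_discrepancy(film_thickness_exp, film_thickness_mod), 1))
```

The "maximum" in the abstract refers to taking the worst such average over the tested operating conditions.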
Procedia PDF Downloads 165
7185 Analysis of Gas Transport and Sorption Processes in Coal under Confining Pressure Conditions
Authors: Anna Pajdak, Mateusz Kudasik, Norbert Skoczylas, Leticia Teixeira Palla Braga
Abstract:
A substantial majority of gas transport and sorption research on coal is carried out on samples that are free of stress. In natural conditions, coal occurs at considerable depths, often exceeding 1000 meters, where it is subjected to geostatic pressure. Thus, in natural conditions, the sorption capacity of coal subjected to geostatic pressure can differ considerably from the sorption capacity of stress-free coal determined in laboratory conditions. The work presents the results of filtration and sorption tests of gases in coal under confining pressure conditions. The tests were carried out on the authors' device, which ensures: confining pressure regulation in the range of 0-30 MPa, isobaric gas pressure conditions, and registration of changes in sample volume during gas saturation. Based on the conducted research, it was found, among other things, that the sorption capacity of coal relative to CO₂ was reduced by about 15% as a result of changing the confining pressure exerted on the sample from 1.5 MPa to 30 MPa. The same change in sample load caused a significant, more than tenfold reduction in coal permeability to CO₂. The results confirmed that a load on coal corresponding to a hydrostatic pressure at 1000 meters underground reduces its permeability and sorption properties. These results are so important that the effect of load on the sorption properties of coal should be taken into account in laboratory studies on the applicability of CO₂ Enhanced Coal Bed Methane Recovery (CO₂-ECBM) technology.
Keywords: coal, confining pressure, gas transport, sorption
Procedia PDF Downloads 117
7184 Free Will and Compatibilism in Decision Theory: A Solution to Newcomb’s Paradox
Authors: Sally Heyeon Hwang
Abstract:
Within decision theory, there are normative principles that dictate how one should act, in addition to empirical theories of actual behavior. As a normative guide to one’s actual behavior, evidential or causal decision-theoretic equations allow one to identify outcomes with maximal utility values. The choice that each person makes, however, will, of course, differ according to varying assignments of weight and probability values. Regarding these different choices, it remains a subject of considerable philosophical controversy whether individual subjects have the capacity to exercise free will with respect to the assignment of probabilities, or whether instead the assignment is in some way constrained. A version of this question is given a precise form in Richard Jeffrey’s assumption that free will is necessary for Newcomb’s paradox to count as a decision problem. This paper will argue, against Jeffrey, that decision theory does not require the assumption of libertarian freedom. One of the hallmarks of decision-making is its application across a wide variety of contexts; the implications of a background assumption of free will are similarly varied. One constant across the contexts of decision is that there are always at least two levels of choice for a given agent, depending on the degree of prior constraint. Within the context of Newcomb’s problem, when the predictor is attempting to guess the choice the agent will make, he or she is analyzing the determined aspects of the agent, such as past characteristics, experiences, and knowledge. On the other hand, as David Lewis’ backtracking argument concerning the relationship between past and present events brings to light, there are similarly varied ways in which the past can actually be dependent on the present. One implication of this argument is that even in deterministic settings, an agent can have more free will than it may seem.
This paper will thus argue against the view that a stable background assumption of free will or determinism in decision theory is necessary, arguing instead for a compatibilist decision theory yielding a novel treatment of Newcomb’s problem.
Keywords: decision theory, compatibilism, free will, Newcomb’s problem
Procedia PDF Downloads 321
7183 The Concept of Art: A Redefinition or Reconstruction
Authors: Patricia Agboro
Abstract:
The definition of a concept is quite important in any philosophical discourse, as it serves as a guide in the analysis of that concept. In the sciences, arriving at a consensus regarding concepts is quite easily achievable due to the nature of the discipline. Problems arise when one delves into the realm of the humanities. Discourses in the humanities are largely perspectival because questions of value come into play. Defining the concept of Art is no different, as attempts at definition have yielded unresolved and problematic issues. A major problem arising from such attempts is the exclusion of other art forms. In this paper, therefore, we call for the rejection of attempts at providing a comprehensive definition of Art, since it is clear that the collection of definitions provided so far has failed to capture the nuances and intricacies of the infinite variety of art forms that there are. Rather, a more fruitful approach to philosophical discourse on Art is not to construe the theories of Art per se but to reconstruct them as a collection of criteria for determining artistic excellence.
Keywords: art, creativity, definition, reconstruction
Procedia PDF Downloads 174
7182 Enhanced Tensor Tomographic Reconstruction: Integrating Absorption, Refraction and Temporal Effects
Authors: Lukas Vierus, Thomas Schuster
Abstract:
A general framework is examined for dynamic tensor field tomography within an inhomogeneous medium characterized by refraction and absorption, treated as an inverse source problem concerning the associated transport equation. Guided by Fermat’s principle, the Riemannian metric within the specified domain is determined by the medium's refractive index. While considerable literature exists on the inverse problem of reconstructing a tensor field from its longitudinal ray transform within a static Euclidean environment, limited inversion formulas and algorithms are available for general Riemannian metrics and time-varying tensor fields. It is established that tensor field tomography, akin to an inverse source problem for a transport equation, persists in dynamic scenarios. Framing dynamic tensor tomography as an inverse source problem embodies a comprehensive perspective within this domain. Ensuring well-defined forward mappings necessitates establishing existence and uniqueness for the underlying transport equations. However, the bilinear forms of the associated weak formulations fail to meet the coercivity condition. Consequently, recourse to viscosity solutions is taken, demonstrating their unique existence within suitable Sobolev spaces (in the static case) and Sobolev-Bochner spaces (in the dynamic case), under a specific assumption restricting variations in the refractive index. Notably, the adjoint problem can also be reformulated as a transport equation, with analogous results regarding uniqueness. Analytical solutions are expressed as integrals over geodesics, facilitating more efficient evaluation of forward and adjoint operators compared to solving partial differential equations. 
Numerical experiments are conducted using a Nesterov-accelerated Landweber method, encompassing various fields, absorption coefficients, and refractive indices, thereby illustrating the enhanced reconstruction achieved through this holistic modeling approach.
Keywords: attenuated refractive dynamic ray transform of tensor fields, geodesics, transport equation, viscosity solutions
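A Nesterov-accelerated Landweber iteration of the kind used in the numerical experiments is commonly written as below, where A denotes the forward (ray transform) operator, A* its adjoint evaluated via the geodesic integrals, g the data, and ω a step size. This is the generic form; the paper's exact variant may differ.

```latex
\begin{aligned}
z^{k}   &= f^{k} + \tfrac{k-1}{k+2}\,\bigl(f^{k} - f^{k-1}\bigr),\\
f^{k+1} &= z^{k} - \omega\, A^{*}\bigl(A z^{k} - g\bigr),
\end{aligned}
```

The momentum term in the first line is what distinguishes the accelerated scheme from plain Landweber, which sets z^k = f^k; convergence of the residual improves from O(1/k) to O(1/k²) under the usual step-size restriction ω < 2/‖A‖².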
Procedia PDF Downloads 50
7181 On Estimating the Headcount Index by Using the Logistic Regression Estimator
Authors: Encarnación Álvarez, Rosa M. García-Fernández, Juan F. Muñoz, Francisco J. Blanco-Encomienda
Abstract:
The problem of estimating a proportion has important applications in the field of economics and, in general, in many areas such as the social sciences. A common application in economics is the estimation of the headcount index. In this paper, we define the general headcount index as a proportion. Furthermore, we introduce a new quantitative method for estimating the headcount index. In particular, we suggest using the logistic regression estimator for the problem of estimating the headcount index. Based on a real data set, results derived from Monte Carlo simulation studies indicate that the logistic regression estimator can be more accurate than the traditional estimator of the headcount index.
Keywords: poverty line, poor, risk of poverty, Monte Carlo simulations, sample
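A hedged sketch of the idea: fit a logistic regression for the probability of being below the poverty line given covariates, then average the fitted probabilities over the sample. The synthetic income model, the single covariate, and the poverty-line value below are assumptions for illustration only, not the paper's data or its exact estimator.

```python
# Sketch: estimate the headcount index (share of people below the poverty
# line) via a logistic regression, and compare it with the plain sample
# proportion. All data here are synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
education = rng.normal(12.0, 3.0, n)           # years of schooling (covariate)
income = 1000 + 300 * education + rng.normal(0, 1500, n)
poverty_line = 4000.0
poor = (income < poverty_line).astype(int)     # 1 = below the poverty line

X = education.reshape(-1, 1)
model = LogisticRegression().fit(X, poor)
headcount_lr = model.predict_proba(X)[:, 1].mean()   # model-based estimate
headcount_naive = poor.mean()                        # traditional proportion
print(round(headcount_lr, 3), round(headcount_naive, 3))
```

On a full sample the two estimates nearly coincide; the gains reported in the paper come from survey settings where the model borrows strength from covariates.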
Procedia PDF Downloads 420
7180 Design and Implementation of Wireless Synchronized AI System for Security
Authors: Saradha Priya
Abstract:
Developing a virtual human is very important to meet the challenges that occur in many applications where humans find it difficult or risky to perform the task. A robot is a machine that can perform a task automatically or with guidance. Robotics is generally a combination of artificial intelligence and physical machines (motors), while computational intelligence involves the programmed instructions. This project proposes a robotic vehicle that has a camera, a PIR sensor, and text-command-based movement. It is specially designed to perform surveillance and a few other tasks in the most efficient way. Serial communication has been established between a remote Base Station, a GUI Application, and a PC.
Keywords: ZigBee, camera, PIR sensor, wireless transmission, DC motor
Procedia PDF Downloads 346
7179 A Novel Exploration/Exploitation Policy Accelerating Learning in Both Stationary and Non-Stationary Environment Navigation Tasks
Authors: Wiem Zemzem, Moncef Tagina
Abstract:
In this work, we address the problem of an autonomous mobile robot navigating in a large, unknown, and dynamic environment using reinforcement learning abilities. This problem is principally related to the exploration/exploitation dilemma, especially the need to find a solution letting the robot detect environmental change and also learn in order to adapt to the new environmental form without ignoring knowledge already acquired. Firstly, a new action selection strategy, called ε-greedy-MPA (the ε-greedy policy favoring the most promising actions), is proposed. Unlike existing exploration/exploitation policies (EEPs) such as ε-greedy and Boltzmann, the new EEP does not rely only on the information of the current state but also uses that of the eventual next states. Secondly, as the environment is large, an exploration favoring the least recently visited states is added to the proposed EEP in order to accelerate learning. Finally, various simulations with the ball-catching problem have been conducted to evaluate the ε-greedy-MPA policy. The results of simulated experiments show that combining this policy with the Q-learning method is more effective and efficient compared with the ε-greedy policy in stationary environments and the utility-based reinforcement learning approach in non-stationary environments.
Keywords: autonomous mobile robot, exploration/exploitation policy, large dynamic environment, reinforcement learning
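The baseline ε-greedy rule that ε-greedy-MPA extends can be sketched as follows. The MPA refinement (scoring eventual next states) and the recency-based exploration bonus are deliberately not reproduced here, since the paper does not give their exact form.

```python
import random

# Baseline epsilon-greedy action selection over one state's Q-values.
# The paper's epsilon-greedy-MPA additionally biases exploration toward
# promising successor states and least recently visited states; those
# refinements are omitted from this sketch.

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """With probability epsilon, explore a uniformly random action;
    otherwise exploit the action with the highest Q-value."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

random.seed(1)
q = [0.2, 0.9, 0.4]                 # Q-values of one state, illustrative
actions = [epsilon_greedy(q, epsilon=0.1) for _ in range(100)]
print(actions.count(1))             # the greedy action dominates
```

Tuning ε trades off adaptation speed against exploitation, which is exactly the dilemma that becomes acute in the dynamic environments the paper targets.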
Procedia PDF Downloads 415
7178 Training Undergraduate Engineering Students in Robotics and Automation through Model-Based Design Training: A Case Study at Assumption University of Thailand
Authors: Sajed A. Habib
Abstract:
Problem-based learning (PBL) is a student-centered pedagogy that originated in the medical field and has also been used extensively in other knowledge disciplines, with recognized advantages and limitations. PBL has been used in various undergraduate engineering programs with mixed outcomes. The current fourth industrial revolution (the digital era, or Industry 4.0) has made it essential for many science and engineering students to receive effective training in advanced courses such as industrial automation and robotics. This paper presents a case study at Assumption University of Thailand, where a PBL-like approach was used to teach some aspects of automation and robotics to selected groups of undergraduate engineering students. These students were given basic-level training in automation prior to participating in a subsequent training session in which they solved technical problems of increased complexity. The participating students’ evaluation of the training sessions in terms of learning effectiveness, skills enhancement, and incremental knowledge following the problem-solving session was captured through a follow-up survey consisting of 14 questions and a 5-point scoring system. From the most recent training event, an overall 70% of the respondents indicated that their skill levels were enhanced to a much greater level than before the training, whereas 60.4% of the respondents from the same event indicated that the knowledge they gained from the session was much greater than what they had prior to the training. The instructor-facilitator involved in the training events suggested that this method of learning was more suitable for senior/advanced-level students than for those at the freshman level, as certain skills needed to participate effectively in such problem-solving sessions are acquired over a period of time, not instantly.
Keywords: automation, industry 4.0, model-based design training, problem-based learning
Procedia PDF Downloads 132
7177 Adaptive Beamforming with Steering Error and Mutual Coupling between Antenna Sensors
Authors: Ju-Hong Lee, Ching-Wei Liao
Abstract:
Owing to close antenna spacing between antenna sensors within a compact space, a part of the data in one antenna sensor outflows to other antenna sensors when the antenna sensors in an antenna array operate simultaneously. This phenomenon is called the mutual coupling effect (MCE). It has been shown that the performance of antenna array systems can be degraded when the antenna sensors are in close proximity. Especially in systems equipped with massive numbers of antenna sensors, degradation of beamforming performance due to the MCE is significant and inevitable. Moreover, it has been shown that even a small angle error between the true direction angle of the desired signal and the steering angle deteriorates the effectiveness of an array beamforming system. However, the true direction vector of the desired signal may not be exactly known in some applications, e.g., in land mobile-cellular wireless systems. Therefore, it is worth developing robust techniques to deal with the problems due to the MCE and steering angle error in array beamforming systems. In this paper, we present an efficient technique for performing adaptive beamforming with robust capabilities against the MCE and the steering angle error. Only the data vector received by an antenna array is required by the proposed technique. By using the received array data vector, a correlation matrix is constructed to replace the original correlation matrix associated with the received array data vector. Then, the mutual coupling matrix due to the MCE on the antenna array is estimated through a recursive algorithm. An appropriate estimate of the direction angle of the desired signal can also be obtained during the recursive process. Based on the estimated mutual coupling matrix, the estimated direction angle, and the reconstructed correlation matrix, the proposed technique can effectively cure the performance degradation due to steering angle error and MCE.
The novelty of the proposed technique is that the implementation procedure is very simple and the resulting adaptive beamforming performance is satisfactory. Simulation results show that the proposed technique provides much better beamforming performance, without complicated computational requirements, as compared with the existing robust techniques.
Keywords: adaptive beamforming, mutual coupling effect, recursive algorithm, steering angle error
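For context, a minimal sketch of the classical MVDR (Capon) beamformer, the kind of baseline the proposed robust technique improves on, is given below. The mutual-coupling and steering-error compensation of the paper is not modeled here, and the array geometry and signal parameters are illustrative assumptions.

```python
import numpy as np

# Minimal MVDR (Capon) beamformer sketch for a uniform linear array.
# The paper's mutual-coupling matrix estimation and steering-angle
# correction are NOT modeled; parameters below are illustrative.

def steering_vector(n_sensors, theta_deg, spacing=0.5):
    """ULA steering vector; sensor spacing in wavelengths."""
    k = np.arange(n_sensors)
    phase = 2j * np.pi * spacing * k * np.sin(np.deg2rad(theta_deg))
    return np.exp(phase)

def mvdr_weights(R, a):
    """w = R^-1 a / (a^H R^-1 a): minimize output power subject to
    unit gain toward the steering direction a."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

n = 8
a_desired = steering_vector(n, 10.0)    # assumed (possibly erroneous) look angle
a_interf = steering_vector(n, -40.0)    # interference direction
# Covariance: 20 dB interferer plus unit-power sensor noise
R = 100.0 * np.outer(a_interf, a_interf.conj()) + np.eye(n)
w = mvdr_weights(R, a_desired)
gain_desired = abs(w.conj() @ a_desired)   # distortionless constraint
gain_interf = abs(w.conj() @ a_interf)     # deep null on the interferer
print(round(gain_desired, 6), round(gain_interf, 6))
```

The paper's contribution is precisely that this textbook solution collapses when the true steering vector differs from `a_desired` (steering error) or when R is distorted by mutual coupling, hence the reconstructed correlation matrix and the recursive coupling estimate.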
Procedia PDF Downloads 320
7176 Globally Attractive Mild Solutions for Non-Local in Time Subdiffusion Equations of Neutral Type
Authors: Jorge Gonzalez Camus, Carlos Lizama
Abstract:
In this work, the existence of at least one globally attractive mild solution to the Cauchy problem is proved for a fractional evolution equation of neutral type involving the fractional derivative in the Caputo sense. An almost sectorial operator on a Banach space X and a kernel belonging to a large class appear in the equation, which covers many relevant cases from physics applications, in particular, the important case of time-fractional evolution equations of neutral type. The main tools used in this work were the Hausdorff measure of noncompactness and fixed point theorems, specifically of Darbo type. Initially, the equation is a Cauchy problem involving a fractional derivative in the Caputo sense. Then, the equivalent integral version is formulated; defining a convenient functional, using the analytic integral resolvent operator, and verifying the hypotheses of the Darbo-type fixed point theorem give the existence of a mild solution for the initial problem. Furthermore, each mild solution is globally attractive, a property that is desired in the asymptotic behavior of such solutions.
Keywords: attractive mild solutions, integral Volterra equations, neutral type equations, non-local in time equations
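For reference, the Caputo fractional derivative of order α ∈ (0, 1) invoked above is conventionally defined as

```latex
{}^{C}\!D_t^{\alpha}\, u(t)
  \;=\; \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha}\, u'(s)\,\mathrm{d}s,
  \qquad 0 < \alpha < 1.
```

This is the standard scalar definition only; the abstract works with an almost sectorial operator A and a convolution kernel in a Banach space, so the actual equation and its resolvent-based mild-solution formula are more general than this formula suggests.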
Procedia PDF Downloads 154
7175 Testing of Electronic Control Unit Communication Interface
Authors: Petr Šimek, Kamil Kostruk
Abstract:
This paper deals with the problem of testing the Electronic Control Unit (ECU) for specified function validation. Modern ECUs have many functions which need to be tested. This process requires tracking between the test and the specification. The technique discussed in this paper explores a system for automating this process. Chapter IV of the paper introduces the problem in general and then describes the proposed test system concept and its principle. It looks at how the ECU interface specification file is processed for automated interface testing and test tracking. In the end, the possible future development of the project is discussed.
Keywords: electronic control unit testing, embedded system, test generation, test automation, process automation, CAN bus, Ethernet
Procedia PDF Downloads 111
7174 Comparative Analysis of Classification Methods in Determining Non-Active Student Characteristics in Indonesia Open University
Authors: Dewi Juliah Ratnaningsih, Imas Sukaesih Sitanggang
Abstract:
Classification is one of the data mining techniques that aims to discover a model from training data that distinguishes records into the appropriate category or class. Data mining classification methods can be applied in education, for example, to determine the classification of non-active students at Indonesia Open University. This paper presents a comparison of three classification methods: Naïve Bayes, Bagging, and C4.5. The criteria used to evaluate the performance of the three classification methods are stratified cross-validation, the confusion matrix, the value of the area under the ROC curve (AUC), recall, precision, and F-measure. The data used for this paper are from non-active Indonesia Open University students in the registration periods 2004.1 to 2012.2. The target analysis requires that non-active students be divided into 3 groups: C1, C2, and C3. The data analyzed cover 4173 students. Results of the study show that: (1) the Bagging method gave a higher degree of classification accuracy than Naïve Bayes and C4.5; (2) the Bagging classification accuracy rate is 82.99%, while those of Naïve Bayes and C4.5 are 80.04% and 82.74%, respectively; (3) the resulting Bagging classification tree has a large number of nodes, so it is quite difficult to use in decision making; (4) classification of non-active Indonesia Open University student characteristics uses the C4.5 algorithm; (5) based on the C4.5 algorithm, there are 5 interesting rules which can describe the characteristics of non-active Indonesia Open University students.
Keywords: comparative analysis, data mining, classification, Bagging, Naïve Bayes, C4.5, non-active students, Indonesia Open University
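The three-way comparison described above can be sketched with scikit-learn. An entropy-based `DecisionTreeClassifier` stands in for C4.5 (an approximation, since scikit-learn implements CART), and the synthetic data below are not the Indonesia Open University records.

```python
# Sketch of comparing Naive Bayes, a C4.5-style tree, and Bagging with
# stratified cross-validation. Synthetic three-class data stand in for
# the actual student records, which are not publicly available.

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_classes=3, n_informative=5,
                           random_state=42)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
models = {
    "Naive Bayes": GaussianNB(),
    "C4.5-style tree": DecisionTreeClassifier(criterion="entropy",
                                              random_state=42),
    "Bagging": BaggingClassifier(
        DecisionTreeClassifier(criterion="entropy"), random_state=42),
}
scores = {}
for name, model in models.items():
    scores[name] = cross_val_score(model, X, y, cv=cv).mean()
    print(f"{name}: {scores[name]:.3f}")
```

On the paper's data, Bagging won on accuracy but produced a tree ensemble too large to interpret, which is why the single C4.5 tree was retained for rule extraction.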
Procedia PDF Downloads 314
7173 Finite Element Approximation of the Heat Equation under Axisymmetry Assumption
Authors: Raphael Zanella
Abstract:
This work deals with the finite element approximation of axisymmetric problems. The weak formulation of the heat equation under the axisymmetry assumption is established for continuous finite elements. The weak formulation is implemented in a C++ solver with an implicit march-in-time scheme. The code is verified by space and time convergence tests using a manufactured solution. The solving of an example problem with the axisymmetric formulation is compared to that with a full 3D formulation. Both formulations lead to the same result, but the code based on the axisymmetric formulation is much faster due to the lower number of degrees of freedom. This confirms the correctness of our approach and the interest in using an axisymmetric formulation when it is possible.
Keywords: axisymmetric problem, continuous finite elements, heat equation, weak formulation
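Under the axisymmetry assumption, with meridian coordinates (r, z) and the θ-integration dropped by symmetry, the standard weak form of the heat equation reads as below. A constant diffusivity κ, a source term f, and homogeneous Dirichlet conditions (test space V) are assumed here, which may differ from the paper's exact setting:

```latex
\int_{\Omega} r\,\partial_t u\, v \;\mathrm{d}r\,\mathrm{d}z
  \;+\; \kappa \int_{\Omega} r\,\nabla u \cdot \nabla v \;\mathrm{d}r\,\mathrm{d}z
  \;=\; \int_{\Omega} r\, f\, v \;\mathrm{d}r\,\mathrm{d}z
  \qquad \forall\, v \in V .
```

The weight r is the remnant of the cylindrical volume element r dr dθ dz; working on the 2D meridian domain Ω instead of the 3D body is also why the axisymmetric solver needs far fewer degrees of freedom than the full 3D one.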
Procedia PDF Downloads 200
7172 Artificial Neural Network Approach for Vessel Detection Using Visible Infrared Imaging Radiometer Suite Day/Night Band
Authors: Takashi Yamaguchi, Ichio Asanuma, Jong G. Park, Kenneth J. Mackin, John Mittleman
Abstract:
In this paper, vessel detection using an artificial neural network is proposed in order to automatically construct a vessel detection model from satellite imagery of the day/night band (DNB) in the visible infrared, from the products of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP). The goal of our research is the establishment of a vessel detection method using satellite imagery of the DNB in order to monitor the change of vessel activity over a wide region. Temporal vessel monitoring is very important to detect events and understand the circumstances within the maritime environment. For vessel locating and detection techniques, the Automatic Identification System (AIS) and remote sensing using synthetic aperture radar (SAR) imagery have been researched. However, each data source has some lack of information due to uncertain operation or limitation of continuous observation. Therefore, the fusion of effective data and methods is important to monitor the maritime environment in the future. DNB is one of the effective data sources to detect small vessels, such as fishery ships, that are difficult to observe in AIS. DNB is the satellite sensor data of VIIRS on Suomi-NPP. In contrast to SAR images, DNB images are of moderate resolution and are affected by cloud, but they can observe the same regions each day. The DNB sensor can observe the lights produced by various artifacts such as vehicles and buildings at night, and can detect small vessels from fishing lights on open water. However, the modeling of vessel detection using DNB is very difficult, since complex atmospheric and lunar conditions should be considered due to the strong influence of lunar reflection from cloud on DNB. Therefore, an artificial neural network was applied to learn the vessel detection model.
As an additional feature for vessel detection, the Brightness Temperature at 3.7 μm (BT3.7) was used, because BT3.7 can serve as a parameter of atmospheric conditions.
Keywords: artificial neural network, day/night band, remote sensing, Suomi National Polar-orbiting Partnership, vessel detection, Visible Infrared Imaging Radiometer Suite
Procedia PDF Downloads 234
7171 Theoretical and Experimental Investigation of the Interaction Behavior of a Bouncing Ball upon a Flexible Surface Impacted in Two Dimensions
Authors: Wiwat Chumai, Perawit Boonsomchua, Kanjana Ongkasin
Abstract:
The ball bouncing problem is a well-known problem in physics involving a ball dropped from a height to the ground. In this paper, the work investigates a theoretical and experimental setup that describes the dynamics of a rigid body on a chaotic elastic surface under air-damped conditions. Four different types of balls are examined: a marble, a metal ball, a tennis ball, and a ping-pong ball. In this experiment, the effect of impact velocity is not considered; the ball is dropped from a fixed height. The method in this work employs the Rayleigh dissipation function to specify the effects of dissipative forces in Lagrangian mechanics. Our findings reveal that the dynamics of the ball exhibit horizontal motion while damped oscillation occurs, forming the destabilization in vertical pinch-off motion. Moreover, rotational motion is studied. According to the investigation of the four different balls, the outcomes illustrate that greater mass results in more frequent dynamics, and the experimental results at some points align with the theoretical model. This knowledge contributes to our understanding of complex fluid systems and could serve as a foundation for further developments in water droplet simulation.
Keywords: droplet, damping oscillation, nonlinear damping oscillation, bouncing ball problem, elastic surface
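A minimal 1-D sketch of repeated bounces with a coefficient of restitution is given below; air damping, surface flexibility, and the horizontal and rotational motion studied in the paper are omitted, and the parameter values are illustrative.

```python
import math

# Minimal 1-D sketch of successive rebounds with a coefficient of
# restitution e. Air damping, surface flexibility, and horizontal or
# rotational motion studied in the paper are omitted; the drop height,
# restitution coefficient, and bounce count are illustrative.

def simulate_bounces(h0=1.0, e=0.8, g=9.81, n_bounces=5):
    """Return successive rebound heights of a ball dropped from h0.
    Each impact scales the speed by e, so heights follow h_k = h0 * e**(2k)."""
    heights = []
    v_impact = math.sqrt(2 * g * h0)      # speed just before first impact
    for _ in range(n_bounces):
        v_rebound = e * v_impact
        heights.append(v_rebound ** 2 / (2 * g))
        v_impact = v_rebound
    return heights

print([round(h, 4) for h in simulate_bounces()])
```

In the paper's full model, the Rayleigh dissipation function replaces this lumped restitution coefficient with explicit velocity-dependent damping terms in the Lagrangian equations of motion.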
Procedia PDF Downloads 98
7170 Multi-Objective Optimal Design of a Cascade Control System for a Class of Underactuated Mechanical Systems
Authors: Yuekun Chen, Yousef Sardahi, Salam Hajjar, Christopher Greer
Abstract:
This paper presents a multi-objective optimal design of a cascade control system for an underactuated mechanical system. Cascade control structures usually include two control algorithms (inner and outer). To design such a control system properly, the following conflicting objectives should be considered at the same time: 1) the inner closed-loop control must be faster than the outer one, 2) the inner loop should quickly reject any disturbance and prevent it from propagating to the outer loop, 3) the controlled system should be insensitive to measurement noise, and 4) the controlled system should be driven by optimal energy. Such a control problem can be formulated as a multi-objective optimization problem in which the optimal trade-offs among these design goals are found. To the authors' best knowledge, this problem has not been studied in a multi-objective setting so far. In this work, an underactuated mechanical system consisting of a rotary servo motor and a ball and beam is used for the computer simulations, the parameters of the inner and outer control systems are tuned by NSGA-II (Non-dominated Sorting Genetic Algorithm II), and the dominance concept is used to find the optimal design points. The solution of this problem is not a single optimal cascade controller, but rather a set of optimal cascade controllers (the Pareto set) that represents the optimal trade-offs among the selected design criteria. The image of the Pareto set under the objective functions is called the Pareto front. The solution set is presented to the decision-maker, who can choose any point to implement. The simulation results, in terms of the Pareto front and time responses to external signals, show the competing nature of the design objectives. The presented study may become the basis for multi-objective optimal design of multi-loop control systems.
Keywords: cascade control, multi-loop control systems, multi-objective optimization, optimal control
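The dominance concept used above to extract the Pareto set can be sketched as follows for a minimization problem. The objective vectors are hypothetical stand-ins for the design criteria (e.g., settling time, noise sensitivity, control energy), not values from the paper.

```python
# Pareto dominance for minimization: a dominates b if a is no worse in every
# objective and strictly better in at least one.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset (the Pareto-optimal designs)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical candidate designs, each scored on three objectives:
designs = [(1.0, 5.0, 3.0), (2.0, 2.0, 2.0), (1.5, 4.0, 4.0),
           (3.0, 3.0, 3.0), (1.0, 5.0, 4.0)]
front = pareto_front(designs)
print(front)  # dominated designs (3,3,3) and (1,5,4) are filtered out
```

NSGA-II applies this same dominance test repeatedly (plus a crowding-distance measure) to evolve a whole population toward the front; the snippet above only shows the filtering step that defines which controllers survive.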
Procedia PDF Downloads 152
7169 Facile Synthesis of Metal Nanoparticles on Graphene via Galvanic Displacement Reaction for Sensing Application
Authors: Juree Hong, Sanggeun Lee, Jungmok Seo, Taeyoon Lee
Abstract:
We report a facile synthesis of metal nanoparticles (NPs) on a graphene layer via a galvanic displacement reaction between graphene-buffered copper (Cu) and metal ion-containing salts. Diverse metal NPs can be formed on the graphene surface, and their morphologies can be tailored by controlling the concentration of the metal ion-containing salt and the immersion time. The obtained metal NP-decorated single-layer graphene (SLG) has been used as a hydrogen gas (H2) sensing material and exhibited a highly sensitive response upon exposure to 2% H2.
Keywords: metal nanoparticle, galvanic displacement reaction, graphene, hydrogen sensor
Procedia PDF Downloads 422
7168 Numerical and Analytical Approach for Film Condensation on Different Forms of Surfaces
Authors: A. Kazemi Jouybari, A. Mirabdolah Lavasani
Abstract:
This paper seeks the solution of film condensation on a flat plate and around circular and elliptical tubes by numerical and analytical methods. It also calculates the entropy production rates. The problem was first solved analytically using dynamic meshing and rational assumptions; the result was then compared with the numerical solution and showed acceptable errors. An additional supporting relation was applied for the condensing elements, based on a characteristic of the condensation phenomenon. As shown here, elliptical tubes have higher entropy production rates than circular ones because of their higher heat transfer rates. The findings showed that both methods were efficient. Furthermore, the analytical methods can be used to optimize the problem and reduce the entropy production rate.
Keywords: condensation, numerical solution, analytical solution, entropy rate
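For context, the classical Nusselt analytical result for laminar film condensation can be evaluated as below. The geometry and fluid properties are assumed values for saturated steam condensing through a water film, not the paper's own data, and the paper's elliptical-tube case has no such closed form.

```python
# Classical Nusselt film-condensation correlation, worked as an example.
def nusselt_h(C, rho_l, rho_v, g, h_fg, k_l, mu_l, dT, L):
    """Average condensation coefficient
    h = C * [rho_l (rho_l - rho_v) g h_fg k_l^3 / (mu_l L dT)]^(1/4),
    with C = 0.943 for a vertical plate of height L and
    C = 0.729 for a horizontal tube of diameter L."""
    return C * (rho_l * (rho_l - rho_v) * g * h_fg * k_l**3
                / (mu_l * L * dT)) ** 0.25

# Assumed properties for a water film near 100 C (saturated steam):
props = dict(rho_l=960.0, rho_v=0.6, g=9.81, h_fg=2.257e6,
             k_l=0.68, mu_l=2.82e-4, dT=10.0)
h_plate = nusselt_h(0.943, L=0.1, **props)    # 0.1 m vertical plate
h_tube = nusselt_h(0.729, L=0.02, **props)    # 20 mm horizontal tube
print(f"vertical plate: {h_plate:.0f} W/m^2K, horizontal tube: {h_tube:.0f} W/m^2K")
```

The quarter-power dependence means that, for example, a sixteen-fold increase in the wall subcooling dT only halves the average coefficient h; such scalings are what make the analytical route attractive for the optimization mentioned above.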
Procedia PDF Downloads 214
7167 Thermoelectric Blanket for Aiding the Treatment of Cerebral Hypoxia and Other Related Conditions
Authors: Sarayu Vanga, Jorge Galeano-Cabral, Kaya Wei
Abstract:
Cerebral hypoxia refers to a condition in which there is a decrease in oxygen supply to the brain. Patients suffering from this condition experience a decrease in their body temperature. While there is currently no cure for cerebral hypoxia, certain procedures are used to aid in the treatment of the condition; regulating body temperature is one of them. Hypoxia is well known to reduce the body temperature of mammals, although the neural origins of this response remain uncertain. To speed recovery from this condition, it is necessary to maintain a stable body temperature. In this study, we present an approach to regulating body temperature for patients who suffer from cerebral hypoxia or other similar conditions. After a thorough literature study, we propose the use of thermoelectric blankets: temperature-controlled thermal blankets based on thermoelectric devices. These blankets are capable of heating up and cooling down the patient to stabilize body temperature. This feature is possible through the reversible effect that thermoelectric devices offer while also behaving as thermal sensors, and it is an effective way to stabilize temperature. Thermoelectricity is the direct conversion of thermal to electrical energy and vice versa; the direct effect is known as the Seebeck effect and is characterized by the Seebeck coefficient. In such a configuration, the device has a cooling side and a heating side whose temperatures can be interchanged by simply switching the direction of the current input to the system. The design integrates various aspects, including a humidifier, a ventilation machine, IV-administered medication, air conditioning, a circulation device, and a body temperature regulation system. The proposed design includes thermocouples that trigger the blanket to heat or cool toward a set temperature through a medical temperature sensor.
Additionally, the proposed design offers an efficient way to control fluctuations in body temperature while being cost-friendly, with an expected cost of 150 dollars. We are currently developing a prototype of the design to collect thermal and electrical data under different conditions, and we also intend to perform an optimization analysis to improve the design further. While this proposal was developed for treating cerebral hypoxia, it can also aid in the treatment of other related conditions, as fluctuations in body temperature are a common symptom of many illnesses.
Keywords: body temperature regulation, cerebral hypoxia, thermoelectric, blanket design
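The reversible thermoelectric behaviour the blanket relies on can be sketched numerically. The Seebeck coefficient and operating values below are assumptions typical of commercial Bi2Te3 modules, not measurements from the proposed design.

```python
# Back-of-the-envelope sketch of thermoelectric sensing and pumping.
S = 200e-6        # Seebeck coefficient per couple, V/K (assumed)

def seebeck_voltage(S, dT, n_couples=127):
    """Open-circuit voltage of a module with n thermocouples in series
    across a temperature difference dT -- the sensing direction."""
    return n_couples * S * dT

def peltier_heat(S, T_cold, I, n_couples=127):
    """Heat pumped at the cold junction, Q = n * S * T * I; reversing the
    sign of I swaps the heating and cooling sides (the actuating direction).
    Joule heating and conduction losses are neglected here."""
    return n_couples * S * T_cold * I

print(f"{seebeck_voltage(S, 10):.3f} V at dT = 10 K")
print(f"{peltier_heat(S, 300.0, 2.0):.2f} W pumped at I = 2 A")
print(f"{peltier_heat(S, 300.0, -2.0):.2f} W with the current reversed")
```

The sign flip in the last line is the property the abstract describes: one device both measures a temperature difference (Seebeck direction) and moves heat either way depending on current polarity (Peltier direction).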
Procedia PDF Downloads 154
7166 Modeling a Closed Loop Supply Chain with Continuous Price Decrease and Dynamic Deterministic Demand
Authors: H. R. Kamali, A. Sadegheih, M. A. Vahdat-Zad, H. Khademi-Zare
Abstract:
In this paper, a single-product, multi-echelon, multi-period closed-loop supply chain is studied, including a variety of costs, time conditions, and capacities, to plan and determine the quantities and timing of component procurement, production, distribution, recycling, and disposal, especially for high-tech products that undergo decreasing production costs and sale prices over time. For this purpose, a mathematical model of the problem, a mixed-integer linear program, is presented, and it is finally proved that the problem belongs to the category of NP-hard problems.
Keywords: closed loop supply chain, continuous price decrease, NP-hard, planning
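To illustrate the mixed-integer structure (binary setup decisions plus production quantities) at a scale where brute force still works, a hypothetical three-period, single-echelon lot-sizing toy with a decreasing unit cost is solved below. The paper's actual model is far richer (multi-echelon, recycling, disposal), and its NP-hardness is precisely why exact enumeration like this does not scale.

```python
from itertools import product

# Hypothetical toy instance: all numbers are made up for illustration.
demand = [30, 40, 20]            # demand per period
setup_cost = 50.0                # fixed cost when production is set up
unit_cost = [10.0, 8.0, 6.0]     # unit production cost decreases over time
hold_cost = 1.0                  # per unit carried into the next period

best = None
# y[t] = 1 if production is set up in period t (the integer decisions);
# each setup produces exactly the demand of the periods it covers.
for y in product([0, 1], repeat=3):
    if y[0] == 0:                # period-1 demand cannot be met otherwise
        continue
    cost, stock, feasible = 0.0, 0, True
    for t in range(3):
        if y[t]:
            nxt = next((u for u in range(t + 1, 3) if y[u]), 3)
            qty = sum(demand[t:nxt]) - stock     # cover until next setup
            cost += setup_cost + unit_cost[t] * max(qty, 0)
            stock += max(qty, 0)
        stock -= demand[t]
        if stock < 0:
            feasible = False
            break
        cost += hold_cost * stock                # inventory holding cost
    if feasible and (best is None or cost < best[0]):
        best = (cost, y)

print(best)  # -> (890.0, (1, 1, 1))
```

With the falling unit cost, producing in every period beats batching despite the repeated setup cost, which is the kind of trade-off the continuous price decrease introduces.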
Procedia PDF Downloads 362
7165 Clinical and Sleep Features in an Australian Population Diagnosed with Mild Cognitive Impairment
Authors: Sadie Khorramnia, Asha Bonney, Kate Galloway, Andrew Kyoong
Abstract:
Sleep plays a pivotal role in the registration and consolidation of memory. Multiple observational studies have demonstrated that self-reported sleep duration and sleep quality are associated with cognitive performance. The Montreal Cognitive Assessment (MoCA) questionnaire is a screening tool for mild cognitive impairment (MCI) with 90% diagnostic sensitivity. In the current study, we used the MoCA to identify MCI in patients who underwent a sleep study in our sleep department. We then looked at clinical risk factors and sleep-related parameters in subjects found to have mild cognitive impairment but without a diagnosis of sleep-disordered breathing. Clinical risk factors, including physician-diagnosed hypertension, diabetes, and depression, and sleep-related parameters measured during the sleep study, including the percentage of time in each sleep stage, total sleep time, awakenings, sleep efficiency, apnoea-hypopnoea index, and oxygen saturation, were evaluated. A total of 90 subjects who underwent a sleep study between March 2019 and October 2019 were included. Currently, no pharmacotherapy is available for MCI; therefore, identifying the risk factors and attempting to reverse or mitigate their effects is pivotal in slowing the rate of cognitive deterioration. Further characterization of sleep parameters in this group of patients could open up opportunities for potentially beneficial interventions.
Keywords: apnoea hypopnea index, mild cognitive impairment, sleep architecture, sleep study
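Two of the summary measures named above are simple ratios; a sketch with hypothetical event counts and times (not data from the study):

```python
# Standard polysomnography summary measures, computed from made-up values.
def ahi(apnoeas, hypopnoeas, total_sleep_minutes):
    """Apnoea-hypopnoea index: respiratory events per hour of sleep."""
    return (apnoeas + hypopnoeas) / (total_sleep_minutes / 60.0)

def sleep_efficiency(total_sleep_minutes, time_in_bed_minutes):
    """Percentage of time in bed actually spent asleep."""
    return 100.0 * total_sleep_minutes / time_in_bed_minutes

print(f"AHI: {ahi(12, 30, 360):.1f} events/h")           # -> 7.0
print(f"efficiency: {sleep_efficiency(360, 450):.0f}%")   # -> 80%
```

An AHI below 5 events/h is conventionally considered normal in adults, which is why subjects in the study could show MCI without a sleep-disordered-breathing diagnosis.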
Procedia PDF Downloads 143
7164 A Qualitative Study on Metacognitive Patterns among High and Low Performance Problem Based on Learning Groups
Authors: Zuhairah Abdul Hadi, Mohd Nazir bin Md. Zabit, Zuriadah Ismail
Abstract:
Metacognition has been empirically evidenced to be an important element influencing learning outcomes. Expert learners engage in metacognition by monitoring and controlling their thinking and by listing, considering, and selecting the best strategies to achieve desired goals. Studies have also found that good critical thinkers engage in more metacognition and that people tend to activate more metacognition when solving complex problems. This study extends past work by performing a qualitative analysis to understand metacognitive patterns in two high-performing and two low-performing groups, by carefully examining video and audio records taken during problem-based learning activities. High-performing groups are groups in which most members scored well on the Watson-Glaser II Critical Thinking Appraisal (WGCTA II) and academic achievement tests; low-performing groups are groups in which most members failed to perform on the two tests. Audio records were transcribed and analyzed using schemas adopted from past studies. Metacognitive statements were analyzed using a three-stage model, and the metacognitive patterns of each high- and low-performing group are described by context, components, and levels.
Keywords: academic achievement, critical thinking, metacognitive, problem-based learning
Procedia PDF Downloads 282
7163 Social Entrepreneurship on Islamic Perspective: Identifying Research Gap
Authors: Mohd Adib Abd Muin, Shuhairimi Abdullah, Azizan Bahari
Abstract:
Problem: The research problem is the lack of a model of social entrepreneurship that focuses on an Islamic perspective. Objective: The objective of this paper is to analyse existing models of social entrepreneurship and to identify the research gap concerning the Islamic perspective in those models. Research Methodology: The research method used in this study is a literature review and comparative analysis of six existing models of social entrepreneurship. Finding: Six existing models of social entrepreneurship were analysed, and the analysis shows that they do not emphasize the Islamic perspective.
Keywords: social entrepreneurship, Islamic perspective, research gap, business management
Procedia PDF Downloads 354
7162 Digital Holographic Interferometric Microscopy for the Testing of Micro-Optics
Authors: Varun Kumar, Chandra Shakher
Abstract:
Micro-optical components such as microlenses and microlens arrays have numerous engineering and industrial applications: collimation of laser diodes, imaging devices for sensor systems (CCD/CMOS, document copier machines, etc.), beam homogenization for high-power lasers, a critical component in Shack-Hartmann sensors, fiber-optic coupling, and optical switching in communication technology. Micro-optical components have also become an alternative for applications where miniaturization and reduction of alignment and packaging costs are necessary. Compliance with high quality standards in the manufacturing of micro-optical components is a precondition for being competitive in worldwide markets; therefore, high demands are placed on quality assurance. For quality assurance of these lenses, an economical measurement technique is needed. For cost and time reasons, the technique should be fast, simple (for production reasons), and robust, with high resolution. It should provide non-contact, non-invasive, full-field information about the shape of the micro-optical component under test. Interferometric techniques are non-contact and non-invasive and provide full-field information about the shape of optical components. Conventional interferometric techniques such as holographic interferometry or Mach-Zehnder interferometry are available for the characterization of microlenses. However, these techniques require more experimental effort and are also time consuming. Digital holography (DH) overcomes the problems described above. Digital holographic microscopy (DHM) allows one to extract both the amplitude and phase information of a wavefront transmitted through a transparent object (a microlens or microlens array) from a single recorded digital hologram by using numerical methods. One can also reconstruct the complex object wavefront at different depths thanks to numerical reconstruction.
Digital holography provides axial resolution in the nanometer range, while lateral resolution is limited by diffraction and the size of the sensor. In this paper, a Mach-Zehnder-based digital holographic interferometric microscope (DHIM) system is used for the testing of transparent microlenses. The advantage of using the DHIM is that distortions due to aberrations in the optical system are avoided by the interferometric comparison of the reconstructed phase with and without the object (the microlens array). In the experiment, a digital hologram is first recorded in the absence of the sample (microlens array) as a reference hologram. A second hologram is recorded in the presence of the microlens array. The presence of the transparent microlens array induces a phase change in the transmitted laser light. The complex amplitude of the object wavefront in the presence and absence of the microlens array is reconstructed using the Fresnel reconstruction method. From the reconstructed complex amplitude, one can evaluate the phase of the object wave in the presence and absence of the microlens array. The phase difference between the two states of the object wave provides information about the optical path length change due to the shape of the microlens. With knowledge of the refractive indices of the microlens array material and of air, the surface profile of the microlens array is evaluated. The sag and radius of curvature of the microlenses are evaluated and reported. The sag of the microlens agrees, within the experimental limit, with the specification provided by the manufacturer.
Keywords: micro-optics, microlens array, phase map, digital holographic interferometric microscopy
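The final phase-to-profile step described above can be sketched as follows. The wavelength, refractive index, and lens dimensions are assumed, and the "measured" phase map is synthesized from an ideal spherical cap rather than reconstructed from real holograms.

```python
import numpy as np

# Sketch under stated assumptions: convert a phase-difference map into a
# microlens surface profile, then recover sag and radius of curvature.
lam = 632.8e-9            # He-Ne wavelength (assumed)
n_lens, n_air = 1.56, 1.0  # assumed refractive indices
R_true = 2.0e-3           # radius of curvature of the synthetic lens (m)
r_ap = 120e-6             # semi-aperture of the lens (m)

x = np.linspace(-r_ap, r_ap, 401)
t = R_true - np.sqrt(R_true**2 - x**2)        # spherical-cap depth profile
t = t.max() - t                                # convex thickness, apex at center
phi = 2 * np.pi * (n_lens - n_air) * t / lam   # optical path change -> phase

# Inversion (the step the paper performs on measured phase differences):
t_rec = phi * lam / (2 * np.pi * (n_lens - n_air))
sag = t_rec.max() - t_rec.min()
# Radius of curvature from sag s and semi-aperture r: R = (r^2 + s^2) / (2 s)
R_rec = (r_ap**2 + sag**2) / (2 * sag)
print(f"sag = {sag*1e6:.2f} um, R = {R_rec*1e3:.3f} mm")
```

In the real experiment the phase map comes from the two Fresnel-reconstructed wavefronts and must first be unwrapped; the geometric sag-to-radius relation in the last step is the same.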
Procedia PDF Downloads 497