Search results for: time intervals
17550 Real-Time Land Use and Land Information System in Homagama Divisional Secretariat Division
Authors: Kumara Jayapathma J. H. M. S. S., Dampegama S. D. P. J.
Abstract:
Land is a valuable and limited resource that changes constantly with population growth. An efficient land management system is essential to avoid conflicts associated with land. This paper aims to design a prototype model of a real-time mobile GIS land use and land information system. The Homagama Divisional Secretariat Division, situated in the Western Province of Sri Lanka, was selected as the study area. The prototype model was developed after reviewing the related literature. The methodology consisted of designing and modeling the prototype as an application running on a mobile platform. The system architecture mainly consists of a Google mapping app for real-time updates with Firebase support tools; accordingly, the implementation consists of front-end and back-end components. The application was designed in Android Studio with Java based on the GeoJSON file structure. Android Studio with Java and GeoJSON files synchronized to Firebase was found to be a suitable mobile solution for continuously updating a Land use and Land Information System (LIS) in real time. The mobile-based land use and LIS developed in this study is a multi-user application catering to different hierarchy levels, such as basic users, supervisory managers, and database administrators. This mobile mapping application will help public-sector field officers without GIS expertise to overcome land use planning challenges with land use updated in real time.
Keywords: Android, Firebase, GeoJSON, GIS, JAVA, JSON, LIS, Mobile GIS, real-time, REST API
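The GeoJSON structure underlying such an app can be illustrated with a minimal land-parcel Feature; the property names and coordinates below are hypothetical, not taken from the system described above:

```python
import json

# A minimal GeoJSON Feature of the kind a mobile land-information app
# might sync to a backend such as Firebase. All property names and
# coordinates here are made up for illustration.
parcel = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        # One ring of (longitude, latitude) pairs, closed by repeating
        # the first vertex at the end.
        "coordinates": [[
            [80.002, 6.844], [80.004, 6.844],
            [80.004, 6.846], [80.002, 6.846],
            [80.002, 6.844],
        ]],
    },
    "properties": {
        "parcel_id": "HOM-0001",      # hypothetical identifier
        "land_use": "residential",    # hypothetical attribute
        "updated_by": "field_officer",
    },
}

payload = json.dumps(parcel)          # string ready to push to a backend
restored = json.loads(payload)
print(restored["properties"]["land_use"])
```

Because GeoJSON is plain JSON, the same record round-trips through any JSON-based store without a schema layer.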
Procedia PDF Downloads 231
17549 Modeling the Time-Dependent Rheological Behavior of Clays Used in Fabrication of Ceramic
Authors: Larbi Hammadi, N. Boudjenane, N. Benhallou, R. Houjedje, R. Reffis, M. Belhadri
Abstract:
Many clays exhibit thixotropic behavior, in which the apparent viscosity of the material decreases with time of shearing at a constant shear rate. The structural kinetic model (SKM) was used to characterize the thixotropic behavior of two different kinds of clays used in the fabrication of ceramics. The clays selected for analysis represent fluid and semisolid clay materials. The SKM postulates that the change in rheological behavior is associated with shear-induced breakdown of the internal structure of the clay. The model for structure decay with time at constant shear rate assumes nth-order kinetics for the decay of the material structure with a rate constant.
Keywords: ceramic, clays, structural kinetic model, thixotropy, viscosity
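The structural kinetic model described above can be sketched numerically: a structure parameter decays with nth-order kinetics, and the apparent viscosity follows it. All parameter values below are illustrative, not fitted to the clays studied in the paper:

```python
# Forward-Euler sketch of the structural kinetic model (SKM) at constant
# shear rate: the structure parameter lam decays as
#   d(lam)/dt = -k * (lam - lam_e)**n
# and the apparent viscosity is  eta = eta_e + (eta_0 - eta_e) * lam.

def skm_viscosity(eta_0, eta_e, k, n, t_end, dt=0.01):
    """Return a list of (time, apparent viscosity) samples."""
    lam, lam_e = 1.0, 0.0          # fully structured at t = 0
    t, out = 0.0, []
    while t <= t_end:
        out.append((t, eta_e + (eta_0 - eta_e) * lam))
        lam += -k * (lam - lam_e) ** n * dt
        t += dt
    return out

curve = skm_viscosity(eta_0=12.0, eta_e=2.0, k=0.5, n=2.0, t_end=20.0)
print(curve[0][1], curve[-1][1])   # viscosity decays toward eta_e
```

For n = 2 this reproduces the familiar hyperbolic decay of apparent viscosity toward its equilibrium value under steady shear.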
Procedia PDF Downloads 410
17548 Perovskite Solar Cells Penetration on Electric Grids Based on the Power Hardware in the Loop Methodology
Authors: Alaa A. Zaky, Bandar Alfaifi, Saleh Alyahya, Alkistis Kontou, Panos Kotsampopoulos
Abstract:
In this work, we present for the first time the grid integration of third-generation perovskite solar cells (PSCs), fabricated using nanotechnology. The effect of this penetration is analyzed in normal, fault, and islanding cases of operation under different irradiation conditions using the power hardware in the loop (PHIL) methodology. The PHIL method allows the PSCs to be connected to an electric grid simulated in a real-time digital simulator (RTDS), for laboratory validation of PSC behavior under conditions very close to real ones.
Keywords: perovskite solar cells, power hardware in the loop, real-time digital simulator, smart grid
Procedia PDF Downloads 301
17547 Software Reliability Prediction Model Analysis
Authors: Lela Mirtskhulava, Mariam Khunjgurua, Nino Lomineishvili, Koba Bakuria
Abstract:
Software reliability prediction offers a great opportunity to measure the software failure rate at any point throughout system test. A software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor in estimating overall system reliability, which depends on the individual component reliabilities. It differs from hardware reliability in that it reflects design perfection. The main cause of software reliability problems is the high complexity of software. Various approaches can be used to improve the reliability of software. In this article, we focus on a software reliability model that assumes time redundancy, the value of which (the number of repeated transmissions of basic blocks) can be an optimization parameter. We consider the given mathematical model under the assumption that the system may experience not only irreversible failures but also failures that can be treated as self-repairing, which significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the time distribution function (DF) of the transmission of an instruction sequence consisting of a random number of basic blocks. We consider the software unreliable, with the time between adjacent failures following an exponential distribution.
Keywords: exponential distribution, conditional mean time to failure, distribution function, mathematical model, software reliability
Procedia PDF Downloads 465
17546 Internet Media and Public: A Report of a Mutual Deception
Authors: Safet Zejnullahu
Abstract:
The relationship between the public and the media is more than meaningful and has been a topic of discussion since the birth of the media. The 'magic box' called radio adapted and transformed itself by following the tastes and interests of the public. Television went a step further by complementing the magic sound of the magic box with images. Newspapers informed their readers but, from time to time, also provided them room to express their opinions. The media-public relationship in traditional media is one of mutual respect. Today, the relationship between media and public needs to be properly defined. The goal of this paper is to analyze the history of the media-public relationship, with special emphasis on this relationship in the media of the internet age. The paper seeks to prove that internet media has created a completely new and thus far unknown relationship between the media and the public. Through research that includes an analysis of the media in Kosovo and Albania, it will be shown that the media of the internet age has taken this relationship to a new level, with many unknowns in terms of the functioning and role of the media. The results and findings lead to the conclusion that, from a relationship in which the roles of the media and the public were known, this relationship now goes beyond the known principles and rules and is better described as one of mutual deception: the media crosses the line of deference to the public, and the public seeks to direct the content of the media. The media-public relationship in internet media is thus a relationship based on mutual attempts at deception.
Keywords: media, public, report, humility, direction
Procedia PDF Downloads 178
17545 Stereo Motion Tracking
Authors: Yudhajit Datta, Hamsi Iyer, Jonathan Bandi, Ankit Sethia
Abstract:
Motion tracking and stereo vision are complicated, albeit well-understood, problems in computer vision. Existing software that combines the two approaches to perform stereo motion tracking typically employs complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution combining the two approaches: two-dimensional motion tracking using a Kalman filter, and depth detection of the object using stereo vision. In conventional approaches, objects in the scene of interest are observed using a single camera. For stereo motion tracking, the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. At discrete intervals, the estimator tracks object motion in the plane parallel to the plane containing the cameras and updates the object's perpendicular distance from that plane as depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios, from high-security scenes such as bank vaults, prisons, or other detention facilities, to low-cost applications in supermarkets and car parking lots.
Keywords: kalman filter, stereo vision, motion tracking, matlab, object tracking, camera calibration, computer vision system toolbox
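The two ingredients the abstract combines can be sketched independently: a (here scalar, random-walk) Kalman filter smoothing an image-plane position, and stereo depth from disparity under the pinhole model, Z = f·B/d. The camera parameters and measurements below are assumed for illustration; the paper's actual estimator is richer than this sketch:

```python
# (1) Scalar Kalman filter smoothing a noisy image-plane position track.
# (2) Stereo depth from the disparity between two calibrated cameras.

def kalman_smooth(measurements, q=1e-3, r=0.25):
    """Random-walk Kalman filter; returns the filtered positions."""
    x, p = measurements[0], 1.0          # state estimate and its variance
    out = [x]
    for z in measurements[1:]:
        p += q                           # predict: variance grows
        k = p / (p + r)                  # Kalman gain
        x += k * (z - x)                 # update with measurement z
        p *= (1.0 - k)
        out.append(x)
    return out

def stereo_depth(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Depth of a point from its left/right disparity (pinhole model)."""
    return focal_px * baseline_m / disparity_px

track = kalman_smooth([10.0, 10.4, 9.8, 10.2, 10.1])
print(round(track[-1], 2), round(stereo_depth(42.0), 3))  # smoothed x, depth in metres
```

In the full system the in-plane track and the disparity-derived depth update together form the three-dimensional state of each object.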
Procedia PDF Downloads 327
17544 Investigating the Vehicle-Bicyclists Conflicts using LIDAR Sensor Technology at Signalized Intersections
Authors: Alireza Ansariyar, Mansoureh Jeihani
Abstract:
Light Detection and Ranging (LiDAR) sensors are capable of recording traffic data, including the number of passing vehicles and bicyclists, their speeds, and the number of conflicts between both road users. In order to collect real-time traffic data and investigate the safety of different road users, a LiDAR sensor was installed at the Cold Spring Ln - Hillen Rd intersection in Baltimore City. The frequency and severity of the collected real-time conflicts were analyzed, and the results highlighted that 122 conflicts were recorded over a 10-month interval from May 2022 to February 2023. Using an innovative image-processing algorithm, a new safety Measure of Effectiveness (MOE) was proposed to recognize the critical zones that bicyclists enter. Considering the trajectories of conflicts, the analysis demonstrated that conflicts in the northern approach (zone N) are more frequent and severe. Additionally, sunny weather is more likely to be associated with severe vehicle-bike conflicts.
Keywords: LiDAR sensor, post encroachment time threshold (PET), vehicle-bike conflicts, measure of effectiveness (MOE), weather condition
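The post-encroachment time (PET) measure named in the keywords can be sketched directly: it is the gap between the first road user leaving the conflict zone and the second entering it, with small values flagging severe conflicts. The 3-second threshold and the timestamps below are illustrative assumptions, not values from the study:

```python
# PET is the time gap between the first road user's exit from the
# conflict zone and the second road user's entry into it.

def pet_seconds(first_exit_t, second_entry_t):
    """PET between two road users crossing the same conflict zone."""
    return second_entry_t - first_exit_t

def classify_conflict(pet, threshold=3.0):
    """Flag a vehicle-bike interaction as a conflict if PET < threshold."""
    return "conflict" if 0 <= pet < threshold else "no conflict"

events = [(12.4, 13.1), (40.0, 45.5), (61.2, 62.0)]   # (exit, entry) pairs, seconds
labels = [classify_conflict(pet_seconds(a, b)) for a, b in events]
print(labels)   # two close passes, one comfortable gap
```

A severity analysis like the one above would then tabulate such labels per zone and per weather condition.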
Procedia PDF Downloads 238
17543 A Randomized Active Controlled Clinical Trial to Assess Clinical Efficacy and Safety of Tapentadol Nasal Spray in Moderate to Severe Post-Surgical Pain
Authors: Kamal Tolani, Sandeep Kumar, Rohit Luthra, Ankit Dadhania, Krishnaprasad K., Ram Gupta, Deepa Joshi
Abstract:
Background: Post-operative analgesia remains a clinical challenge, with central and peripheral sensitization playing a pivotal role in treatment-related complications and impaired quality of life. Centrally acting opioids offer a poor risk-benefit profile, with an increased intensity of gastrointestinal and central side effects and a slow onset of clinical analgesia. The objective of this study was to assess the clinical feasibility of induction and maintenance therapy with Tapentadol Nasal Spray (NS) in moderate to severe acute post-operative pain. Methods: Phase III, randomized, active-controlled, non-inferiority clinical trial involving 294 patients who had undergone surgical procedures under general or regional anesthesia. Post-surgery, patients were randomized to receive either Tapentadol NS 45 mg or Tramadol 100 mg IV as a bolus, with subsequent 50 mg or 100 mg doses over 2-3 minutes. The NS was administered every 4-6 hours. At the end of 24 hours, patients in the tramadol group who had a pain intensity score of ≥4 were switched to oral tramadol immediate-release 100 mg capsules until the pain intensity score was reduced to <4. All patients who had achieved a pain intensity score of ≤4 were shifted to a lower dose of either Tapentadol NS 22.5 mg or oral tramadol immediate-release 50 mg capsules. The statistical analysis plan was envisaged as a non-inferiority comparison with tramadol for pain intensity difference at 60 minutes (PID60min), sum of pain intensity differences at 60 minutes (SPID60min), and Physician Global Assessment at 24 hours (PGA24hrs). Results: The per-protocol analyses involved 255 hospitalized patients undergoing surgical procedures. The median age of patients was 38.0 years. For the primary efficacy variables, Tapentadol NS was non-inferior to Inj/Oral Tramadol in relieving moderate to severe post-operative pain.
On the basis of SPID60min, no clinically significant difference was observed between Tapentadol NS and Tramadol IV (1.73±2.24 vs. 1.64±1.92, -0.09 [95% CI, -0.43, 0.60]). On the co-primary endpoint PGA24hrs, Tapentadol NS was non-inferior to Tramadol IV (2.12±0.707 vs. 2.02±0.704, -0.11 [95% CI, -0.07, 0.28]). However, on further assessment at 48, 72, and 120 hours, clinically superior pain relief was observed with the Tapentadol NS formulation, statistically significant (p<0.05) at each of the time intervals. Secondary efficacy measures, including the onset of clinical analgesia and TOTPAR, also showed non-inferiority to tramadol. The safety profile and need for rescue medication were similar in both groups during the treatment period. The most common concomitant medications were anti-bacterials (98.3%). Conclusion: Tapentadol NS is a clinically feasible option for improved compliance as induction and maintenance therapy, offering a sustained and persistent patient response that is clinically meaningful in post-surgical settings.
Keywords: tapentadol nasal spray, acute pain, tramadol, post-operative pain
Procedia PDF Downloads 251
17542 A Multilevel Approach of Reproductive Preferences and Subsequent Behavior in India
Authors: Anjali Bansal
Abstract:
Reproductive preferences mainly concern two questions: when a couple wants children and how many they want. Questions related to these desires are often included in fertility surveys, as they can provide relevant information on subsequent behavior. The aim of this study is to observe whether respondents' answers to these questions changed over time. We also tried to identify socio-economic and demographic factors associated with the stability (or instability) of fertility preferences. For this purpose, we used IHDS-1 (2004-05) and the follow-up survey IHDS-2 (2011-12) and applied bivariate, multivariate, and multilevel repeated-measures analyses to find the consistency between responses. The analysis showed that women's preferences change over time: the bivariate analysis found that 52% of women were not consistent in their desired family size, and large inconsistencies were found in the desire to continue childbearing. To better examine these inconsistencies, we computed the Intra-Class Correlation (ICC), which captures the consistency of individuals' fertility responses across the two time periods. We also found that the husband's desire for an additional child, specifically male offspring, contributes to these variations. Our findings lead us to the conclusion that in India, individuals' fertility preferences changed over the seven-year period, as the Intra-Class Correlation turns out to be very small, reflecting the variation among individuals. Concerted efforts should therefore be made to educate people and conduct motivational programs promoting family planning for family welfare.
Keywords: change, consistency, preferences, over time
Procedia PDF Downloads 167
17541 Working Women and Leave in India
Authors: Ankita Verma
Abstract:
Women transform a group of people into a family and a house into a home. When a woman embraces motherhood, she undergoes several stresses, both physical and mental. Therefore, supporting women during this critical stage is a societal responsibility. India is in the league of many developed nations in formulating women-friendly policies. One such initiative is the Maternity Benefits Act, first passed in 1961 and amended from time to time, most recently in 2017. This review paper critically analyzes the provisions of the Act, its implementation, and the legal issues arising out of its implementation. The review suggests that the Act has made a positive impact and that the judiciary has also played its role in streamlining the implementation of the Act. However, at the same time, it is also felt that employers often hesitate to hire a mother or an expectant mother.
Keywords: maternity benefits, maternity benefits act 1961 & 2017, motherhood, maternity and paternity leave, medical bonus, work environment
Procedia PDF Downloads 172
17540 Effect of Fast and Slow Tempo Music on Muscle Endurance Time
Authors: Rohit Kamal, Devaki Perumal Rajaram, Rajam Krishna, Sai Kumar Pindagiri, Silas Danielraj
Abstract:
Introduction: According to the WHO Global Health Observatory, at least 2.8 million people die each year as a result of being obese or overweight, mainly because of the adverse metabolic effects of obesity and overweight on blood pressure, lipid profile (especially cholesterol), and insulin resistance. To achieve optimum health, the WHO has set the desirable BMI range at 18.5 to 24.9 kg/m2. With the modernization of lifestyles, physical exercise in the form of work is no longer a possibility, and hence an effective way to burn calories to achieve the optimum BMI is the need of the hour. Studies have shown that exercising for more than 60 minutes per day helps to maintain weight, and that to reduce weight, exercise should be done for 90 minutes a day. Moderate exercise for about 30 minutes is essential for burning calories. People with low endurance fail to perform even low-intensity exercise for a minimal time. Hence, it is necessary to find an effective method to increase endurance time. Methodology: This study was approved by the Institutional Ethics Committee of our college. After obtaining written informed consent, 25 apparently healthy males between 18 and 20 years of age were selected. Exclusion criteria were muscular disorders, hypertension, diabetes, smoking, alcoholism, and use of drugs affecting muscle strength. To determine the endurance time, maximum voluntary contraction (MVC) was measured by asking the participants to squeeze a hand grip dynamometer as hard as possible and hold it for 3 seconds; this procedure was repeated thrice, and the average of the three readings was taken as the MVC.
Each participant was then asked to squeeze the dynamometer and hold it at 70% of MVC while fast tempo music was played for about ten minutes; after relaxing for ten minutes, the participant again held the dynamometer at 70% of MVC while hearing slow tempo music. To avoid the bias of habituation to the procedure, the order of the fast and slow tempo music was varied. The time for which participants could hold at 70% of MVC, measured with a stopwatch, was taken as the endurance time. Results: The mean endurance times during fast and slow tempo music were compared across all subjects. The mean MVC was 34.92 N. The mean endurance time was 21.8 (16.3) seconds with slow tempo music, which was longer than with fast tempo music, for which the mean endurance time was 20.6 (11.7) seconds. Preference was greater for slow tempo music than for fast tempo music. Conclusion: Music played during exercise, by some as yet unknown mechanism, helps to increase endurance time by alleviating the symptoms of lactic acid accumulation.
Keywords: endurance time, fast tempo music, maximum voluntary contraction, slow tempo music
Procedia PDF Downloads 303
17539 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm
Abstract:
Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally occurring data collected from hundreds of undergraduate engineering students. The sub-goals are (i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; and (ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used exhibit an ordinal form, we propose a method based on ordinal decision trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data.
The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. First, we find that the inclusion of LD and extended exam-time parameters improves the prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Second, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Third, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree-based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.
Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension
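The general idea behind an ordinal-aware split criterion like the WIGR described above can be illustrated with a much simpler stand-in. The sketch below is emphatically not the paper's WIGR: it is a Gini-like impurity of our own construction, weighted by the ordinal distance |i - j| so that confusing far-apart classes costs more than confusing adjacent ones:

```python
from collections import Counter

# Toy ordinal split criterion (illustrative only, not the paper's WIGR):
# impurity penalizes class confusion by ordinal distance |i - j|.

def ordinal_impurity(labels):
    """Gini-like impurity weighted by ordinal distance between classes."""
    n = len(labels)
    counts = Counter(labels)
    return sum((ci / n) * (cj / n) * abs(i - j)
               for i, ci in counts.items()
               for j, cj in counts.items())

def split_gain(parent, left, right):
    """Impurity reduction from splitting `parent` into `left` + `right`."""
    n = len(parent)
    child = (len(left) / n) * ordinal_impurity(left) \
          + (len(right) / n) * ordinal_impurity(right)
    return ordinal_impurity(parent) - child

grades = [1, 1, 2, 3, 3, 3]          # ordinal exam-performance classes
gain = split_gain(grades, [1, 1, 2], [3, 3, 3])
print(round(gain, 4))                # a clean ordinal split yields positive gain
```

A decision-tree learner would evaluate such a gain for every candidate attribute and split on the largest one; the paper's WIGR additionally normalizes the gain as an information-gain ratio.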
Procedia PDF Downloads 101
17538 Seismic Analysis of Adjacent Buildings Connected with Dampers
Authors: Devyani D. Samarth, Sachin V. Bakre, Ratnesh Kumar
Abstract:
This work deals with two adjacent buildings connected with dampers. The Imperial Valley (El Centro) earthquake time history of May 18, 1940 is used for dynamic analysis of the system in the time domain. The effectiveness of fluid joint dampers is then investigated in terms of the reduction of displacement, acceleration, and base shear responses of the adjacent buildings. Finally, an extensive parametric study is carried out to find optimum damper properties, namely stiffness (Kd) and damping coefficient (Cd), for adjacent buildings. Results show that using fluid dampers to connect adjacent buildings of different fundamental frequencies can effectively reduce the earthquake-induced responses of either building if optimum damper properties are selected.
Keywords: energy dissipation devices, time history analysis, viscous damper, optimum parameters
Procedia PDF Downloads 493
17537 Compressed Suffix Arrays to Self-Indexes Based on Partitioned Elias-Fano
Abstract:
A practical and simple self-indexing data structure, the Partitioned Elias-Fano (PEF) Compressed Suffix Array (CSA), is built in linear time for the CSA based on PEF indexes. The PEF-CSA is compared with two classical compressed indexing methods, the Ferragina and Manzini implementation (FMI) and Sad-CSA, on files of different types and sizes from the Pizza & Chili corpus. The PEF-CSA performs better on the existing data in terms of compression ratio, count time, and locate time, except for evenly distributed data such as the proteins data. The experiments show that the distribution of φ matters more than the alphabet size for the compression ratio: unevenly distributed φ data compresses better, and the larger the hit counts, the longer the count and locate times.
Keywords: compressed suffix array, self-indexing, partitioned Elias-Fano, PEF-CSA
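Partitioned Elias-Fano builds on plain Elias-Fano coding of a monotonically increasing integer sequence; the minimal sketch below encodes and decodes the plain (unpartitioned) form to show the high-bits/low-bits split that PEF then partitions further. The example sequence is illustrative:

```python
from math import floor, log2

def ef_encode(seq, universe):
    """Split each value into l low bits and a unary-coded high part."""
    n = len(seq)
    l = max(0, floor(log2(universe / n)))    # low-bit width
    lows = [v & ((1 << l) - 1) for v in seq]
    highs = [v >> l for v in seq]
    # Unary upper-bits bitmap: for the i-th element write
    # (highs[i] - previous high) zeros then a one (highs is monotone).
    bitmap, prev = [], 0
    for h in highs:
        bitmap.extend([0] * (h - prev) + [1])
        prev = h
    return l, lows, bitmap

def ef_decode(l, lows, bitmap):
    """Rebuild the sequence by scanning the unary bitmap."""
    out, h, i = [], 0, 0
    for bit in bitmap:
        if bit == 0:
            h += 1               # zeros advance the high part
        else:
            out.append((h << l) | lows[i])
            i += 1
    return out

seq = [3, 4, 7, 13, 14, 15, 21, 43]
l, lows, bitmap = ef_encode(seq, universe=44)
print(ef_decode(l, lows, bitmap))
```

The low bits take l = ⌊log2(u/n)⌋ bits per element and the unary bitmap at most 2n bits, which is what makes the representation attractive for suffix-array components such as φ.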
Procedia PDF Downloads 253
17536 PVMODREL© Development Based on Reliability Evaluation of a PV Module Using Accelerated Degradation Testing
Authors: Abderafi Charki, David Bigaud
Abstract:
The aim of this presentation is to introduce PVMODREL© (PhotoVoltaic MODule RELiability), new software developed at the University of Angers. This new tool makes it possible to evaluate the lifetime and reliability of a PV module whatever its geographical location and environmental conditions. The electrical power output of a PV module decreases with time, mainly as a result of corrosion, encapsulation discoloration, and solder bond failure. The failure of a PV module is defined as the point where the electrical power degradation reaches a given threshold value. Accelerated life tests (ALTs) are commonly used to assess the reliability of a PV module. However, ALTs provide limited data on module failure, and these tests are expensive to carry out. One possible solution is to conduct accelerated degradation tests. The Wiener process, in conjunction with the accelerated failure time model, makes it possible to carry out numerous simulations and thus determine the failure time distribution based on the aforementioned threshold value. By this means, the failure time distribution and the lifetime (mean and uncertainty) can be evaluated. An example using the damp heat test is shown to demonstrate the usefulness of PVMODREL.
Keywords: lifetime, reliability, PV Module, accelerated life testing, accelerated degradation testing
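The Wiener-process simulation idea described above can be sketched with a small Monte-Carlo experiment: degradation follows D(t) = μt + σW(t), and the module "fails" when D first reaches the threshold. The drift, volatility, and 20% threshold below are illustrative values, not calibrated to any real module:

```python
import random

# Monte-Carlo sketch: first-passage times of a drifted Wiener
# degradation process approximate the failure-time distribution.

def failure_time(mu, sigma, threshold, dt=0.1, t_max=100.0, rng=random):
    """First passage time of one simulated degradation path (capped at t_max)."""
    d, t = 0.0, 0.0
    while t < t_max:
        d += mu * dt + sigma * rng.gauss(0.0, 1.0) * (dt ** 0.5)
        t += dt
        if d >= threshold:
            return t
    return t_max

random.seed(42)
lifetimes = [failure_time(mu=0.8, sigma=0.5, threshold=20.0)
             for _ in range(500)]
mean_life = sum(lifetimes) / len(lifetimes)
print(round(mean_life, 1))   # mean time (in years, say) to 20 % power loss
```

For a drifted Wiener process the theoretical mean first-passage time is threshold/μ (here 25), which the sample mean should approach; the spread of `lifetimes` gives the uncertainty on the lifetime.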
Procedia PDF Downloads 575
17535 Analyzing the Empirical Link between Islamic Finance and Growth of Real Output: A Time Series Application to Pakistan
Authors: Nazima Ellahi, Danish Ramzan
Abstract:
There is a growing consensus among development economists regarding the importance of the financial sector for economic development and growth; the development thus induced helps to promote welfare and poverty alleviation. This study is an attempt to find the nature of the link between Islamic banking financing and output growth for Pakistan, using a time series data set covering the period from 1990 to 2010. Following the Phillips-Perron (PP) and Augmented Dickey-Fuller (ADF) tests of unit roots, this study applied the Ordinary Least Squares (OLS) method of estimation and found encouraging results in favor of promoting Islamic banking practices in Pakistan.
Keywords: Islamic finance, poverty alleviation, economic growth, finance, commerce
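The estimation step of such a study, OLS of output growth on a financing measure, can be sketched with the closed-form simple-regression formulas. The data points below are synthetic and purely illustrative, not the paper's series:

```python
# Simple OLS (one regressor): slope = S_xy / S_xx, intercept from means.

def ols_simple(x, y):
    """Return (intercept, slope) minimizing the sum of squared residuals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return my - slope * mx, slope

financing = [1.0, 1.5, 2.1, 2.8, 3.5, 4.1]    # synthetic financing series
growth    = [2.1, 2.9, 4.0, 5.4, 6.8, 8.0]    # synthetic output growth
a, b = ols_simple(financing, growth)
print(round(a, 3), round(b, 3))    # positive slope: growth rises with financing
```

In practice the PP/ADF unit-root tests are run first precisely so that this regression is estimated on stationary (or appropriately differenced) series rather than spuriously on trending levels.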
Procedia PDF Downloads 346
17534 Clustering Color Space, Time Interest Points for Moving Objects
Authors: Insaf Bellamine, Hamid Tairi
Abstract:
Detecting moving objects in video sequences is an essential step in video analysis. This paper mainly contributes to the extraction and detection of Color Space-Time Interest Points (CSTIP). We propose a new method for the detection of moving objects, composed of two main steps. First, we apply the CSTIP detection algorithm to both components of a color structure-texture image decomposition based on a partial differential equation (PDE): a color geometric structure component and a color texture component. A descriptor is associated with each of these points. In a second stage, we address the problem of grouping the CSTIP points into clusters. Experiments and comparisons with other motion detection methods on challenging sequences show the performance of the proposed method and its utility for video analysis. Experimental results are obtained from very different types of videos, namely sports videos and animation movies.
Keywords: Color Space-Time Interest Points (CSTIP), Color Structure-Texture Image Decomposition, Motion Detection, clustering
Procedia PDF Downloads 379
17533 Application of Global Predictive Real Time Control Strategy to Improve Flooding Prevention Performance of Urban Stormwater Basins
Authors: Shadab Shishegar, Sophie Duchesne, Genevieve Pelletier
Abstract:
Sustainability, one of the key elements of smart cities, can be realized by employing real-time control strategies for a city's infrastructure. Stormwater management systems play an important role in mitigating the impacts of urbanization on the natural hydrological cycle, and they can be managed in such a way that they meet smart-city standards. In fact, there is huge potential for the sustainable management of urban stormwater and for its adaptability to global challenges like climate change; hence, a dynamically managed system that can adapt to unstable environmental conditions is desirable. A global predictive real-time control approach is proposed in this paper to optimize the performance of stormwater management basins in terms of flooding prevention. To do so, a mathematical optimization model is developed and then solved using a Genetic Algorithm (GA). Results show improved system-level performance for the stormwater basins in comparison to a static strategy.
Keywords: environmental sustainability, optimization, real time control, storm water management
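The GA-based optimization idea can be sketched on a toy version of the problem: choose a schedule of basin outflow-valve settings (genes in [0, 1]) that minimizes total overflow above a fixed basin capacity. The inflow hydrograph, capacity, and GA settings below are all illustrative assumptions, not the paper's model:

```python
import random

INFLOW   = [2.0, 5.0, 9.0, 12.0, 7.0, 3.0, 1.0]   # inflow per step (m^3)
CAPACITY = 10.0                                    # basin storage (m^3)
MAX_OUT  = 6.0                                     # valve fully open (m^3/step)

def overflow(genes):
    """Total water spilled above capacity for one valve schedule."""
    level, spilled = 0.0, 0.0
    for q_in, g in zip(INFLOW, genes):
        level = max(0.0, level + q_in - g * MAX_OUT)
        if level > CAPACITY:
            spilled += level - CAPACITY
            level = CAPACITY
    return spilled

def evolve(pop_size=40, generations=60, rng=random):
    """Tiny elitist GA: truncation selection, one-point crossover, mutation."""
    pop = [[rng.random() for _ in INFLOW] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=overflow)
        parents = pop[: pop_size // 2]            # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(INFLOW))   # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(len(child))         # point mutation
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.2)))
            children.append(child)
        pop = parents + children
    return min(pop, key=overflow)

random.seed(7)
best = evolve()
print(round(overflow(best), 3))   # spill after optimization (lower is better)
```

A predictive real-time controller would re-solve such a problem at each control step over a receding forecast horizon, rather than once offline as here.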
Procedia PDF Downloads 179
17532 Queueing Modeling of M/G/1 Fault Tolerant System with Threshold Recovery and Imperfect Coverage
Authors: Madhu Jain, Rakesh Kumar Meena
Abstract:
This paper investigates a finite M/G/1 fault-tolerant multi-component machining system. The system incorporates features such as standby support, threshold recovery, and imperfect coverage, which bring the study closer to real systems. The performance prediction of the M/G/1 fault-tolerant system is carried out using a recursive approach, treating the remaining service time as a supplementary variable. Numerical results are presented to illustrate the computational tractability of the analytical results for three different service time distributions: exponential, 3-stage Erlang, and deterministic. Moreover, a cost function is constructed to determine the optimal choice of system descriptors for upgrading the system.
Keywords: fault tolerant, machine repair, threshold recovery policy, imperfect coverage, supplementary variable technique
Procedia PDF Downloads 292
17531 Of an 80 Gbps Passive Optical Network Using Time and Wavelength Division Multiplexing
Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Faizan Khan, Xiaodong Yang
Abstract:
Internet service providers face endless demands for higher bandwidth and data throughput as new services and applications arise. Users want immediate and accurate data delivery. This article focuses on converting old conventional networks into passive optical networks based on time-division and wavelength-division multiplexing. The main focus of this research is to use a hybrid of time-division multiplexing and wavelength-division multiplexing (TWDM) to improve network efficiency and performance. In this paper, we design an 80 Gbps Passive Optical Network (PON) that meets the requirements of Next Generation PON Stage 2 (NG-PON2). According to the Full Service Access Network (FSAN) group, TWDM is considered the best solution for the implementation of NG-PON2. To coexist with or replace current PON technologies, the multiple wavelengths of TWDM can be deployed simultaneously. Eight pairs of wavelengths are multiplexed and transmitted over 40 km of optical fiber and, on the receiving side, distributed among 256 users, showing that the solution is reliable for implementation with an acceptable data rate. From the results, it can be concluded that this approach increases the overall performance, quality factor, and bandwidth of the network and minimizes the bit error rate.
Keywords: bit error rate, fiber to the home, passive optical network, time and wavelength division multiplexing
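A back-of-the-envelope check of the architecture described above: eight wavelength pairs carrying a per-wavelength line rate aggregate to 80 Gbps, which a 1:256 split then shares among users. The 10 Gbps per-wavelength rate is our assumption (the abstract states only the 80 Gbps total):

```python
# Aggregate and per-user rates of the 8-wavelength TWDM-PON sketch.
wavelength_pairs = 8
rate_per_wavelength_gbps = 10        # assumed NG-PON2-style line rate
users = 256                          # split ratio from the abstract

aggregate_gbps = wavelength_pairs * rate_per_wavelength_gbps
per_user_mbps = aggregate_gbps * 1000 / users
print(aggregate_gbps, per_user_mbps)   # → 80 312.5
```

So even at full fan-out, each of the 256 users sees an average of about 312 Mbps, before any statistical multiplexing gain from the TDM layer.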
Procedia PDF Downloads 72
17530 Design and Development of Graphene Oxide Modified by Chitosan Nanosheets Showing pH-Sensitive Surface as a Smart Drug Delivery System for Control Release of Doxorubicin
Authors: Parisa Shirzadeh
Abstract:
Traditional drug delivery systems, in which drugs are administered in multiple stages and at specified intervals by patients, do not meet the needs of up-to-date drug delivery. In today's world, we are dealing with a huge number of recombinant peptide and protein drugs and analogues of the body's hormones, most of which are made with genetic engineering techniques. Most of these drugs are used to treat critical diseases such as cancer. Due to the limitations of the traditional method, researchers sought ways to solve its problems to a large extent. Following these efforts, controlled drug release systems were introduced, which have many advantages. Using controlled release, the concentration of the drug in the body is kept at a certain level, and delivery is achieved at a higher rate in a shorter time. Graphene is a natural material that is biodegradable and non-toxic; compared to carbon nanotubes, it is cheaper and thus cost-effective for industrialization. On the other hand, the highly reactive and extensive surfaces of graphene plates make graphene more effective to modify than carbon nanotubes. Graphene oxide is often synthesized using concentrated oxidizers such as sulfuric acid, nitric acid, and potassium permanganate, following the Hummers method. Compared with the initial graphene, the resulting graphene oxide is heavier and carries carboxyl, hydroxyl, and epoxy groups. Graphene oxide is therefore very hydrophilic, dissolves easily in water, and forms a stable solution. On the other hand, because the hydroxyl, carboxyl, and epoxy groups created on the surface are highly reactive, they can connect with other functional groups such as amines, esters, and polymers, bringing new features to the surface of graphene.
In fact, the creation of hydroxyl, carboxyl, and epoxy groups, that is, graphene oxidation, is the first step in creating other functional groups on the surface of graphene. Chitosan is a natural polymer and does not cause toxicity in the body. Due to its chemical structure, with OH and NH groups, it is suitable for binding to graphene oxide and increasing its solubility in aqueous solutions. Graphene oxide (GO) was covalently modified by chitosan (CS) and developed for controlled release of doxorubicin (DOX). In this study, GO is produced by the Hummers method under acidic conditions. It is then chlorinated by oxalyl chloride to increase its reactivity toward amines. After that, an amidation reaction in the presence of chitosan forms amide linkages, and doxorubicin is attached to the carrier surface by π-π interaction in phosphate buffer. GO, GO-CS, and GO-CS-DOX were characterized by FT-IR, Raman, TGA, and SEM. The loading and release capacity was determined by UV-visible spectroscopy. The loading results showed a high DOX absorption capacity (99%), and pH dependence was identified in the release of DOX from the GO-CS nanosheet at pH 5.3 and 7.4, with a faster release rate under acidic conditions. Keywords: graphene oxide, chitosan, nanosheet, controlled drug release, doxorubicin
Procedia PDF Downloads 122
17529 Real Time Lidar and Radar High-Level Fusion for Obstacle Detection and Tracking with Evaluation on a Ground Truth
Authors: Hatem Hajri, Mohamed-Cherif Rahal
Abstract:
Both Lidars and Radars are sensors for obstacle detection. While Lidars are very accurate on obstacle positions and less accurate on their velocities, Radars are more precise on obstacle velocities and less precise on their positions. Sensor fusion between Lidar and Radar aims at improving obstacle detection by combining the advantages of the two sensors. The present paper proposes a real-time Lidar/Radar data fusion algorithm for obstacle detection and tracking based on the global nearest neighbour standard filter (GNN). This algorithm is implemented and embedded in an automotive vehicle as a component generated by a real-time multisensor software. The benefits of data fusion compared with the use of a single sensor are illustrated through several tracking scenarios (on a highway and on a bend), using real-time kinematic sensors mounted on the ego and tracked vehicles as a ground truth. Keywords: ground truth, Hungarian algorithm, Lidar Radar data fusion, global nearest neighbor filter
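A minimal sketch of the association step at the heart of a GNN filter: pair each Lidar detection with a Radar detection so that the total distance cost is minimal, rejecting pairs outside a validation gate. For a dependency-free illustration this uses brute-force optimal assignment over permutations rather than the Hungarian algorithm named in the keywords; the obstacle coordinates and gate value are hypothetical.

```python
from itertools import permutations

def associate(lidar, radar, gate=2.0):
    """Optimal one-to-one association minimising total squared distance.

    Brute force over permutations (fine for a handful of tracks); a real
    implementation would use the Hungarian algorithm. Pairs farther apart
    than `gate` metres are left unassociated.
    """
    n, m = len(lidar), len(radar)
    assert n <= m, "this sketch assumes no more Lidar tracks than Radar tracks"

    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    best, best_cost = None, float("inf")
    for perm in permutations(range(m), n):
        cost = sum(d2(lidar[i], radar[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return [(i, j) for i, j in enumerate(best) if d2(lidar[i], radar[j]) <= gate ** 2]

# Hypothetical 2-D obstacle positions (x, y) in metres
lidar_obs = [(0.0, 0.0), (10.0, 5.0)]
radar_obs = [(10.3, 5.1), (0.2, -0.1), (50.0, 50.0)]
print(associate(lidar_obs, radar_obs))  # [(0, 1), (1, 0)]
```

The distant Radar detection at (50, 50) stays unmatched, which is exactly the gating behaviour that keeps spurious detections out of the fused track list.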
Procedia PDF Downloads 174
17528 Effect of Feed Supplement Optipartum C+ 200 (Alfa-Amylase and Beta-Glucanase) in In-Line Rumination Parameters
Authors: Ramūnas Antanaitis, Lina Anskienė, Robertas Stoškus
Abstract:
This study was conducted from 2021-05-01 to 2021-08-31 at the Lithuanian University of Health Sciences and one Lithuanian dairy farm with 500 dairy cows (55.911381565736, 21.881321760608195), with an average of 50 calvings per month. Cows (n=20) in the treatment group (TG) were fed the feed supplement Optipartum C+ 200 (enzymes: alfa-amylase 57 units; beta-glucanase 107 units) from 21 days before calving until 30 days after calving at a feeding rate of 200 g/cow/day. Cows in the control group (CG) were fed a ration without the feed supplement. Measurements started 6 days before calving and continued until 21 days after calving. The following indicators were registered with the RumiWatch system: rumination time, eating time, drinking time, rumination chews, eating chews, drinking gulps, boluses, chews per minute, and chews per bolus; and with the SmaXtec system: the temperature and pH of the contents of the cows' reticulorumens and the cows' activity. According to our results, feeding cows from 21 days before calving to 30 days after calving with a feed supplement containing alfa-amylase and beta-glucanase (Optipartum C+ 200, at 200 g/cow/day) can produce increases of 9% in rumination time and eating time, 19% in drinking time, 11% in rumination chews, 16% in eating chews, 13% in the number of boluses per rumination, 5% in chews per minute, and 16% in chews per bolus. We found 1.28% lower reticulorumen pH and 0.64% lower reticulorumen temperature in cows fed the supplement compared with control group cows. Also, cows fed the enzymes were 8.80% more active. Keywords: alfa-amylase, beta-glucanase, cows, in-line, sensors
Procedia PDF Downloads 326
17527 Theoretical Appraisal of Satisfactory Decision: Uncertainty, Evolutionary Ideas and Beliefs, Satisfactory Time Use
Authors: Okay Gunes
Abstract:
Unsatisfactory experiences due to an information shortage regarding the future pay-offs of actual choices yield satisficing decision-making. This research examines, for the first time in the literature, the motivation behind suboptimal decisions due to uncertainty by scrutinizing Adam Smith’s and Jeremy Bentham’s assumptions about the nature of the actions that lead to satisficing behavior, in order to clarify the theoretical background of a “consumption-based satisfactory time” concept. The contribution of this paper with respect to the existing literature is threefold. Firstly, it is shown that Adam Smith’s uncertainty is related to the problem of the constancy of ideas and not directly to beliefs. Secondly, possessions, as in Jeremy Bentham’s oeuvre, are assumed to be just as pleasing as protecting and improving the actual or expected quality of life, so long as they reduce any displeasure due to the undesired outcomes of uncertainty. Finally, each consumption decision incurs its own satisfactory time period, owed to not feeling hungry, being healthy, having transportation, etc. This reveals that the level of satisfaction is indeed a behavioral phenomenon whose value depends on the simultaneous satisfaction derived from all activities. Keywords: decision-making, idea and belief, satisficing, uncertainty
Procedia PDF Downloads 287
17526 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator
Authors: Wedad Albalawi
Abstract:
Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Polya were the first significant systematic treatment of the subject; their work presented fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated via operators: in 1989, weighted Hardy inequalities were obtained for integration operators, and weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study integral solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities have appeared involving Copson and Hardy inequalities on time scales to obtain new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers, and dynamic inequalities on time scales have received a lot of attention in the literature, becoming a major field in pure and applied mathematics.
There are many applications of dynamic equations on time scales in quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale inequalities of Hardy and Copson in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that can be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proof can be carried out by introducing restrictions on the operator in several cases. Concepts from time-scale calculus will be used, which allow many problems from the theories of differential and difference equations to be unified and extended. The proof also uses the chain rule, some properties of multiple integrals on time scales, theorems of Fubini type, and Hölder's inequality. Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator
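For reference, the classical one-dimensional Hardy inequality that the higher-dimensional time-scale versions build on can be stated as follows (a standard textbook statement, not taken from the abstract):

```latex
% Classical Hardy inequality (continuous form): for p > 1 and a
% measurable function f \ge 0 on (0, \infty),
\int_0^\infty \left( \frac{1}{x} \int_0^x f(t)\, \mathrm{d}t \right)^{p} \mathrm{d}x
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^\infty f(x)^{p}\, \mathrm{d}x ,
% where the constant (p/(p-1))^p is best possible.
```

Time-scale versions replace the integrals by delta (or nabla) integrals over an arbitrary nonempty closed subset of the reals, recovering the continuous and discrete Hardy inequalities as the special cases of the real line and the integers.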
Procedia PDF Downloads 97
17525 Part Performance Improvement through Design Optimisation of Cooling Channels in the Injection Moulding Process
Authors: M. A. Alhubail, A. I. Alateyah, D. Alenezi, B. Aldousiri
Abstract:
In this study, conformal cooling channels (CCC) were employed to dissipate heat from polypropylene (PP) parts injected into a stereolithography (SLA) insert to form tensile and flexural test specimens. The direct metal laser sintering (DMLS) process was used to fabricate a mould with optimised CCC, while optimum injection moulding parameters were obtained using Optimal-D. The results show that optimising the cooling channel layout using a DMLS mould significantly shortened cycle time without sacrificing the part's mechanical properties. By applying conformal cooling channels, the cooling phase was reduced by 20 seconds, and defective parts were eliminated. Keywords: optimum parameters, injection moulding, conformal cooling channels, cycle time
Procedia PDF Downloads 228
17524 Analysis and Simulation of TM Fields in Waveguides with Arbitrary Cross-Section Shapes by Means of Evolutionary Equations of Time-Domain Electromagnetic Theory
Authors: Ömer Aktaş, Olga A. Suvorova, Oleg Tretyakov
Abstract:
The boundary value problem on a non-canonical, arbitrarily shaped contour is solved with a numerically effective method, the Analytical Regularization Method (ARM), to calculate propagation parameters. As a result of regularization, the equation of the first kind is reduced to an infinite system of linear algebraic equations of the second kind in the space L2. This system can be solved numerically to any desired accuracy using the truncation method. The parameters obtained, such as the cut-off wavenumber and cut-off frequency, are then used in the waveguide evolutionary equations of time-domain electromagnetic theory to illustrate the real-valued TM fields in lossy and lossless media. Keywords: analytical regularization method, evolutionary equations of time-domain electromagnetic theory, TM field
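The truncation step can be illustrated with a toy second-kind system (I - K)x = b whose kernel entries decay rapidly; the kernel and right-hand side below are hypothetical, chosen only to show that successive truncations converge, which is the mechanism ARM relies on.

```python
def solve(A, b):
    """Naive Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def truncated_solution(N):
    """Truncate the infinite second-kind system (I - K)x = b to N x N.

    Hypothetical kernel K[i][j] = 2^-(i+j+2) and data b[i] = 2^-i; the
    entries decay fast, so the truncated solutions converge quickly.
    """
    K = [[2.0 ** -(i + j + 2) for j in range(N)] for i in range(N)]
    A = [[(1.0 if i == j else 0.0) - K[i][j] for j in range(N)] for i in range(N)]
    b = [2.0 ** -i for i in range(N)]
    return solve(A, b)

x_coarse = truncated_solution(4)[0]
x_fine = truncated_solution(12)[0]
print(abs(x_coarse - x_fine))  # tiny: successive truncations agree
```

Doubling the truncation order barely changes the leading component, so a small N already meets a prescribed accuracy, exactly the property that makes second-kind systems preferable to the original first-kind equation.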
Procedia PDF Downloads 501
17523 Screening of Different Native Genotypes of Broadleaf Mustard against Different Diseases
Authors: Nisha Thapa, Ram Prasad Mainali, Prakriti Chand
Abstract:
Broadleaf mustard is a commercialized leafy vegetable of Nepal. However, its production and productivity are hindered by the high intensity of insect pests and diseases, causing great losses. The plant protection aspects of the crop's disease and damage intensity have not been studied much from a research perspective in Nepal. This research aimed to evaluate broadleaf mustard genotypes for resistance against different diseases. A total of 35 native genotypes of broadleaf mustard were screened by scoring the plants at weekly intervals for ten weeks. Five different diseases, namely Rhizoctonia root rot, Alternaria blight, black rot, turnip mosaic virus disease, and white rust, were recorded on the broadleaf mustard genotypes. Out of 35 genotypes, 23 showed very high Rhizoctonia root rot severity, whereas 8 showed very high Alternaria blight severity. Likewise, 3 genotypes showed high black rot severity, and 1 genotype showed very high turnip mosaic virus disease incidence. Similarly, 2 genotypes showed very high white rust severity. Among the diseases of national importance, Rhizoctonia root rot was found to be the most severe, causing the greatest loss. Genotypes such as Rato Rayo, CO 1002, and CO 11007 showed average to high levels of field resistance; these genotypes should therefore be used, conserved, and stored in mustard improvement programs. Knowledge of the disease resistance or susceptibility of these genotypes can help seed-producing farmers, companies, and other stakeholders through varietal improvement and development work, further aiding sustainable disease management of the vegetable. Keywords: genotype, disease resistance, Rhizoctonia root rot severity, varietal improvement
Procedia PDF Downloads 81
17522 Communication of Expected Survival Time to Cancer Patients: How It Is Done and How It Should Be Done
Authors: Geir Kirkebøen
Abstract:
Most patients with serious diagnoses want to know their prognosis, in particular their expected survival time. As part of the informed consent process, physicians are legally obligated to communicate such information to patients. However, there is no established, evidence-based ‘best practice’ for how to do this. The two questions explored in this study are: how do physicians communicate expected survival time to patients, and how should it be done? We explored the first, descriptive question in a study with Norwegian oncologists as participants. The study had a scenario part and a survey part. In the scenario part, the doctors were asked to imagine that a patient, recently diagnosed with a serious cancer, has asked them: ‘How long can I expect to live with such a diagnosis? I want an honest answer from you!’ The doctors were to assume that the diagnosis is certain and that, from an extensive recent study, they had optimal statistical knowledge, described in detail as a right-skewed survival curve, about how long patients with this diagnosis could be expected to live. The main finding was that very few of the oncologists would explain to the patient the variation in survival time described by the survival curve. The majority would not give the patient an answer at all. Of those who gave an answer, the typical reply was that survival time varies a lot, that it is hard to say in a specific case, that we will come back to it later, and so on. The survey part of the study clearly indicates that the main reason the oncologists would not deliver the mortality prognosis was discomfort with its uncertainty. The scenario part confirmed this finding: the majority of the oncologists explicitly used the uncertainty, the variation in survival time, as a reason not to give the patient an answer. Many studies show that patients want realistic information about their mortality prognosis and that they should be given hope.
The question then is how to communicate the uncertainty of the prognosis in a realistic and optimistic, that is, hopeful, way. Based on psychological research, our hypothesis is that the best way to do this is to explicitly describe the variation in survival time, the (usually) right-skewed survival curve of the prognosis, and to emphasize to the patient the (small) possibility of being a ‘lucky outlier’. We tested this hypothesis in two scenario studies with lay people as participants. The data clearly show that people prefer to receive expected survival time as a median value together with explicit information about the survival curve's right skewness (e.g., concrete examples of ‘positive outliers’), and that communicating expected survival time this way not only provides people with hope but also gives them a more realistic understanding compared with the typical way expected survival time is communicated. Our data indicate that it is not the existence of uncertainty regarding the mortality prognosis that is the problem for patients, but how this uncertainty is, or is not, communicated and explained. Keywords: cancer patients, decision psychology, doctor-patient communication, mortality prognosis
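The ‘lucky outlier’ framing can be illustrated with a hypothetical right-skewed survival distribution. The log-normal shape, the 12-month median, and the sigma value below are illustrative assumptions, not data from the study; the point is only that on a right-skewed curve, a meaningful fraction of patients outlive the median by a wide margin.

```python
import math

def lucky_outlier_prob(multiple, sigma):
    """P(survival > multiple * median) for a log-normal survival time.

    The log-normal is a stand-in for a right-skewed survival curve; sigma
    controls the skew. The normal CDF is computed via math.erf.
    """
    z = math.log(multiple) / sigma
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

# Hypothetical prognosis: median survival 12 months, sigma = 0.8
median_months = 12.0
for mult in (1, 2, 3):
    p = lucky_outlier_prob(mult, 0.8)
    print(f"P(survive > {mult * median_months:.0f} months) = {p:.2f}")
```

Under these assumptions roughly one patient in five outlives twice the median, a concrete, truthful way to pair the median estimate with hope rather than withholding the prognosis altogether.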
Procedia PDF Downloads 334
17521 A Simple Recursive Framework to Generate Gray Codes for Weak Orders in Constant Amortized Time
Authors: Marsden Jacques, Dennis Wong
Abstract:
A weak order is a way to rank n objects where ties are allowed. In this talk, we present a recursive framework to generate Gray codes for weak orders. We then describe a simple algorithm based on the framework that generates 2-Gray codes for weak orders in constant amortized time per string. The framework can easily be modified to generate other Gray codes for weak orders. We provide an example of using the framework to generate the first shift Gray code for weak orders, also in constant amortized time, where consecutive strings differ by a shift or a symbol change. Keywords: weak order, Cayley permutation, Gray code, shift Gray code
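For concreteness, weak orders can be represented as the Cayley permutations named in the keywords: strings over {1..k} in which every value from 1 to the string's maximum occurs at least once (position i gets the rank of object i, equal values meaning a tie). The brute-force enumeration below is exponential, nothing like the paper's constant-amortized-time Gray code generation, but it confirms that the counts are the ordered Bell numbers 1, 3, 13, 75, ...

```python
from itertools import product

def weak_orders(n):
    """Enumerate all weak orders on n items as Cayley permutations.

    A string s over {1..n} encodes a weak order when its values cover
    1..max(s) with no gaps. Brute force: check all n^n candidate strings.
    """
    result = []
    for s in product(range(1, n + 1), repeat=n):
        m = max(s)
        if set(s) == set(range(1, m + 1)):  # every rank 1..m is used
            result.append(s)
    return result

print(len(weak_orders(3)))  # 13, the ordered Bell (Fubini) number for n = 3
```

For example, (2, 1, 2) encodes the weak order in which the second object ranks first and the other two are tied behind it; a Gray code would list all 13 such strings so that consecutive ones differ by a small change.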
Procedia PDF Downloads 180