Search results for: modifiable areal unit problem (MAUP)
5518 The Implications of Some Social Variables in Increasing the Unemployed in Egypt
Authors: Mohamed Elkhouli
Abstract:
This research sets out to identify social factors or variables that may need to be controlled in order to reduce the number of unemployed people in Egypt. It also investigates the relationship between a set of social variables and unemployment in Egypt, with the aim of determining the most important social variables influencing the rise in unemployment during the targeted time series (2002-2012). Unemployment is becoming an increasingly important topic in countries throughout the world as they expand their globalization efforts. In general, the study tries to determine which social priorities the Egyptian government should seriously adopt in order to solve the unemployment problem. The results showed that the low value of small projects and the low total value of disbursed social security each have a significant impact on the increase in the number of unemployed in Egypt over the period targeted by the current study.
Keywords: Egypt, social status, unemployment, unemployed
Procedia PDF Downloads 326
5517 Congestion Control in Mobile Network by Prioritizing Handoff Calls
Authors: O. A. Lawal, O. A Ojesanmi
Abstract:
The demand for wireless cellular services continues to increase while radio resources remain limited. Thus, network operators have to continuously manage the scarce radio resources in order to maintain an improved quality of service for mobile users. This paper proposes handling congestion in the mobile network by prioritizing handoff calls, using the guard channel allocation scheme. The scheme uses a specific threshold value for the time of channel allocation in the algorithm, and is simulated by generating data for various traffic loads in the network, as would occur in real life. The results are used to determine the handoff call dropping probability and the new call blocking probability as measures of network performance.
Keywords: call block, channel, handoff, mobile cellular network
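The guard channel idea described above can be sketched in a few lines: of C total channels, g are reserved for handoffs, so new calls are admitted only while fewer than C - g channels are busy, while handoff calls may take any free channel. The sketch below is a minimal illustration, not the authors' simulator; the channel counts and the uniform-occupancy traffic model are assumptions for the example.

```python
import random

def admit(busy, total=10, guard=2, handoff=False):
    """Guard channel admission rule: handoff calls may take any free
    channel; new calls are blocked once only the guard channels remain."""
    if handoff:
        return busy < total
    return busy < total - guard

# Monte Carlo sketch: with channel occupancy drawn at random, new calls
# are blocked more often than handoff calls are dropped, which is the
# intended prioritization of handoffs.
random.seed(0)
trials = 10_000
blocked = sum(not admit(random.randint(0, 10)) for _ in range(trials))
dropped = sum(not admit(random.randint(0, 10), handoff=True) for _ in range(trials))
print(blocked > dropped)
```

In a full simulator the occupancy would come from arrival/departure processes rather than a uniform draw, but the admission rule is the heart of the scheme.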
Procedia PDF Downloads 394
5516 Ending the Multibillionaire: A Solution to Poverty and Violations of the Right to Health
Authors: Andreanna Kalasountas
Abstract:
A rampant health crisis is facing America. That health crisis is poverty. Millions of Americans live without knowing when they will eat or where they will sleep. Meanwhile, there are over 600 multi-billionaires in the United States. “In April 2021, U.S. billionaires had nearly twice as much combined wealth as the bottom half of Americans -- $4.56 trillion vs. $2.62 trillion.” It is disturbingly ironic that we live in a country where some people have more money than they know what to do with (or could spend in a lifetime) while, simultaneously, people are losing their lives because they do not have enough money to survive. Accordingly, this paper argues for the end of the multi-billionaire: that wealth be capped, captured, and redistributed to the poorest among us. To accomplish this goal, the paper begins by identifying the problem and advocating for a new measurement of poverty, and concludes with both legal and tax policy solutions and what implementation of those solutions would look like.
Keywords: health and human rights, law and policy, poverty, wealth gap
Procedia PDF Downloads 103
5515 The Consequences of Complaint Offenses against Copyright Protection
Authors: Chryssantus Kastowo, Theresia Anita Christiani, Anny Retnowati
Abstract:
Copyright infringement does not always cause harm to the creator. This is evident from the many copyright violations in society that are met with no significant law enforcement effort. Copyright law, as a form of appreciation from the state to the creator, becomes counterproductive if violations are left unaddressed. The problem raised in this article is what model of copyright regulation accords with the purpose of copyright protection law. The article is based on normative legal research focusing on secondary data, analyzed through a conceptual approach. The analysis shows that copyright regulation emphasizes copyright as a subjective right that lies wholly within the author's power. This perspective affects the claiming of rights by the creator and may allow violations. The creator is obliged to maintain the overall performance of copyright protection, especially in the event of a violation.
Keywords: copyright, enforcement, law, violation
Procedia PDF Downloads 135
5514 Deepfake Detection for Compressed Media
Authors: Sushil Kumar Gupta, Atharva Joshi, Ayush Sonawale, Sachin Naik, Rajshree Khande
Abstract:
The use of artificially created videos and audio produced by deep learning is a major problem in the current media landscape, as it serves the goals of misinformation and distrust. The objective of this work is therefore to generate a reliable deepfake detection model using deep learning that helps detect forged videos accurately. In this work, CelebDF v1, one of the largest deepfake benchmark datasets in the literature, is adopted to train and test the proposed models. The data include authentic and synthetic videos of high quality, allowing an assessment of the model’s performance against realistic distortions.
Keywords: deepfake detection, CelebDF v1, convolutional neural network (CNN), xception model, data augmentation, media manipulation
Procedia PDF Downloads 9
5513 Effects of Changes in LULC on Hydrological Response in Upper Indus Basin
Authors: Ahmad Ammar, Umar Khan Khattak, Muhammad Majid
Abstract:
Empirically based lumped hydrologic models have an extensive track record of use in watershed management and flood-related studies. This study focuses on the impacts of LULC change over a 10-year period on watershed discharge, using the lumped model HEC-HMS. The Indus above Tarbela region acts as a source of the main flood events in the middle and lower portions of the Indus because of the amount of rainfall and the topographic setting of the region, and the discharge pattern of the region is influenced by its LULC. In this study, Landsat TM images were used for the LULC analysis of the watershed, and TRMM satellite daily precipitation data were used as input rainfall. The input variables for model building in HEC-HMS were then calculated from the GIS data collected and pre-processed in HEC-GeoHMS. SCS-CN was used as the loss model, the SCS unit hydrograph method as the transform model, and Muskingum as the routing model. The years 2000 and 2010 were taken for discharge simulation: HEC-HMS was calibrated for the year 2000 and then validated for 2010. The performance of the model was assessed through the calibration and validation process, yielding R² = 0.92 for both calibration and validation; the relative bias was -9% for 2000 and -14% for 2010. The results show that over 10 years the impact of LULC change on discharge has been negligible in the study area overall. One reason is that the proportion of built-up area in the watershed, the main causative factor of change in discharge, is less than 1% of the total area. However, locally, the impact of development was found to be significant in the built-up area of Mansehra city. The analysis was done on the Mansehra city sub-watershed, which has an area of about 16 km² and more than 13% built-up area in 2010. The results showed that with an increase of 40% in built-up area in the city from 2000 to 2010, the discharge values increased by about 33%, indicating the impact of LULC change on discharge.
Keywords: LULC change, HEC-HMS, Indus Above Tarbela, SCS-CN
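The SCS-CN loss model named in the keywords turns a rainfall depth into a runoff depth from a single curve number, which is why more built-up area (higher CN) yields more discharge for the same storm. A minimal sketch of the standard formula in SI units, with the usual initial abstraction Ia = 0.2S; the curve numbers below are assumptions for illustration, not the values calibrated in the study.

```python
def scs_cn_runoff(p_mm, cn):
    """SCS Curve Number direct runoff (mm): Q = (P - 0.2S)^2 / (P + 0.8S),
    with potential maximum retention S = 25400/CN - 254 (SI units)."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s  # initial abstraction
    if p_mm <= ia:
        return 0.0  # all rainfall absorbed before runoff begins
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

# A higher curve number (more built-up, less pervious area) yields more
# runoff for the same 100 mm storm, mirroring the Mansehra city result.
print(scs_cn_runoff(100, 70) < scs_cn_runoff(100, 90))  # True
```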
Procedia PDF Downloads 513
5512 Research on the Optimization of Satellite Mission Scheduling
Authors: Pin-Ling Yin, Dung-Ying Lin
Abstract:
Satellites play an important role in our daily lives, from monitoring the Earth's environment and providing real-time disaster imagery to predicting extreme weather events. As technology advances and demands increase, the tasks undertaken by satellites have become increasingly complex, with more stringent resource management requirements. A common challenge in satellite mission scheduling is the limited availability of resources, including onboard memory, ground station accessibility, and satellite power. In this context, efficiently scheduling and managing increasingly complex satellite missions under constrained resources has become a critical issue. The core of Satellite Onboard Activity Planning (SOAP) lies in optimizing the scheduling of the received tasks, arranging them on a timeline to form an executable onboard mission plan. This study develops an optimization model that considers the various constraints involved in satellite mission scheduling, such as non-overlapping execution periods for certain types of tasks, the requirement that tasks fall within the contact range of specified types of ground stations during their execution, onboard memory capacity limits, and collaborative constraints between different types of tasks. Specifically, this research constructs a mixed-integer programming mathematical model and solves it with a commercial optimization package. As the problem size increases, however, the problem becomes more difficult to solve, so a heuristic algorithm has also been developed to address the limitations of the commercial package at larger scales. The goal is to plan satellite missions effectively, maximizing the total number of executable tasks while considering task priorities and ensuring that tasks are completed as early as possible without violating feasibility constraints. To verify the feasibility and effectiveness of the algorithm, test instances of various sizes were generated, and the results were validated through feedback from on-site users and compared against solutions obtained from the commercial optimization package. Numerical results show that the algorithm performs well under various scenarios, consistently meeting user requirements. The proposed satellite mission scheduling algorithm can be flexibly extended to different types of satellite mission demands, achieving optimal resource allocation and enhancing the efficiency and effectiveness of satellite mission execution.
Keywords: mixed-integer programming, meta-heuristics, optimization, resource management, satellite mission scheduling
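The heuristic side of such a scheduler can be illustrated with a simple priority-first greedy rule: consider tasks in priority order and place each at its earliest feasible time that does not overlap tasks already on the timeline, dropping tasks whose deadline cannot be met. This sketch ignores memory and ground-station constraints and uses made-up task data; it illustrates only the scheduling skeleton, not the authors' algorithm.

```python
def greedy_schedule(tasks):
    """tasks: list of (name, priority, duration, release, deadline).
    Returns {name: start} for tasks placed without overlap,
    highest priority first, each as early as feasible."""
    placed = []  # (start, end) intervals already on the timeline
    plan = {}
    for name, _, dur, rel, dl in sorted(tasks, key=lambda t: -t[1]):
        start = rel
        for s, e in sorted(placed):
            if start + dur <= s:
                break  # fits in the gap before this interval
            start = max(start, e)
        if start + dur <= dl:
            placed.append((start, start + dur))
            plan[name] = start
    return plan

# Hypothetical tasks: highest-priority imaging goes first, downlink is
# pushed after it, and the low-priority calibration no longer fits.
tasks = [("imaging", 3, 4, 0, 10),
         ("downlink", 2, 3, 0, 10),
         ("calib", 1, 5, 0, 10)]
print(greedy_schedule(tasks))  # {'imaging': 0, 'downlink': 4}
```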
Procedia PDF Downloads 25
5511 Synthesis and Characterizations of Lead-free BaO-Doped TeZnCaB Glass Systems for Radiation Shielding Applications
Authors: Rezaul K. Sk., Mohammad Ashiq, Avinash K. Srivastava
Abstract:
The use of radiation shielding technology, ranging from EMI to high-energy gamma rays, is increasing all over the world in areas such as devices, medical science, defense, nuclear power plants and medical diagnostics. However, exposure to radiation such as X-rays, gamma rays, neutrons and EMI above permissible limits is harmful to living beings, the environment and sensitive laboratory equipment, so effective radiation shielding materials need to be developed. Conventionally, lead and lead-based materials are used to make shielding materials, as lead is cheap, dense and provides very effective shielding against radiation. However, the problem associated with lead is that it is toxic and carcinogenic. To overcome these drawbacks, there is a great need for lead-free radiation shielding materials that are also economically sustainable; it is therefore necessary to synthesize radiation-shielding glass using other heavy metal oxides (HMO) instead of lead. The lead-free BaO-doped TeZnCaB glass systems have been synthesized by the traditional melt-quenching method. X-ray diffraction analysis confirmed the glassy nature of the synthesized samples. The densities of the developed glass samples increased with BaO doping, ranging from 4.292 to 4.725 g/cm³. The vibrational and bending modes of the BaO-doped glass samples were analyzed by Raman spectroscopy, and FTIR (Fourier-transform infrared spectroscopy) was performed to study the functional groups present in the samples. UV-visible characterization revealed the significant optical parameters, such as Urbach’s energy, refractive index and optical energy band gap. The indirect and direct energy band gaps decreased with BaO concentration, whereas the refractive index increased. X-ray attenuation measurements were performed to determine the radiation shielding parameters, such as the linear attenuation coefficient (LAC), mass attenuation coefficient (MAC), half value layer (HVL), tenth value layer (TVL), mean free path (MFP), attenuation factor (Att%) and lead equivalent thickness of the lead-free BaO-doped TeZnCaB glass system. The radiation shielding characteristics were enhanced by the addition of BaO to the TeZnCaB glass samples, and the samples with higher BaO content showed the best attenuation performance. It can therefore be concluded that adding BaO to TeZnCaB glass is an effective way to improve its radiation shielding performance. The best lead equivalent thickness was 2.626 mm, and these glasses could be good materials for medical diagnostics applications.
Keywords: heavy metal oxides, lead-free, melt-quenching method, x-ray attenuation
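The shielding parameters listed above all follow from the linear attenuation coefficient μ: the mass attenuation coefficient is μ/ρ, the half and tenth value layers are ln 2/μ and ln 10/μ, and the mean free path is 1/μ. A small sketch with an assumed μ and density (not the measured values for these glasses) illustrates the relations:

```python
import math

def shielding_params(mu_per_cm, density_g_cm3):
    """Derive standard shielding figures from a linear attenuation
    coefficient mu (1/cm) and material density (g/cm^3)."""
    return {
        "MAC (cm^2/g)": mu_per_cm / density_g_cm3,  # mass attenuation
        "HVL (cm)": math.log(2) / mu_per_cm,        # halves intensity
        "TVL (cm)": math.log(10) / mu_per_cm,       # tenths intensity
        "MFP (cm)": 1.0 / mu_per_cm,                # mean free path
    }

# Assumed example values for illustration, not the paper's measurements.
p = shielding_params(mu_per_cm=0.5, density_g_cm3=4.5)
print(round(p["HVL (cm)"], 3))  # 1.386
```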
Procedia PDF Downloads 31
5510 Computational Fluid Dynamics Based Analysis of Heat Exchanging Performance of Rotary Thermal Wheels
Authors: H. M. D. Prabhashana Herath, M. D. Anuradha Wickramasinghe, A. M. C. Kalpani Polgolla, R. A. C. Prasad Ranasinghe, M. Anusha Wijewardane
Abstract:
The demand for thermal comfort in buildings in hot and humid climates increases progressively. In general, buildings in hot and humid climates spend more than 60% of their total energy cost on the air conditioning (AC) system. Hence, it is necessary to install energy-efficient AC systems or to integrate energy recovery systems into both new and existing AC systems whenever possible, to reduce the energy consumed by the AC system. Integrating a rotary thermal wheel as the energy recovery device of an existing AC system has proven very promising, with attractive payback periods of less than 5 years. A rotary thermal wheel can be located in the Air Handling Unit (AHU) of a central AC system to recover the energy available in the return air stream. During this study, a sensitivity analysis was performed using CFD (Computational Fluid Dynamics) software to determine the optimum design parameters (i.e., rotary speed and parameters of the matrix profile) of a rotary thermal wheel for hot and humid climates. The simulations were performed for a sinusoidal matrix geometry, and the sinusoidal matrix parameters, i.e., span length and height, were varied to understand the heat exchanging performance and the pressure drop induced by the air flow. The results show that the heat exchanging performance increases with wheel rpm, but the rate of improvement falls off as the rpm rises; as a result, it is more advisable to operate the wheel at 10-20 rpm. For the geometry, it was found that sinusoidal geometries with smaller spans and greater heights have higher heat exchanging capability. Of the sinusoidal profiles analyzed during the study, the geometry with 4 mm height and 3 mm width performed better than the other combinations.
Keywords: air conditioning, computational fluid dynamics, CFD, energy recovery, heat exchangers
Procedia PDF Downloads 129
5509 Cosmic Dust as Dark Matter
Authors: Thomas Prevenslik
Abstract:
Weakly Interacting Massive Particle (WIMP) experiments suggesting dark matter does not exist are consistent with the argument that the long-standing galaxy rotation problem may be resolved without the need for dark matter, provided the redshift measurements giving the higher-than-expected galaxy velocities are corrected for the redshift in cosmic dust. Because of the ubiquity of cosmic dust, all redshift-based velocity measurements in astronomy are most likely overstated; for example, an accelerating expansion of the Universe need not exist if the data showing supernovae brighter than expected from the redshift/distance relation are corrected for the redshift in dust. Extensions of redshift corrections for cosmic dust to other historical astronomical observations are briefly discussed.
Keywords: alternative theories, cosmic dust redshift, doppler effect, quantum mechanics, quantum electrodynamics
Procedia PDF Downloads 297
5508 The Mechanisms of Peer-Effects in Education: A Frame-Factor Analysis of Instruction
Authors: Pontus Backstrom
Abstract:
In the educational literature on peer effects, attention has been drawn to the fact that the mechanisms creating peer effects remain to a large extent hidden in obscurity. The hypothesis in this study is that the Frame Factor Theory can be used to explain these mechanisms. At the heart of the theory is the concept of “time needed” for students to learn a certain curriculum unit. The relation between class-aggregated time needed and the actual time available steers and limits the actions possible for the teacher. Further, the theory predicts that the timing and pacing of the teacher’s instruction are governed by a “criterion steering group” (CSG), namely the pupils in the 10th-25th percentile of the aptitude distribution in class. The class composition thereby sets the possibilities and limitations for instruction, creating peer effects on individual outcomes. To test whether the theory can be applied to the issue of peer effects, the study employs multilevel structural equation modelling (M-SEM) on Swedish TIMSS 2015 data (Trends in International Mathematics and Science Study; students N = 4090, teachers N = 200). Using confirmatory factor analysis (CFA) in the SEM framework in MPLUS, latent variables such as “limitations of instruction” are specified from TIMSS survey items according to the theory. The results indicate a good fit of the measurement model to the data. Research is still in progress, but preliminary results from initial M-SEM models verify a strong relation between the mean level of the CSG and the latent variable of limitations on instruction, a variable which in turn has a great impact on individual students’ test results. Further analysis is required, but so far the analysis confirms the predictions derived from the frame factor theory and reveals that one of the important mechanisms creating peer effects in student outcomes is the effect the class composition has upon the teacher’s instruction in class.
Keywords: compositional effects, frame factor theory, peer effects, structural equation modelling
Procedia PDF Downloads 134
5507 An Exploration of Survival Risk Factors of Stroke Patients at a General Hospital in Northern Taiwan
Authors: Hui-Chi Huang, Su-Ju Yang, Ching-Wei Lin, Jui-Yao Tsai, Liang-Yiang
Abstract:
Background: The most common serious complication following acute stroke is pneumonia. It has been associated with increased morbidity, mortality, and medical cost after acute stroke in elderly patients. Purpose: The aim of this retrospective study was to investigate the relationship between stroke patients, risk factors for pneumonia, and one-year survival rates in a group of patients at a tertiary referral center in Northern Taiwan. Methods: From January 2012 to December 2013, a total of 1730 consecutively admitted stroke patients were recruited. Survival analysis and multivariate regression analyses were used to examine predictors of one-year survival in stroke patients from a stroke registry database from northern Taiwan. Results: The risk of stroke mortality increased with age ≥ 75 (OR = 2.305, p < .0001), cancer (OR = 3.221, p < .0001), ICU stay (OR = 2.28, p < .0006), dysphagia (OR = 5.026, p < .0001), absence of speech therapy (OR = 0.192, p < .0001), serum albumin < 2.5 (OR = 0.322, p = .0053), eGFR > 60 (OR = 0.438, p < .0001), admission NIHSS > 11 (OR = 1.631, p = .0196), length of hospitalization > 30 days (OR = 0.608, p = .0227), and stroke subtype (OR = 0.506, p = .0032). After adjustment for confounders, pneumonia was not significantly associated with the risk of mortality. However, pneumonia is most likely to develop in patients who are aged ≥ 75 or have dyslipidemia, coronary artery disease, albumin < 2.5, eGFR < 60, ventilator use, ICU stay, dysphagia, no speech therapy, urinary tract infection, atrial fibrillation, admission NIHSS > 11, length of hospitalization > 30 days, or stroke severity mRS = 3-5. Conclusion: In this study, differing from previous research findings, we found that elderly age, severe neurological deficit and rehabilitation therapy were significantly associated with post-stroke pneumonia. However, specific preventive strategies are needed to target the high-risk groups to improve their long-term outcomes after acute stroke. These findings could open new avenues in the management of stroke patients.
Keywords: stroke, risk, pneumonia, survival
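Odds ratios like those reported above come from exponentiating logistic regression coefficients; an OR above 1 marks a risk factor, below 1 a protective one. The sketch below uses coefficients back-derived from three of the reported ORs purely for illustration; they are assumptions, not the study's fitted model.

```python
import math

def odds_ratios(coefficients):
    """Convert logistic regression coefficients (log-odds) to odds ratios."""
    return {name: math.exp(beta) for name, beta in coefficients.items()}

# Illustrative coefficients chosen so the resulting ORs echo the values
# reported in the abstract (2.305, 5.026, 0.192); not the study's estimates.
ors = odds_ratios({"age>=75": 0.835, "dysphagia": 1.615, "speech_therapy": -1.650})
print({k: round(v, 2) for k, v in ors.items()})
```

Reading the output: dysphagia multiplies the odds of mortality roughly fivefold, while receiving speech therapy is associated with much lower odds.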
Procedia PDF Downloads 242
5506 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour
Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale
Abstract:
Mumbai being traditionally the epicenter of India's trade and commerce, the existing major ports, such as Mumbai Port and Jawaharlal Nehru Port (JN) situated in the Thane estuary, are also developing their waterfront facilities. Various developments over the past decades in this region have changed the tidal flux entering and leaving the estuary. The intake at Pir-Pau faces a shortage of water owing to the advancement of the shoreline, while the jetty near Ulwe faces a ship-scheduling problem due to the shallower depths between JN Port and Ulwe Bunder. Solving these problems requires information about tide levels over a long duration from field measurements; however, field measurement is a tedious and costly affair, so artificial intelligence was applied to predict water levels by training a network on the tide data measured over one lunar tidal cycle. A two-layer feed-forward Artificial Neural Network (ANN) with the back-propagation training algorithms Gradient Descent (GD) and Levenberg-Marquardt (LM) was used to predict the yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi over a lunar tidal cycle (2013) were used to train, validate and test the neural networks. The trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give a reasonably accurate estimate of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe; subsequently, the yearly tide data (2015) at Pir-Pau were predicted using a neural network trained on the measured tide data (2000) of Apollo and Pir-Pau. The analysis of the measured data and the study reveal that: the measured tidal data at Pir-Pau, Vashi and Ulwe show a maximum amplification of the tide of about 10-20 cm with a phase lag of 10-20 minutes with reference to the tide at Apollo Bunder (Mumbai); the LM training algorithm is faster than GD, and the performance of the network increases with the number of neurons in the hidden layer; and the tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels for planning the operation of pumping at Pir-Pau and improving the ship schedule at Ulwe.
Keywords: artificial neural network, back-propagation, tide data, training algorithm
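A feed-forward network trained by gradient descent on a periodic tide-like signal can be sketched without any ML library. The toy below fits one hidden tanh layer to a synthetic sinusoid and checks that the fit error falls during training; it is a stand-in for the GD-trained ANN in the study, with all sizes and rates assumed, not the authors' model or data.

```python
import math, random

def train_tide_net(hidden=8, epochs=500, lr=0.02, seed=1):
    """Minimal 1-hidden-layer feed-forward net (tanh hidden, linear output)
    trained by per-sample gradient descent on a synthetic sinusoidal
    'tide' signal. Returns (initial MSE, final MSE)."""
    random.seed(seed)
    xs = [i / 50.0 for i in range(50)]
    ys = [math.sin(2 * math.pi * x) for x in xs]  # synthetic tide curve
    w1 = [random.uniform(-1, 1) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [random.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return h, sum(w2[j] * h[j] for j in range(hidden)) + b2

    def mse():
        return sum((forward(x)[1] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

    first = mse()
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h, out = forward(x)
            err = out - y
            for j in range(hidden):
                grad_h = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
                w2[j] -= lr * err * h[j]
                b1[j] -= lr * grad_h
                w1[j] -= lr * grad_h * x
            b2 -= lr * err
    return first, mse()

before, after = train_tide_net()
print(after < before)  # training reduces the fit error
```

The study's LM algorithm converges faster than this plain GD loop, which is exactly the comparison the abstract reports.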
Procedia PDF Downloads 483
5505 Analyzing Doctors’ Knowledge of the United Kingdom Chief Medical Officer's Guidelines for Physical Activity: Survey of Secondary Care Doctors in a District General Hospital
Authors: Alexandra Von Guionneau, William Sloper, Charlotte Burford
Abstract:
The benefits of exercise for the prevention and management of chronic disease are well established, and the importance of primary care practitioners in promoting exercise is becoming increasingly recognized. However, those with severe manifestations of chronic disease are managed in a secondary care setting, so secondary care practitioners also have a role to play in promoting physical activity. Methods: In order to assess secondary care doctors’ knowledge of the Chief Medical Officer’s guidelines for physical activity, a 12-question survey was administered to staff working in a district general hospital in South England during team and unit meetings. Questions related to knowledge of the current guidelines for both 19-64 year olds and older adults (65 years and above), barriers to exercise discussion or prescription, and doctors’ own exercise habits. Responses were collected anonymously and analyzed using SPSS Version 24.0. Results: 96 responses were collected. Doctors taking part in the survey ranged from foundation years (26%) to consultants (40%). 17.7% of participants knew the guidelines for moderate-intensity activity for 19-64 year olds, and only one participant knew all of the guidance for both 19-64 year olds and older adults. While 71.6% of doctors felt they were adequately informed about how to exercise, only 45.6% met the minimum recommended guidance for moderate-intensity activity. Conclusion: More work is needed to promote the physical activity guidelines and exercise prescription to doctors working within a secondary care setting. In addition, doctors require more support to personally meet the recommended minimum level of physical activity.
Keywords: exercise is medicine, exercise prescription, physical activity guidelines, exercise habits
Procedia PDF Downloads 250
5504 Exponential Spline Solution for Singularly Perturbed Boundary Value Problems with an Uncertain-But-Bounded Parameter
Authors: Waheed Zahra, Mohamed El-Beltagy, Ashraf El Mhlawy, Reda Elkhadrawy
Abstract:
In this paper, we consider singularly perturbed reaction-diffusion boundary value problems that contain a small, uncertain perturbation parameter. To solve these problems, we propose a numerical method based on an exponential spline and a Shishkin mesh discretization. The interval analysis principle is used to deal with the uncertain parameter, and sensitivity analysis has been conducted using different methods. Numerical results are provided to show the applicability and efficiency of our method, which exhibits ε-uniform convergence of almost second order.
Keywords: singular perturbation problem, shishkin mesh, two small parameters, exponential spline, interval analysis, sensitivity analysis
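A Shishkin mesh is piecewise uniform: it condenses points in the boundary layers of width τ at each end of [0, 1] and spreads the rest evenly over the interior. A common choice for reaction-diffusion problems is τ = min(1/4, C√ε ln N); the constant C = 2 below is an assumption for illustration, and the paper's choice may differ.

```python
import math

def shishkin_mesh(n, eps):
    """Piecewise-uniform Shishkin mesh on [0, 1] with N subintervals
    (N divisible by 4): N/4 intervals in each boundary layer of width
    tau, N/2 intervals in the interior."""
    tau = min(0.25, 2.0 * math.sqrt(eps) * math.log(n))
    left = [i * tau / (n // 4) for i in range(n // 4)]
    mid = [tau + i * (1 - 2 * tau) / (n // 2) for i in range(n // 2)]
    right = [1 - tau + i * tau / (n // 4) for i in range(n // 4 + 1)]
    return left + mid + right

mesh = shishkin_mesh(64, 1e-6)
# 65 nodes from 0 to 1, tightly clustered near both endpoints.
print(len(mesh), mesh[0], round(mesh[-1], 12))  # 65 0.0 1.0
```

On such a mesh a fitted scheme (here, the exponential spline) can resolve the boundary layers uniformly in ε, which is what makes the ε-uniform convergence claim possible.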
Procedia PDF Downloads 274
5503 Some Basic Problems for the Elastic Material with Voids in the Case of Approximation N=1 of Vekua's Theory
Authors: Bakur Gulua
Abstract:
In this work, we consider some boundary value problems for a plate made of an elastic material with voids. The state of plate equilibrium is described by a system of differential equations derived from the three-dimensional equations of equilibrium of an elastic material with voids (Cowin-Nunziato model) by Vekua's reduction method. Its general solution is represented by means of analytic functions of a complex variable and solutions of Helmholtz equations. The problem is solved analytically by the methods of the theory of functions of a complex variable.
Keywords: the elastic material with voids, boundary value problems, Vekua's reduction method, a complex variable
Procedia PDF Downloads 127
5502 Biodegradability and Thermal Properties of Polycaprolactone/Starch Nanocomposite as a Biopolymer
Authors: Emad A. Jaffar Al-Mulla
Abstract:
In this study, a biopolymer-based nanocomposite was successfully prepared through a melt blending technique. Two biodegradable polymers, polycaprolactone and starch, were chosen because they are environmentally friendly and obtained from renewable, easily available raw materials. Fatty hydrazide, synthesized from palm oil, was used as a surfactant to modify montmorillonite (a natural clay) for the preparation of the polycaprolactone/starch nanocomposite. X-ray diffraction and transmission electron microscopy were used to characterize nanocomposite formation. The compatibility of the blend was improved by adding 3 wt% modified clay, and the nanocomposite also showed higher biodegradability and thermal stability than the plain polycaprolactone/starch blend. This product will help address the problem of plastic waste, especially disposable packaging, and reduce dependence on petroleum-based polymers and surfactants.
Keywords: polycaprolactone, starch, biodegradable, nanocomposite
Procedia PDF Downloads 358
5501 Passive Solar Distiller with Low Cost of Implementation, Operation and Maintenance
Authors: Valentina Alessandra Carvalho do Vale, Elmo Thiago Lins Cöuras Ford, Rudson de Sousa Lima
Abstract:
Around the planet, access to clean water is a problem whose importance has increased due to population growth and the misuse of water. Thus, projects that seek to transform improper water sources (salty and brackish) into drinking water sources are a current issue. However, this transformation generally requires high costs of implementation, operation and maintenance. In this context, the aim of this work is the development of a passive solar distiller for brackish water, made from recycled and durable materials such as aluminum, cement, glass and PVC basins. The results reveal the factors that influence the performance and viability of expanding the project.
Keywords: solar distiller, passive distiller, distiller with pyramidal roof, ecologically correct
Procedia PDF Downloads 414
5500 Towards a Common Architecture for Cloud Computing Interoperability
Authors: Sana Kouchi, Hassina Nacer, Kadda Beghdad-bey
Abstract:
Cloud computing is growing very fast in the market and has become one of the most discussed and controversial developments of recent years. Cloud computing providers have become very numerous, and each prefers its own cloud computing infrastructure. The incompatibility of standards and cloud access formats prevents providers from supporting cloud computing applications in a standardized manner, and this heterogeneity creates the problem of interoperability between clouds. Cloud customers, meanwhile, are likely in search of interoperable cloud computing, where they have total control over their applications and can simply migrate their services as needed, without additional development investment; a cloud federation strategy should therefore be considered. In this article, we propose a common architecture for the cloud that is based on existing architectures and on best practices from ICT frameworks, such as IBM, ITIL, NIST, etc., to address the interoperability issues between architectures in a multi-cloud system.
Keywords: cloud computing, reference architecture, interoperability, standard
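One common way interoperability is tackled at the application level is through an abstraction layer: code targets one provider-neutral interface, and per-provider adapters translate to each vendor's API, so a service can migrate between clouds without rewriting. The sketch below is a generic illustration of that pattern with invented provider classes; it is not the reference architecture proposed in the article.

```python
from abc import ABC, abstractmethod

class CloudStorage(ABC):
    """Provider-neutral interface the application codes against."""
    @abstractmethod
    def upload(self, bucket: str, key: str, data: bytes) -> str: ...

class VendorAStorage(CloudStorage):
    def upload(self, bucket, key, data):
        # a real adapter would call vendor A's SDK here (hypothetical)
        return f"a://{bucket}/{key}"

class VendorBStorage(CloudStorage):
    def upload(self, bucket, key, data):
        # a real adapter would call vendor B's SDK here (hypothetical)
        return f"b://{bucket}/{key}"

def migrate(src_uri: str, backend: CloudStorage) -> str:
    """Re-upload an object under another provider: the application
    logic is untouched; only the adapter changes."""
    _, _, rest = src_uri.partition("://")
    bucket, _, key = rest.partition("/")
    return backend.upload(bucket, key, b"")

print(migrate("a://photos/cat.png", VendorBStorage()))  # b://photos/cat.png
```

A federation layer generalizes this idea across compute, storage and identity services rather than a single storage interface.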
Procedia PDF Downloads 172
5499 Estimation of Population Mean under Random Non-Response in Two-Phase Successive Sampling
Authors: M. Khalid, G. N. Singh
Abstract:
In this paper, we consider the problem of estimating the population mean on the current (second) occasion in the presence of random non-response in two-occasion successive sampling under a two-phase set-up. Modified exponential-type estimators are proposed, and their properties are studied under the assumption that the number of responding sampling units follows a distribution arising from the random non-response situation. The performances of the proposed estimators are compared with linear combinations of two estimators, (a) the sample mean estimator for the fresh sample and (b) the ratio estimator for the matched sample, under complete response. The results are demonstrated through empirical studies, which show the effectiveness of the proposed estimators, and suitable recommendations are made to survey practitioners.
Keywords: successive sampling, random non-response, auxiliary variable, bias, mean square error
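The benchmark estimators named in (a) and (b) are straightforward to state: the plain sample mean for the fresh sample, and for the matched sample the classical ratio estimator ȳ·(X̄/x̄), which borrows strength from the known (e.g. previous-occasion) mean of an auxiliary variable. A small numeric sketch with made-up data:

```python
def ratio_estimate(y, x, x_bar_pop):
    """Classical ratio estimator of the population mean of y:
    y_bar * (X_bar / x_bar), where the auxiliary variable x has a
    known population (or previous-occasion) mean X_bar."""
    y_bar = sum(y) / len(y)
    x_bar = sum(x) / len(x)
    return y_bar * (x_bar_pop / x_bar)

# Made-up matched sample: y is the current-occasion study variable,
# x the auxiliary variable observed on the previous occasion.
y = [12.0, 15.0, 11.0, 14.0]
x = [10.0, 13.0, 9.0, 12.0]
# Sample means are 13 and 11; known auxiliary mean 12 scales the
# estimate up to 13 * 12/11.
print(ratio_estimate(y, x, x_bar_pop=12.0))
```

The modified exponential-type estimators of the paper refine this scheme to remain efficient when some of the matched units fail to respond at random.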
Procedia PDF Downloads 521
5498 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when their algorithm performed better than the best known classical algorithm for Max-Cut. Whilst classical algorithms have since improved and returned to being faster and more efficient, this was a huge milestone for quantum computing, and the work is often used as a benchmarking tool and a foundation for exploring variants of QAOA. This, alongside famous algorithms like Grover's or Shor's, highlights the potential that quantum computing holds. It also points to a real quantum advantage: if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problem that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because they are gradient-free, they should suffer less from barren plateaus. Secondly, given that the algorithm searches the solution space through a population of solutions, it can be parallelized to speed up the search and optimization.
The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using the COBYLA optimizer, a linear-approximation-based method, and in some instances it even produces a better Max-Cut. Whilst the final objective of the work is an algorithm that consistently beats the original QAOA, or its variants, through either speed-ups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024. Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
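An illustrative sketch of the evolutionary side of such a scheme: a simple mutation-based (1+λ) search for a Max-Cut on a toy graph. The graph and all EA settings are hypothetical stand-ins; in the actual hybrid, the EA would evolve QAOA's variational parameters rather than bitstrings, but the population-based, gradient-free search loop is the same.

```python
import random

# Hypothetical 5-node graph; its maximum cut crosses 5 of the 6 edges.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]
n_nodes = 5

def cut_size(bits):
    # Number of edges crossing the partition encoded by the bitstring.
    return sum(bits[u] != bits[v] for u, v in edges)

def evolve(generations=200, lam=8, seed=1):
    # (1+lambda) EA: keep one parent, spawn lam single-bit-flip mutants,
    # and keep the best of the family each generation (ties allowed, so
    # the search can drift across plateaus instead of stalling).
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n_nodes)]
    for _ in range(generations):
        children = []
        for _ in range(lam):
            child = parent[:]
            child[rng.randrange(n_nodes)] ^= 1   # move one node across the cut
            children.append(child)
        parent = max(children + [parent], key=cut_size)
    return parent, cut_size(parent)

best_bits, best_cut = evolve()
print(best_bits, best_cut)
```

Because selection never accepts a worse cut, the objective is monotone in generations, mirroring how an EA wrapped around QAOA's cost evaluation avoids gradient estimates entirely.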
Procedia PDF Downloads 60
5497 The Review of Coiled Tubing Intelligent Sidetracking Steering Technology
Authors: Zhao Xueran, Yang Dong
Abstract:
To address the problem of old oilfield wells being shut down due to low oil recovery, sidetracking has become one of the main technical means of restoring the vitality of old wells. A variety of sidetracking technologies have been researched and developed internationally. Among them, coiled tubing sidetracking of horizontal wells has significant advantages over conventional sidetracking methods: underbalanced-pressure operations; fewer tubing trips; drilling while producing, saving construction costs; less ground equipment and a smaller footprint; and orienter guidance to reduce drilling friction. This paper mainly introduces the steering technology used in coiled tubing intelligent sidetracking at home and abroad, including the orienter and the rotary steerable system. Keywords: sidetracking, coiled tubing, orienter, rotary steering system
Procedia PDF Downloads 168
5496 Intelligent System for Diagnosis Heart Attack Using Neural Network
Authors: Oluwaponmile David Alao
Abstract:
Misdiagnosis has been a major problem in the health sector, and heart attack is one of the diseases with a high level of misdiagnosis recorded on the part of physicians. In this paper, an intelligent system has been developed for the diagnosis of heart attack. A heart attack dataset obtained from the UCI repository has been used; this dataset is made up of thirteen attributes that are vital in the diagnosis of heart disease. The system is built on a multilayer perceptron trained with back-propagation and then simulated as a feed-forward neural network, and a recognition rate of 87% was obtained, which is a good result for the diagnosis of heart attack in the medical field. Keywords: heart attack, artificial neural network, diagnosis, intelligent system
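A minimal sketch of the kind of model described: a one-hidden-layer perceptron trained with back-propagation. The synthetic 13-feature data below is hypothetical and merely stands in for the UCI heart dataset; layer sizes, learning rate, and epoch count are likewise assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

n, d, h = 400, 13, 8                        # samples, input features, hidden units
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)          # synthetic binary "diagnosis" labels

W1 = rng.normal(scale=0.5, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=(h, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    # forward pass
    H = sigmoid(X @ W1 + b1)
    p = sigmoid(H @ W2 + b2).ravel()
    # backward pass: gradients of mean cross-entropy w.r.t. each layer
    dz2 = (p - y)[:, None] / n
    dW2 = H.T @ dz2
    db2 = dz2.sum(axis=0)
    dH = dz2 @ W2.T * H * (1 - H)           # sigmoid derivative at hidden layer
    dW1 = X.T @ dH
    db1 = dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

On real data one would of course hold out a test split and report recognition rate on it, as the 87% figure implies.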
Procedia PDF Downloads 655
5495 Application of the Discrete Rationalized Haar Transform to Distributed Parameter System
Authors: Joon-Hoon Park
Abstract:
In this paper, the rationalized Haar transform is applied to distributed parameter system identification and estimation. A distributed parameter system is a dynamical, mathematical model described by a partial differential equation, and system identification concerns the problem of determining mathematical models from observed data. The Haar function has computational disadvantages because it contains irrational numbers; for this reason, the rationalized Haar function, which contains only rational numbers, is used. The algorithm adopted in this paper is based on the transform and operational matrix of the rationalized Haar function. This approach provides more convenient and efficient computational results. Keywords: distributed parameter system, rationalized Haar transform, operational matrix, system identification
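A hedged sketch (not the paper's identification algorithm) of the transform matrix in question: sampling the rationalized Haar functions on m = 2^J points. Unlike the classical orthonormal Haar matrix, whose entries carry powers of √2, every entry here is the rational value 1, -1, or 0.

```python
import numpy as np

def rationalized_haar_matrix(m):
    # Row 0 is the constant function; row r = 2^j + k is +1 on the first
    # half of the dyadic interval [k/2^j, (k+1)/2^j), -1 on the second
    # half, and 0 elsewhere, with no irrational scaling factor.
    t = (np.arange(m) + 0.5) / m          # midpoints of m subintervals of [0, 1)
    H = np.zeros((m, m))
    H[0, :] = 1.0
    r, j = 1, 0
    while r < m:
        for k in range(2 ** j):
            lo, mid, hi = k / 2**j, (k + 0.5) / 2**j, (k + 1) / 2**j
            H[r, (t >= lo) & (t < mid)] = 1.0
            H[r, (t >= mid) & (t < hi)] = -1.0
            r += 1
        j += 1
    return H

print(rationalized_haar_matrix(4))
```

The rows remain mutually orthogonal (though not orthonormal), which is what makes the operational-matrix machinery carry over from the classical Haar case.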
Procedia PDF Downloads 509
5494 Structural Elucidation of Intact Rough-Type Lipopolysaccharides using Field Asymmetric Ion Mobility Spectrometry and Kendrick Mass Defect Plots
Authors: Abanoub Mikhael, Darryl Hardie, Derek Smith, Helena Petrosova, Robert Ernst, David Goodlett
Abstract:
Lipopolysaccharide (LPS) is a hallmark virulence factor of Gram-negative bacteria. It is a complex, structurally heterogeneous mixture due to variations in the number, type, and position of its simplest units: fatty acids and monosaccharides. Thus, LPS structural characterization by traditional mass spectrometry (MS) methods is challenging. Here, we describe the benefits of field asymmetric ion mobility spectrometry (FAIMS) for the analysis of an intact R-type lipopolysaccharide complex mixture (lipooligosaccharide; LOS). Structural characterization was performed using Escherichia coli J5 (Rc mutant) LOS, a TLR4 agonist widely used in glycoconjugate vaccine research. FAIMS gas-phase fractionation improved the signal-to-noise (S/N) ratio and the number of detected LOS species. Additionally, FAIMS allowed the separation of overlapping isobars, facilitating their tandem MS characterization and unequivocal structural assignments. Beyond the benefits of FAIMS gas-phase fractionation, further sorting of structurally related LOS molecules was accomplished using Kendrick mass defect (KMD) plots. Notably, a custom KMD base unit of [Na-H] created a highly organized KMD plot that allowed the identification of interesting and novel structural differences across the different LOS ion families, i.e., ions with different acylation degrees, oligosaccharide compositions, and chemical modifications. Defining the composition of a single LOS ion by tandem MS, together with the organized KMD plot structural network, was sufficient to deduce the composition of 181 of the 321 LOS species present in the mixture. The combination of FAIMS and KMD plots allowed in-depth characterization of the complex LOS mixture and uncovered a wealth of novel information about its structural variations. Keywords: lipopolysaccharide, ion mobility MS, Kendrick mass defect, tandem mass spectrometry
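A hedged sketch of the Kendrick mass defect arithmetic with a custom [Na-H] base unit, the mechanism that lines up related ion families in such plots. The example masses are hypothetical, not values from the study, and sign conventions for the defect vary between authors.

```python
# Exact monoisotopic masses (Na-23 and H-1).
NA = 22.98976928
H = 1.00782503
BASE_EXACT = NA - H       # exact mass of the [Na-H] exchange unit, ~21.98194
BASE_NOMINAL = 22         # its nominal (integer) mass

def kendrick_mass_defect(m, base_exact=BASE_EXACT, base_nominal=BASE_NOMINAL):
    # Rescale the mass axis so one base unit weighs exactly its nominal mass,
    # then take the distance to the nearest integer.
    km = m * base_nominal / base_exact
    return round(km) - km

# Two hypothetical ions differing by exactly one [Na-H] exchange share a KMD,
# so they fall on the same horizontal line of the KMD plot:
m1 = 1500.0
m2 = m1 + BASE_EXACT
print(kendrick_mass_defect(m1), kendrick_mass_defect(m2))
```

Plotting KMD against nominal Kendrick mass for every detected species is what produces the "organized network" of horizontal families the abstract describes.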
Procedia PDF Downloads 71
5493 Protection of Television Programme Formats in Comparative Law
Authors: Mustafa Arikan, Ibrahim Ercan
Abstract:
In this paper, the protection of television programme formats is investigated in comparative law. Protection of programme formats is studied in French law in the sense of competition law and the CPI. Since the English judicial system differs from the legal systems of Continental Europe, its investigation bears a special significance. The subject is also handled at length in German law; indeed, German law is investigated in detail within the overall framework of the study, where the court decisions and the views in the doctrine are presented in general terms. The American legal system contains many court decisions concerning the subject, and these decisions also present alternative solutions to the problem. Keywords: comparative law, protection of television programme formats, intellectual property, American legal system
Procedia PDF Downloads 331
5492 Minimally Invasive versus Conventional Sternotomy for Aortic Valve Replacement: A Systematic Review and Meta-Analysis
Authors: Ahmed Shaboub, Yusuf Jasim Althawadi, Shadi Alaa Abdelaal, Mohamed Hussein Abdalla, Hatem Amr Elzahaby, Mohamed Mohamed, Hazem S. Ghaith, Ahmed Negida
Abstract:
Objectives: We aimed to compare the safety and outcomes of minimally invasive approaches versus conventional sternotomy for aortic valve replacement. Methods: We conducted a PRISMA-compliant systematic review and meta-analysis. We ran an electronic search of PubMed, Cochrane CENTRAL, Scopus, and Web of Science to identify the relevant published studies. Data were extracted and pooled as standardized mean differences (SMD) or risk ratios (RR) using StataMP version 17 for macOS. Results: Forty-one studies with a total of 15,065 patients were included in this meta-analysis (minimally invasive approaches n=7231 vs. conventional sternotomy n=7834). The pooled effect sizes showed that minimally invasive approaches had a lower mortality rate (RR 0.76, 95%CI [0.59 to 0.99]), shorter intensive care unit and hospital stays (SMD -0.16 and -0.31, respectively), shorter ventilation time (SMD -0.26, 95%CI [-0.38 to -0.15]), less 24-h chest tube drainage (SMD -1.03, 95%CI [-1.53 to -0.53]), and lower rates of RBC transfusion (RR 0.81, 95%CI [0.70 to 0.93]), wound infection (RR 0.66, 95%CI [0.47 to 0.92]), and acute renal failure (RR 0.65, 95%CI [0.46 to 0.93]). However, minimally invasive approaches had longer operative, cross-clamp, and bypass times (SMD 0.47, 95%CI [0.22 to 0.72]; SMD 0.27, 95%CI [0.07 to 0.48]; and SMD 0.37, 95%CI [0.20 to 0.45], respectively). There were no differences between the two groups in blood loss, endocarditis, cardiac tamponade, stroke, arrhythmias, pneumonia, pneumothorax, bleeding reoperation, tracheostomy, hemodialysis, or myocardial infarction (all P>0.05). Conclusion: Current evidence shows higher safety and better operative outcomes with minimally invasive aortic valve replacement compared to the conventional approach. Future RCTs with long-term follow-up are recommended. Keywords: aortic replacement, minimally invasive, sternotomy, mini-sternotomy, aortic valve, meta-analysis
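An illustrative sketch of the arithmetic behind pooled risk ratios such as RR 0.76 (95%CI [0.59 to 0.99]): fixed-effect inverse-variance pooling of log risk ratios. The three studies below are hypothetical, and real meta-analyses (including this one, presumably) would also assess heterogeneity and possibly use a random-effects model.

```python
import math

# (events_treatment, n_treatment, events_control, n_control) per study
studies = [
    (5, 200, 9, 210),
    (3, 150, 6, 160),
    (8, 400, 11, 420),
]

log_rrs, weights = [], []
for a, n1, c, n2 in studies:
    log_rr = math.log((a / n1) / (c / n2))
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # variance of log RR
    log_rrs.append(log_rr)
    weights.append(1.0 / var)               # inverse-variance weight

pooled = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
se = 1.0 / math.sqrt(sum(weights))
rr = math.exp(pooled)
lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Pooling on the log scale and exponentiating back gives the asymmetric confidence interval typical of ratio measures.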
Procedia PDF Downloads 121
5491 Schrödinger Equation with Position-Dependent Mass: Staggered Mass Distributions
Authors: J. J. Peña, J. Morales, J. García-Ravelo, L. Arcos-Díaz
Abstract:
The point canonical transformation method is applied to solve the Schrödinger equation with position-dependent mass. This class of problem has previously been solved for continuous mass distributions. In this work, a staggered mass distribution is proposed for the case of a free particle in an infinite square well potential. The continuity conditions as well as the normalization of the wave function are also considered. The proposal can be used to deal with other kinds of staggered mass distributions in the Schrödinger equation with different quantum potentials. Keywords: free particle, point canonical transformation method, position-dependent mass, staggered mass distribution
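For reference, the position-dependent-mass Schrödinger equation is commonly written with the kinetic term in the BenDaniel-Duke ordering; this is a standard form shown as a sketch only, since the abstract does not state which operator ordering the paper adopts:

```latex
-\frac{\hbar^{2}}{2}\,\frac{d}{dx}\!\left(\frac{1}{m(x)}\,\frac{d\psi(x)}{dx}\right)
+ V(x)\,\psi(x) \;=\; E\,\psi(x)
```

For a staggered (piecewise-constant) mass, this reduces on each segment to the constant-mass equation, with the continuity of $\psi$ and of $\psi'/m$ imposed at each mass discontinuity.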
Procedia PDF Downloads 403
5490 Basket Option Pricing under Jump Diffusion Models
Authors: Ali Safdari-Vaighani
Abstract:
Pricing financial contracts on several underlying assets has received more and more interest as demand for complex derivatives has grown. Option pricing under asset prices involving jump diffusion processes leads to a partial integro-differential equation (PIDE), which is an extension of the Black-Scholes PDE with a new integral term. The aim of this paper is to show how basket option prices in jump diffusion models, mainly the Merton model, can be computed using RBF-based approximation methods. For a test problem, the RBF-PU method is applied to the numerical solution of the partial integro-differential equation arising from two-asset European vanilla put options. The numerical results show the accuracy and efficiency of the presented method. Keywords: basket option, jump diffusion, radial basis function, RBF-PUM
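A hedged sketch of the kind of reference price one would check an RBF-PU solver against: a Monte Carlo price for a two-asset European basket put under Merton jump diffusion. This is not the paper's method, the assets are taken as independent for simplicity, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

S0 = np.array([100.0, 100.0])   # spot prices
w = np.array([0.5, 0.5])        # basket weights
K, r, T = 100.0, 0.03, 1.0      # strike, risk-free rate, maturity
sigma = np.array([0.20, 0.25])  # diffusion volatilities
lam = 0.4                       # jump intensity (per year)
mu_j, sig_j = -0.1, 0.15        # lognormal jump-size parameters
kappa = np.exp(mu_j + 0.5 * sig_j**2) - 1.0   # E[e^J] - 1, drift compensator

n_paths = 200_000
Z = rng.normal(size=(n_paths, 2))
N = rng.poisson(lam * T, size=(n_paths, 2))              # jump counts per asset
J = mu_j * N + sig_j * np.sqrt(N) * rng.normal(size=(n_paths, 2))  # summed jumps

drift = (r - lam * kappa - 0.5 * sigma**2) * T
ST = S0 * np.exp(drift + sigma * np.sqrt(T) * Z + J)     # terminal asset prices
basket = ST @ w
payoff = np.maximum(K - basket, 0.0)
price = np.exp(-r * T) * payoff.mean()
print(f"MC basket put price ~ {price:.2f}")
```

The jump term J is exactly the integral term of the PIDE seen path-wise: conditional on N jumps, their summed log-size is normal with mean N·mu_j and variance N·sig_j².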
Procedia PDF Downloads 354
5489 Exploring Content of Home-Based Care Education After Caesarean Section Provided by Nurse Midwives in Maternity Units
Authors: Mdoe Mwajuma Bakari, Mselle Lilian Teddy, Kibusi Stephen Mathew
Abstract:
Background: Due to the increase in caesarean sections (CS), many women are discharged early to their homes, and women should be aware of how to take care of themselves at home after a CS. Evidence shows that non-uniform health education on home care after CS is provided to post-CS mothers because of the lack of a standard home care guideline, as existing guidelines cover only the care of women in hospital settings, for health care workers. There is a need to develop a post-CS home care guide, and exploring the content of home-based care education after CS provided by nurse midwives will inform the development of this guide. Objective: To explore the content of health education provided by nurse midwives to post-CS mothers about home care after hospital discharge in Dodoma, Tanzania. Methodology: An exploratory qualitative study using in-depth interviews was conducted, with triangulation of data collection methods; 14 nurse midwives working in maternity units and 11 post-CS mothers attending postnatal clinics were recruited. Content analysis was used to generate themes describing the health education information provided by nurse midwives to post-CS mothers about home care after hospital discharge. Results: The study found that nutrition education, maternal and newborn hygiene, and care of the caesarean wound at home were the components of health education provided to post-CS mothers by nurse midwives. Contradictory instructions were found to be given to post-CS mothers. Conclusion: This study reported non-uniform health education provided by nurse midwives on home care after CS.
Although nurse midwives recognize the need to provide health education to post-CS mothers, a home care guideline needs to be developed as a reference for their teaching, to ensure that a uniform package of education is provided to post-CS mothers and to improve their recovery from CS. Keywords: caesarean section, home care, discharge education, home care after caesarean section
Procedia PDF Downloads 99