Search results for: interpolated error shifting
1656 The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study
Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin
Abstract:
Live video streaming is one of the most widely used services among end users, yet it is a big challenge for network operators in terms of quality. The only way to provide excellent Quality of Experience (QoE) to end users is continuous monitoring of the live video stream. For this purpose, several objective algorithms are available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine-tuning the results of objective algorithms. As human perception is considered to be the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on end users. In this paper, we have conducted subjective evaluation tests on a set of video sequences containing the temporal impairment known as frame freezing. Frame freezing is considered both a transmission error and a hardware error that can result in the loss of video frames on the receiving side of a transmission system. In our subjective tests, we have performed tests on videos that contain a single freezing event as well as on videos that contain multiple freezing events. We have recorded our subjective test results for all the videos in order to give a comparison of the available No Reference (NR) objective algorithms. Finally, we have shown the performance of the no-reference algorithms used for objective evaluation of the videos and suggested the algorithm that works best. The outcome of this study shows the importance of QoE and its effect on human perception. The results of the subjective evaluation can serve the purpose of validating objective algorithms.
Keywords: objective evaluation, subjective evaluation, quality of experience (QoE), video quality assessment (VQA)
Procedia PDF Downloads 602
1655 Correction Factors for Soil-Structure Interaction Predicted by Simplified Models: Axisymmetric 3D Model versus Fully 3D Model
Authors: Fu Jia
Abstract:
The effects of soil-structure interaction (SSI) are often studied using axisymmetric three-dimensional (3D) models to avoid the high computational cost of the more realistic, fully 3D models, which require 2-3 orders of magnitude more computer time and storage. This paper analyzes the error and presents correction factors for the system frequency, system damping, and peak amplitude of structural response computed by axisymmetric models embedded in a uniform or layered half-space. The results are compared with those for fully 3D rectangular foundations of different aspect ratios. Correction factors are presented for a range of model parameters, such as fixed-base frequency, structure mass, height and length-to-width ratio, foundation embedment, and soil-layer stiffness and thickness. It is shown that the errors are larger for stiffer, taller and heavier structures, deeper foundations and deeper soil layers. For example, for a stiff structure like the Millikan Library (NS response; length-to-width ratio 1), the error is 6.5% in system frequency, 49% in system damping and 180% in peak amplitude. Analysis of a case study shows that the NEHRP-2015 provisions for reduction of base shear force due to SSI effects may be unsafe for some structures and need revision. The presented correction factor diagrams can be used in practical design and other applications.
Keywords: 3D soil-structure interaction, correction factors for axisymmetric models, length-to-width ratio, NEHRP-2015 provisions for reduction of base shear force, rectangular embedded foundations, SSI system frequency, SSI system damping
Procedia PDF Downloads 267
1654 One vs. Rest and Error Correcting Output Codes Principled Rebalancing Schemes for Solving Imbalanced Multiclass Problems
Authors: Alvaro Callejas-Ramos, Lorena Alvarez-Perez, Alexander Benitez-Buenache, Anibal R. Figueiras-Vidal
Abstract:
This contribution presents a promising formulation that allows the principled binary rebalancing procedures, also known as neutral re-balancing mechanisms in the sense that they do not alter the likelihood ratio, to be extended.
Keywords: Bregman divergences, imbalanced multiclass classification, informed re-balancing, invariant likelihood ratio
Procedia PDF Downloads 218
1653 Energy Related Carbon Dioxide Emissions in Pakistan: A Decomposition Analysis Using LMDI
Authors: Arsalan Khan, Faisal Jamil
Abstract:
The unprecedented increase in anthropogenic gases in recent decades has led to climatic changes worldwide. CO2 emissions are the most important factor responsible for greenhouse gas concentrations. This study decomposes the changes in overall CO2 emissions in Pakistan for the period 1990-2012 using the Log Mean Divisia Index (LMDI). LMDI makes it possible to decompose the changes in CO2 emissions into five factors, namely the activity effect, structural effect, intensity effect, fuel-mix effect, and emissions-factor effect. This paper confirms an upward trend in the overall emissions level of the country during the period. The study finds that the activity effect, structural effect and intensity effect are the three major factors responsible for the changes in overall CO2 emissions in Pakistan, with the activity effect as the largest contributor to the overall change in the emissions level. The structural effect is also adding to CO2 emissions, which indicates that economic activity is shifting towards more energy-intensive sectors. However, the intensity effect has a negative sign, representing energy-efficiency gains, which indicates a good relationship between the economy and the environment. The findings suggest that policy makers should encourage the diversification of output towards more energy-efficient sub-sectors of the economy.
Keywords: energy consumption, CO2 emissions, decomposition analysis, LMDI, intensity effect
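For readers unfamiliar with the technique, the sketch below illustrates the additive LMDI identity the abstract refers to: each factor's contribution is a logarithmic-mean weight multiplied by the log ratio of that factor between the two periods, and the factor effects sum exactly to the total change in emissions. The five-factor split and all numbers are illustrative assumptions, not the study's data.

```python
import math

def log_mean(a: float, b: float) -> float:
    """Logarithmic mean L(a, b) used as the LMDI weight."""
    return a if math.isclose(a, b) else (a - b) / (math.log(a) - math.log(b))

# Illustrative two-period data for one sector (NOT the study's figures):
# C = Q * S * I * M * U  (activity, structure, intensity, fuel mix, emission factor)
base   = {"Q": 100.0, "S": 0.30, "I": 2.0, "M": 0.60, "U": 2.5}
target = {"Q": 140.0, "S": 0.35, "I": 1.8, "M": 0.55, "U": 2.4}

def emissions(f):
    return f["Q"] * f["S"] * f["I"] * f["M"] * f["U"]

C0, CT = emissions(base), emissions(target)
w = log_mean(CT, C0)

# Additive LMDI: effect of each factor = w * ln(x_T / x_0)
effects = {k: w * math.log(target[k] / base[k]) for k in base}

print(f"Total change dC = {CT - C0:.2f}")
for k, v in effects.items():
    print(f"  {k}-effect: {v:+.2f}")
print(f"Sum of effects = {sum(effects.values()):.2f}  (matches dC exactly)")
```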
Procedia PDF Downloads 400
1652 Using Photogrammetric Techniques to Map the Mars Surface
Authors: Ahmed Elaksher, Islam Omar
Abstract:
For many years, the surface of Mars has been a mystery for scientists. Lately, with the help of geospatial data and photogrammetric procedures, researchers have been able to capture some insights about this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images for generating a more accurate and trustworthy surface of Mars. The MOLA data was interpolated using the kriging interpolation technique. Corresponding tie points were digitized from both datasets. These points were employed in co-registering both datasets using GIS analysis tools. In this project, we employed three different 3D-to-2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. A set of tie points was digitized from both datasets. These points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters using least squares adjustment techniques, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed transformation parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were in the range of five meters. Increasing the number of GCPs from six to ten points improves the accuracy of the results by about two and a half meters. Further increasing the number of GCPs did not improve the results significantly. Using the 3D-to-2D transformation parameters provided two to three meters accuracy. Best results were reported using the DLT transformation model. However, increasing the number of GCPs did not have a substantial effect. The results support the use of the DLT model as it provides the required accuracy for ASPRS large scale mapping standards. However, a well-distributed set of GCPs is key to providing such accuracy. The model is simple to apply and does not need substantial computations.
Keywords: Mars, photogrammetry, MOLA, HiRISE
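As an illustration of the last of those models, the sketch below sets up the standard 11-parameter Direct Linear Transformation from 3D object coordinates to 2D image coordinates, solves for the parameters from GCPs by linear least squares, and reports the RMSE on independent check points. The projection coefficients and point coordinates are synthetic placeholders, not HiRISE/MOLA data.

```python
import numpy as np

def dlt_design(XYZ, xy):
    """Stack two rows per point for the 11-parameter DLT (3D object -> 2D image)."""
    rows, rhs = [], []
    for (X, Y, Z), (x, y) in zip(XYZ, xy):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z])
        rhs.extend([x, y])
    return np.asarray(rows), np.asarray(rhs)

def dlt_project(L, XYZ):
    X, Y, Z = XYZ.T
    den = L[8]*X + L[9]*Y + L[10]*Z + 1.0
    x = (L[0]*X + L[1]*Y + L[2]*Z + L[3]) / den
    y = (L[4]*X + L[5]*Y + L[6]*Z + L[7]) / den
    return np.column_stack([x, y])

# Synthetic example (illustrative only): a known projection plus image noise
rng = np.random.default_rng(0)
L_true = np.array([1.2, 0.1, -0.3, 5.0, 0.05, 1.1, 0.2, -2.0, 1e-4, -2e-4, 5e-5])
gcp_xyz = rng.uniform(0, 1000, size=(10, 3))           # ground control points
chk_xyz = rng.uniform(0, 1000, size=(5, 3))            # independent check points
gcp_xy = dlt_project(L_true, gcp_xyz) + rng.normal(0, 0.5, (10, 2))
chk_xy = dlt_project(L_true, chk_xyz)

A, b = dlt_design(gcp_xyz, gcp_xy)
L_hat, *_ = np.linalg.lstsq(A, b, rcond=None)          # least-squares adjustment

resid = dlt_project(L_hat, chk_xyz) - chk_xy
rmse = np.sqrt((resid**2).sum(axis=1).mean())
print(f"Check-point RMSE: {rmse:.3f}")
```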
Procedia PDF Downloads 59
1651 The 2017 Shanghai Model Breaking Stalemate in Chinese Education Reform: A Discussion of China’s Scheduled Experiment in Access to Higher Education Between 2017 and 2020
Authors: Ping Chou, Xiaoyan Zhou
Abstract:
Domestically and internationally, Chinese education has long been criticized for being test-oriented, and in spite of efforts made by the Chinese government, it remains hard to find a solution. This paper intends to look at the situation in a comparatively objective manner and discuss the significance of the Shanghai Model as a newly scheduled experiment for education reform. As a breakthrough, in addition to comprehensive inner-quality evaluation, a small but important step is to be taken in shifting the focus of attention back to students by giving them more freedom in selecting certain courses for aptitude tests for college admission. As the first author of the paper has studied and taught in both Chinese and American colleges and universities, comparisons are made where the situation becomes relevant. The official solution for test-oriented education is to make students well-rounded, but the writers of this paper believe that it is even more important to make the system well-rounded so it can accept a spectrum of diverse individuals with different potential.
Keywords: college admission, education reform, Shanghai model, test-oriented education
Procedia PDF Downloads 338
1650 Predicting Stem Borer Density in Maize Using RapidEye Data and Generalized Linear Models
Authors: Elfatih M. Abdel-Rahman, Tobias Landmann, Richard Kyalo, George Ong’amo, Bruno Le Ru
Abstract:
Maize (Zea mays L.) is a major staple food crop in Africa, particularly in the eastern region of the continent. The maize growing area in Africa spans over 25 million ha, and 84% of rural households in Africa cultivate maize, mainly as a means to generate food and income. Average maize yields in Sub-Saharan Africa are 1.4 t/ha, compared to a global average of 2.5–3.9 t/ha, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In East Africa, yield losses due to stem borers are currently estimated at between 12% and 40% of the total production. The objective of the present study was therefore to predict stem borer larvae density in maize fields using RapidEye reflectance data and generalized linear models (GLMs). RapidEye images were captured for a test site in Kenya (Machakos) in January and in February 2015. Stem borer larva numbers were modeled using GLMs assuming Poisson (Po) and negative binomial (NB) error distributions with a logarithmic link. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were employed to assess model performance using a leave-one-out cross-validation approach. Results showed that NB models outperformed Po ones in all study sites. RMSE and RPD ranged between 0.95 and 2.70, and between 2.39 and 6.81, respectively. Overall, all models performed similarly when using the January and the February image data. We conclude that reflectance data from RapidEye can be used to estimate stem borer larvae density. The developed models could improve decision-making regarding the control of maize stem borers using various integrated pest management (IPM) protocols.
Keywords: maize, stem borers, density, RapidEye, GLM
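The modelling and validation steps described above can be sketched with standard tooling. The snippet below fits Poisson and negative binomial GLMs with a log link, runs leave-one-out cross-validation, and reports RMSE and RPD (here taken as the standard deviation of the observed counts divided by the RMSE). The predictors and counts are simulated stand-ins, and the use of statsmodels is an assumption about tooling, not the authors' implementation.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative synthetic data (NOT the RapidEye study data):
# larvae counts vs. two spectral predictors, e.g. red-edge and NIR reflectance
rng = np.random.default_rng(42)
n = 60
X = rng.uniform(0.1, 0.6, size=(n, 2))
mu = np.exp(1.0 + 2.5 * X[:, 0] - 1.5 * X[:, 1])
y = rng.negative_binomial(n=5, p=5 / (5 + mu))          # overdispersed counts
Xc = sm.add_constant(X)

def loocv_rmse_rpd(family):
    preds = np.empty(n)
    for i in range(n):                                   # leave-one-out cross-validation
        keep = np.arange(n) != i
        fit = sm.GLM(y[keep], Xc[keep], family=family).fit()
        preds[i] = fit.predict(Xc[i:i + 1])[0]
    rmse = np.sqrt(np.mean((y - preds) ** 2))
    rpd = np.std(y, ddof=1) / rmse                       # ratio of prediction to deviation
    return rmse, rpd

for name, fam in [("Poisson", sm.families.Poisson()),
                  ("NegBin", sm.families.NegativeBinomial(alpha=0.2))]:
    rmse, rpd = loocv_rmse_rpd(fam)
    print(f"{name:8s}  RMSE={rmse:.2f}  RPD={rpd:.2f}")
```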
Procedia PDF Downloads 497
1649 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring
Authors: Daniel Fundi Murithi
Abstract:
Data from economic, social, clinical, and industrial studies are in some way incomplete or incorrect due to censoring. Such data may have adverse effects if used in the estimation problem. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and the Newton-Raphson (NR) algorithms. These algorithms are compared because they iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that in most sets of simulation cases, the estimates obtained using the Expectation-Maximization algorithm had small biases, small variances, narrower confidence interval widths, and small root mean squared errors compared to those generated via the Newton-Raphson (NR) algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization (EM) algorithm performs better than the Newton-Raphson (NR) algorithm in all cases under the progressive type-II censoring scheme.
Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring
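To make the estimation target concrete, the sketch below writes down the progressive type-II censored log-likelihood for one common parameterization of the two-parameter Rayleigh distribution, f(x; µ, λ) = 2λ(x − µ)exp(−λ(x − µ)²), and maximizes it numerically by profiling the closed-form λ over µ. This generic profile-likelihood route is an assumption for illustration only; it is not the EM or Newton-Raphson derivation the authors compare, and the data and removal scheme are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed pdf: f(x; mu, lam) = 2*lam*(x - mu)*exp(-lam*(x - mu)**2), x > mu
# Progressive type-II censored log-likelihood:
#   l(mu, lam) = m*ln(2*lam) + sum(ln(x_i - mu)) - lam * sum((1 + R_i)*(x_i - mu)**2)
# For fixed mu the MLE of lam is closed form, so we profile the likelihood over mu.

x = np.array([1.31, 1.44, 1.52, 1.63, 1.71, 1.85, 2.02, 2.25])   # observed failures (illustrative)
R = np.array([1, 0, 2, 0, 0, 1, 0, 3])                            # units removed at each failure
m = len(x)

def neg_profile_loglik(mu):
    if mu >= x.min():
        return np.inf
    s = np.sum((1 + R) * (x - mu) ** 2)
    lam = m / s                                                    # closed-form lambda given mu
    return -(m * np.log(2 * lam) + np.sum(np.log(x - mu)) - lam * s)

res = minimize_scalar(neg_profile_loglik, bounds=(0.0, x.min() - 1e-6), method="bounded")
mu_hat = res.x
lam_hat = m / np.sum((1 + R) * (x - mu_hat) ** 2)
print(f"mu_hat = {mu_hat:.4f}, lambda_hat = {lam_hat:.4f}")
```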
Procedia PDF Downloads 163
1648 Modified Lot Quality Assurance Sampling (LQAS) Model for Quality Assessment of Malaria Parasite Microscopy and Rapid Diagnostic Tests in Kano, Nigeria
Authors: F. Sarkinfada, Dabo N. Tukur, Abbas A. Muaz, Adamu A. Yahuza
Abstract:
Appropriate Quality Assurance (QA) of parasite-based diagnosis of malaria to justify Artemisinin-based Combination Therapy (ACT) is essential for Malaria Programmes. In Low and Middle Income Countries (LMIC), resource constraints appear to be a major challenge in implementing the conventional QA system. We designed and implemented a modified LQAS model for QA of malaria parasite (MP) microscopy and RDT in a State Specialist Hospital (SSH) and a University Health Clinic (UHC) in Kano, Nigeria. The capacities of both facilities for MP microscopy and RDT were assessed before implementing the modified LQAS over a period of 3 months. Quality indicators comprising the quality of blood films and staining, MP positivity rates, concordance rates, error rates (in terms of false positives and false negatives), sensitivity and specificity were monitored and evaluated. Seventy-one percent (71%) of the basic requirements for malaria microscopy were available in both facilities, with an absence of certified microscopists, SOPs and Quality Assurance mechanisms. A daily average of 16 to 32 blood samples was tested, with a blood film staining quality of >70% recorded in both facilities. Using microscopy, the MP positivity rates were 50.46% and 19.44% in SSH and UHC respectively, while the MP positivity rates were 45.83% and 22.78% in SSH and UHC when RDT was used. Higher concordance rates of 88.90% and 93.98% were recorded in SSH and UHC respectively using microscopy, while lower rates of 74.07% and 80.58% in SSH and UHC were recorded when RDT was used. In both facilities, error rates were higher when RDT was used than with microscopy. Sensitivity and specificity were higher when microscopy was used (95% and 84% in SSH; 94% in UHC) than when RDT was used (72% and 76% in SSH; 78% and 81% in UHC). It could be feasible to implement an integrated QA model for MP microscopy and RDT using modified LQAS in Malaria Control Programmes in Low and Middle Income Countries that might have resource constraints for parasite-based diagnosis of malaria to justify ACT treatment.
Keywords: malaria, microscopy, quality assurance, RDT
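The quality indicators monitored in the study are all simple functions of a 2×2 comparison against the reference result, as in the minimal sketch below. The counts are invented for illustration and do not come from the Kano facilities.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 indicators against the reference standard (e.g. expert microscopy)."""
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "concordance": (tp + tn) / total,        # overall agreement rate
        "false_pos_rate": fp / total,            # one way to express error rates
        "false_neg_rate": fn / total,
        "positivity_rate": (tp + fp) / total,
    }

# Illustrative counts only (not the study data)
print(diagnostic_metrics(tp=45, fp=9, fn=4, tn=50))
```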
Procedia PDF Downloads 226
1647 The Role of E-Learning in Science, Technology, Engineering, and Math Education
Authors: Annette McArthur
Abstract:
The traditional model of teaching and learning, where ICT sits as a separate entity, is not a model for a 21st century school. It is imperative that teaching and learning embrace technological advancements. The challenge in schools lies in shifting the mindset of teachers so they see ICT as integral to their teaching, learning and curriculum rather than as a separate E-Learning curriculum stream. This research project investigates how the effective, planned, intentional integration of ICT into a STEM curriculum can enable this shift in teacher mindset. The project incorporated:
• Developing a professional coaching relationship with key STEM teachers.
• Facilitating staff professional development involving student-centered, project-based learning pedagogy in the context of a STEM curriculum.
• Facilitating staff professional development involving digital literacy.
• Establishing a professional community where collaboration, sharing and reflection were part of the culture of the STEM community.
• Facilitating classroom support for the effective delivery of an innovative STEM curriculum.
• Developing STEM learning spaces where technologies were used to empower and engage learners to participate in student-centered, project-based learning.
Keywords: e-learning, ICT, project based learning, STEM
Procedia PDF Downloads 301
1646 Modeling and Temperature Control of Water-cooled PEMFC System Using Intelligent Algorithm
Authors: Chen Jun-Hong, He Pu, Tao Wen-Quan
Abstract:
Proton exchange membrane fuel cell (PEMFC) is the most promising future energy source owing to its low operating temperature, high energy efficiency, high power density, and environmental friendliness. In this paper, a comprehensive control-oriented PEMFC system model is developed in the Matlab/Simulink environment, which includes the hydrogen supply subsystem, air supply subsystem, and thermal management subsystem. In addition, an Improved Artificial Bee Colony (IABC) algorithm is used to identify the parameters of the PEMFC semi-empirical equations, making the maximum relative error between the simulation data and the experimental data less than 0.4%. Operating temperature is essential for a PEMFC; both high and low temperatures are disadvantageous. In the thermal management subsystem, the water pump and fan are both controlled with PID controllers to maintain the appropriate operating temperature of the PEMFC for safe and efficient operation. To further improve the control effect, fuzzy control is introduced to optimize the PID controller of the pump, and a Radial Basis Function (RBF) neural network is introduced to optimize the PID controller of the fan. The results demonstrate that Fuzzy-PID and RBF-PID can achieve a better control effect, with a 22.66% decrease in the Integral Absolute Error criterion (IAE) of T_st (temperature of the PEMFC) and a 77.56% decrease in the IAE of T_in (temperature of the inlet cooling water) compared with traditional PID. Finally, a novel thermal management structure is proposed, which uses the cooling air passing through the main radiator to continue cooling the secondary radiator. In this thermal management structure, the parasitic power dissipation can be reduced by 69.94%, and the control effect can be improved with a 52.88% decrease in the IAE of T_in under the same controller.
Keywords: PEMFC system, parameter identification, temperature control, Fuzzy-PID, RBF-PID, parasitic power
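As a toy illustration of the temperature loop and the IAE criterion used to rank the controllers, the sketch below closes a basic discrete PID around a first-order stack-temperature model and accumulates IAE = Σ|e|·Δt over the run. The plant constants and gains are assumed for demonstration and are unrelated to the paper's identified PEMFC model or its Fuzzy-PID/RBF-PID designs.

```python
# First-order stack-temperature model (all constants assumed for illustration):
#   dT/dt = (1/tau) * (Q_heat + T_amb + K*u - T),  u in [0, 1] is the coolant pump command
dt, tau = 0.1, 30.0
T_amb, Q_heat, K = 25.0, 50.0, -20.0      # full cooling (u = 1) removes 20 C at steady state
T_set = 65.0
Kp, Ki, Kd = 0.8, 0.02, 0.5               # PID gains, hand-tuned for this sketch

T = 80.0
integ, e_prev, iae = 0.0, 80.0 - T_set, 0.0
for _ in range(6000):                      # 10 simulated minutes
    e = T - T_set                          # positive error -> stack too hot -> more cooling
    integ += e * dt
    deriv = (e - e_prev) / dt
    u = min(max(Kp * e + Ki * integ + Kd * deriv, 0.0), 1.0)
    e_prev = e
    T += dt / tau * (Q_heat + T_amb + K * u - T)
    iae += abs(T - T_set) * dt             # Integral Absolute Error criterion

print(f"IAE over the run: {iae:.1f} K*s, final temperature: {T:.2f} C")
```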
Procedia PDF Downloads 86
1645 The Analysis of Gizmos Online Program as Mathematics Diagnostic Program: A Story from an Indonesian Private School
Authors: Shofiayuningtyas Luftiani
Abstract:
Some private schools in Indonesia have started integrating the online program Gizmos into the teaching-learning process. Gizmos was developed to supplement the existing curriculum by integrating it into the instructional programs. The program has features that use inquiry-based simulation, in which students conduct exploration using a worksheet while teachers use the teacher guidelines to direct and assess students’ performance. In this study, the discussion about Gizmos highlights its features as an assessment medium for mathematics learning for secondary school students. The discussion is based on a case study and literature review from the Indonesian context. The purpose of applying Gizmos as an assessment medium refers to diagnostic assessment. As a part of the diagnostic assessment, the teachers review the student exploration sheet, analyze particularly the students’ difficulties, and consider the findings in planning the future learning process. This assessment becomes important since the teacher needs data about students’ persistent weaknesses. Additionally, the program also helps to build students’ understanding through its interactive simulation. Currently, the assessment over-emphasizes the students’ answers in the worksheet against the provided answer keys, while students perform their skills in translating the question, doing the simulation and answering the question. However, the assessment should involve multiple perspectives on and sources of students’ performance, since the teacher should adjust the instructional programs to the complexity of students’ learning needs and styles. Consequently, an approach to improving the assessment components is selected to challenge the current assessment. The purpose of this challenge is to involve not only cognitive diagnosis but also the analysis of skills and errors. Concerning the selected setting for this diagnostic assessment, which combines cognitive diagnosis, skills analysis and error analysis, the teachers should create an assessment rubric. The rubric plays an important role as a guide providing a set of criteria for the assessment. Without a precise rubric, the teacher may document and follow up on data about students at risk of failure ineffectively. Furthermore, teachers who employ the Gizmos program as a diagnostic assessment might encounter some obstacles. Based on the condition of assessment in the selected setting, the obstacles involve time constraints, reluctance towards a higher teaching burden, and students’ behavior. Consequently, the teacher who chooses Gizmos with those approaches has to plan, implement and evaluate the assessment. The main point of this assessment is not the result of the students’ worksheet. Rather, the diagnostic assessment is a two-stage process: prompting and then effectively following up on both individual weaknesses and those of the learning process. Ultimately, the discussion of Gizmos as a medium of diagnostic assessment refers to the effort to improve the mathematical learning process.
Keywords: diagnostic assessment, error analysis, Gizmos online program, skills analysis
Procedia PDF Downloads 183
1644 Design and Simulation of an Inter-Satellite Optical Wireless Communication System Using Diversity Techniques
Authors: Sridhar Rapuru, D. Mallikarjunreddy, Rajanarendra Sai
Abstract:
In this era of the internet, users need access to any multimedia file at any time with superior quality. To achieve this goal, it is very important to have a good network without any interruptions between the satellites and the various earth stations. For that purpose, a high speed inter-satellite optical wireless communication (IsOWC) system is designed with space and polarization diversity techniques. IsOWC offers high bandwidth, small size and low power requirements, and is affordable compared with present microwave satellite systems. To improve efficiency and reduce propagation delay, an inter-satellite link is established between the satellites. Highly accurate tracking systems are required to establish a reliable connection between the satellites, as they have their own orbits. The only disadvantage of this IsOWC system is that the laser beam width is narrower than in RF, so a highly accurate tracking system is needed to meet this requirement. The satellite uses the 'ephemerides data' for rough pointing and a tracking system for fine pointing to the other satellite. In the proposed IsOWC system, laser light is used as the wireless link between the source and destination, and free space acts as the channel to carry the message. The proposed system will be designed, simulated and analyzed for 6000 km, with an improvement in data rate over previously existing systems. The performance parameters of the system are the Q-factor, eye opening, bit error rate, etc. The proposed inter-satellite optical wireless communication system design using diversity techniques finds huge scope of application in future-generation communication purposes.
Keywords: inter-satellite optical wireless system, space and polarization diversity techniques, line of sight, bit error rate, Q-factor
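Two of the performance parameters listed above are directly related: for an NRZ/OOK link with Gaussian noise, the bit error rate follows from the Q-factor as BER = ½·erfc(Q/√2). The short sketch below tabulates this relation for a few Q values; the values are generic, not results from the simulated 6000 km link.

```python
import math

def ber_from_q(q_linear: float) -> float:
    """Standard OOK/NRZ relation between Q-factor and bit error rate: BER = 0.5*erfc(Q/sqrt(2))."""
    return 0.5 * math.erfc(q_linear / math.sqrt(2))

for q in (3, 6, 7, 8):
    print(f"Q = {q:>2} ({20 * math.log10(q):5.2f} dB)  ->  BER ≈ {ber_from_q(q):.3e}")
```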
Procedia PDF Downloads 270
1643 Improving the Accuracy of Stress Intensity Factors Obtained by Scaled Boundary Finite Element Method on Hybrid Quadtree Meshes
Authors: Adrian W. Egger, Savvas P. Triantafyllou, Eleni N. Chatzi
Abstract:
The scaled boundary finite element method (SBFEM) is a semi-analytical numerical method, which introduces a scaling center in each element’s domain, thus transitioning from a Cartesian reference frame to one resembling polar coordinates. Consequently, an analytical solution is achieved in the radial direction, implying that only the boundary need be discretized. The only limitation imposed on the resulting polygonal elements is that they remain star-convex. Further, arbitrary p- or h-refinement may be applied locally in a mesh. The polygonal nature of SBFEM elements has been exploited in quadtree meshes to alleviate all issues conventionally associated with hanging nodes. Furthermore, since in 2D this results in only 16 possible cell configurations, these are precomputed in order to accelerate the forward analysis significantly. Any cells that are clipped to accommodate the domain geometry must be computed conventionally. However, since SBFEM permits polygonal elements, significantly coarser meshes at comparable accuracy levels are obtained when compared with conventional quadtree analysis, further increasing the computational efficiency of this scheme. The generalized stress intensity factors (gSIFs) are computed by exploiting the semi-analytical solution in the radial direction. This is initiated by placing the scaling center of the element containing the crack at the crack tip. Taking an analytical limit of this element’s stress field as it approaches the crack tip delivers an expression for the singular stress field. By applying the problem-specific boundary conditions, the geometry correction factor is obtained, and the gSIFs are then evaluated based on their formal definition. Since the SBFEM solution is constructed as a power series, not unlike mode superposition in FEM, the two modes contributing to the singular response of the element can be easily identified in post-processing. Compared to the extended finite element method (XFEM), this approach is highly convenient, since neither enrichment terms nor a priori knowledge of the singularity is required. Computation of the gSIFs by SBFEM permits exceptional accuracy; however, when combined with hybrid quadtrees employing linear elements, this does not always hold. Nevertheless, it has been shown that crack propagation schemes are highly effective even given very coarse discretizations, since they only rely on the ratio of mode one to mode two gSIFs. The absolute values of the gSIFs may still be subject to large errors. Hence, we propose a post-processing scheme, which minimizes the error resulting from the approximation space of the cracked element, thus limiting the error in the gSIFs to the discretization error of the quadtree mesh. This is achieved by h- and/or p-refinement of the cracked element, which increases the number of modes present in the solution. The resulting numerical description of the element is highly accurate, with the main error source now stemming from its boundary displacement solution. Numerical examples show that this post-processing procedure can significantly improve the accuracy of the computed gSIFs with negligible computational cost, even on coarse meshes resulting from hybrid quadtrees.
Keywords: linear elastic fracture mechanics, generalized stress intensity factors, scaled finite element method, hybrid quadtrees
Procedia PDF Downloads 146
1642 Preliminary WRF SFIRE Simulations over Croatia during the Split Wildfire in July 2017
Authors: Ivana Čavlina Tomašević, Višnjica Vučetić, Maja Telišman Prtenjak, Barbara Malečić
Abstract:
The Split wildfire on the mid-Adriatic Coast in July 2017 is one of the most severe wildfires in Croatian history, given its size and unexpected fire behavior, and it is used in this research as a case study to run the Weather Research and Forecasting Spread Fire (WRF SFIRE) model. This coupled fire-atmosphere model was successfully run for the first time ever for a Croatian wildfire case. Verification of the coupled simulations was possible by using the detailed reconstruction of the Split wildfire. Specifically, precise information on ignition time and location, together with mapped fire progressions and spotting within the first 30 hours of the wildfire, was used both to initialize the simulations and to evaluate the model’s ability to simulate the fire’s propagation and final fire scar. The preliminary simulations were obtained using high-resolution vegetation and topography data for the fire area, additionally interpolated to a fire-grid spacing of 33.3 m. The results demonstrated that the WRF SFIRE model is able to work with real data from Croatia and produce adequate results for forecasting fire spread. As the model in its setup has the ability to include or exclude the energy fluxes between the fire and the atmosphere, this was used to investigate possible fire-atmosphere interactions during the Split wildfire. Finally, the successfully coupled simulations provided the first numerical evidence that a wildfire from the Adriatic coast region can modify the dynamical structure of the surrounding atmosphere, which agrees with observations from the fire grounds. This study has demonstrated that the WRF SFIRE model has the potential for operational application in Croatia with more accurate fire predictions in the future, which could be accomplished by inserting higher-resolution input data into the model without interpolation. Possible uses for fire management in Croatia include prediction of fire spread and intensity that may vary under changing weather conditions, available fuels and topography, planning effective and safe deployment of ground and aerial firefighting forces, preventing wildland-urban interface fires, effective planning of evacuation routes, etc. In addition, the WRF SFIRE model results from this research demonstrate that the model is important for fire weather research and education purposes, in order to better understand this hazardous phenomenon that occurs in Croatia.
Keywords: meteorology, agrometeorology, fire weather, wildfires, coupled fire-atmosphere model
Procedia PDF Downloads 92
1641 Tigers in Film: Past, Present and Future Perspectives
Authors: Farah Benbouabdellah
Abstract:
This research examines the shifting portrayal of tigers in visual media, particularly cinema, to explore how cultural, political, and ecological perspectives influence animal symbolism. Through an interdisciplinary approach combining film studies, anthropology, art history, and material culture, this study investigates tiger representations in static and moving images, from early art forms to 20th-century films. By analysing themes of colonialism, identity, and ecological change, the research highlights how film has perpetuated, transformed, and politicised tiger imagery across contexts. With a comprehensive focus on Indian and Western cinema, this study illustrates the tiger's enduring role as a cultural symbol and its impact on visual narratives, exploring techniques in cinematography, audience reception, and narratives that helped shape the animal's iconic status. This research aims to provide a comprehensive view of tiger representations in media, addressing the intersection of animal symbolism and sociocultural values across historical and regional landscapes.
Keywords: tiger representation, visual media, anthropology media, material culture, film studies, comparative analysis
Procedia PDF Downloads 11
1640 Building Energy Modeling for Networks of Data Centers
Authors: Eric Kumar, Erica Cochran, Zhiang Zhang, Wei Liang, Ronak Mody
Abstract:
The objective of this article was to create a modelling framework that exposes the marginal costs of shifting workloads across geographically distributed data-centers. Geographical distribution of internet services helps to optimize their performance for localized end users, with lower communication times and increased availability. However, due to geographical and temporal effects, the physical embodiments of a service's data center infrastructure can vary greatly. In this work, we first identify that the sources of variance in the physical infrastructure primarily stem from local weather conditions, specific user traffic profiles, energy sources, and the types of IT hardware available at the time of deployment. Second, we create a traffic simulator that indicates the IT load at each data-center in the set as an approximation of the user traffic profiles. Third, we implement a framework that quantifies the global-level energy demands using building energy models and the traffic profiles. The results of the model provide a time series of energy demands that can be used for further life cycle analysis of internet services.
Keywords: data-centers, energy, life cycle, network simulation
Procedia PDF Downloads 148
1639 Category-Base Theory of the Optimum Signal Approximation Clarifying the Importance of Parallel Worlds in the Recognition of Human and Application to Secure Signal Communication with Feedback
Authors: Takuro Kida, Yuichi Kida
Abstract:
We present the mathematical basis of a new algorithmic trend that treats a historical reason for continuous discrimination in the world, as well as its solution, by introducing new concepts of a parallel world that includes an invisible set of errors as its companion. With respect to a matrix operator-filter bank in which the matrix operator-analysis-filter bank H and the matrix operator-sampling-filter bank S are given, we first introduce the detailed algorithm to derive the optimum matrix operator-synthesis-filter bank Z that minimizes, at the same time, all the worst-case measures of the matrix operator-error-signals E(ω) = F(ω) − Y(ω) between the matrix operator-input-signals F(ω) and the matrix operator-output signals Y(ω) of the matrix operator-filter bank. Further, feedback is introduced into the above approximation theory, and it is indicated that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge of signal prediction. Secondly, the concept of category from the field of mathematics is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to the set-theoretic consideration of human recognition. Based on this discussion, it is shown naturally why the narrow perception that tends to create isolation shows an apparent advantage in the short term and, often, why such narrow thinking becomes intimate with discriminatory action in a human group. Throughout these considerations, it is presented that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception where we share the set of invisible error signals, including the words and the consciousness of both worlds.
Keywords: signal prediction, pseudo inverse matrix, artificial intelligence, conditional optimization
Procedia PDF Downloads 158
1638 Infernal Affairs (Hong Kong) versus Double Face (Japan): Remaking and Context
Authors: Roman Kusaiko
Abstract:
For decades, remaking has been one of the film industry’s main practices, and it has become especially prominent in recent years. The latest geopolitical developments, though, are becoming a new challenge for filmmakers with regard to cultural landscapes and contextual differences. Deglobalization may also affect transnational remaking practices. These upcoming challenges can be addressed through the analysis of contemporary academic thought, primarily from adaptation and film studies and their understanding of the issues of transmediality and how it affects film remaking. However, the analysis would be insufficient without conducting case studies. This paper is part of broader research on transnational remaking practices and their cultural and contextual specifics. The paper aims to understand whether shifting the medium affects remaking as a critical category and presents case studies of the popular Hong Kong motion picture Infernal Affairs and its transition into the Japanese remake Double Face. The analysis of their contextual distinctions will lead to a correct categorization of transnational remakes, allowing scholars and filmmakers to better understand existing remaking practices and whether they affect the final result.
Keywords: cinema, context, culture, films, remaking, transmediality
Procedia PDF Downloads 97
1637 Simplified INS/GPS Integration Algorithm in Land Vehicle Navigation
Authors: Othman Maklouf, Abdunnaser Tresh
Abstract:
Land vehicle navigation is a subject of great interest today. The Global Positioning System (GPS) is the main navigation system for positioning in such applications. GPS alone is incapable of providing continuous and reliable positioning because of its inherent dependency on external electromagnetic signals. Inertial Navigation (INS) is the use of inertial sensors to determine the position and orientation of a vehicle. The availability of low-cost Micro-Electro-Mechanical-System (MEMS) inertial sensors is now making it feasible to develop an INS using an inertial measurement unit (IMU). An INS has unbounded error growth, since the error accumulates at each step. Usually, GPS and INS are integrated with a loosely coupled scheme. With the development of low-cost MEMS inertial sensors and GPS technology, integrated INS/GPS systems are beginning to meet the growing demands for lower cost, smaller size, and seamless navigation solutions for land vehicles. Although MEMS inertial sensors are very inexpensive compared to conventional sensors, their cost (especially that of MEMS gyros) is still not acceptable for many low-end civilian applications (for example, commercial car navigation or personal location systems). An efficient way to reduce the expense of these systems is to reduce the number of gyros and accelerometers, and therefore to use a partial IMU (ParIMU) configuration. For land vehicle use, the most important sensors are the vertical gyro, which senses the heading of the vehicle, and two horizontal accelerometers for determining the velocity of the vehicle. This paper presents a field experiment for a low-cost strapdown (ParIMU)/GPS combination, with data post-processing for the determination of the 2-D components of position (trajectory), velocity and heading. In the present approach, we have neglected earth rotation and gravity variations because of the poor gyroscope sensitivities of our low-cost IMU (Inertial Measurement Unit) and because of the relatively small area of the trajectory.
Keywords: GPS, IMU, Kalman filter, materials engineering
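For orientation, the fragment below sketches the loosely coupled idea in its simplest possible form: a 2-D constant-velocity Kalman filter whose prediction step stands in for inertial dead reckoning and whose measurement update ingests noisy GPS positions. The state layout, noise levels and trajectory are assumptions for illustration, not the filter used in the field experiment.

```python
import numpy as np

# state: [x, y, vx, vy]; prediction stands in for INS dead reckoning, GPS supplies position fixes
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)     # GPS measures position only
Q = np.diag([0.05, 0.05, 0.1, 0.1])           # process noise (stands in for IMU drift, assumed)
R = np.diag([4.0, 4.0])                       # GPS noise, about 2 m standard deviation

x = np.zeros(4)
P = np.eye(4) * 10.0
rng = np.random.default_rng(1)

for k in range(60):                           # vehicle driving east at roughly 5 m/s
    # predict (dead-reckoning step)
    x = F @ x
    P = F @ P @ F.T + Q
    # update with a noisy GPS fix
    z = np.array([5.0 * (k + 1), 0.0]) + rng.normal(0.0, 2.0, 2)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

print("estimated state [x, y, vx, vy]:", np.round(x, 2))
```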
Procedia PDF Downloads 422
1636 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space
Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari
Abstract:
Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory for all release testing of active pharmaceutical ingredients (API). Generally, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol and acetic acid) in all these 7 amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention-time variation and bad peak shape of the acetic acid peaks were identified, due to the reaction of acetic acid with the stationary phase (cyanopropyl dimethyl polysiloxane) of the column and the dissociation of acetic acid in water (if used as diluent) while applying the temperature gradient. Therefore, dimethyl sulfoxide was used as the diluent to avoid these issues. However, most of the methods published for acetic acid quantification by GC-HS use a derivatisation technique to protect acetic acid. As per the compendia, a risk-based approach was selected as appropriate to determine the degree and extent of the validation process to assure the fitness of the procedure. Therefore, the total error concept was selected to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower level (quantitation limit level) and ±30% for the other levels, with a 95% confidence interval (risk profile 5%). The method was developed using a DB-WAXetr column manufactured by Agilent (internal diameter 530 µm, film thickness 2.0 µm, length 30 m). A constant flow of 6.0 mL/min of helium in constant make-up mode was selected as the carrier gas. The present method is simple, rapid, and accurate, and is suitable for rapid analysis of isopropyl alcohol, ethanol, methanol and acetic acid in amino acids. The range of the method is 50 ppm to 200 ppm for isopropyl alcohol, 50 ppm to 3000 ppm for ethanol, 50 ppm to 400 ppm for methanol and 100 ppm to 400 ppm for acetic acid, which covers the specification limits provided in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of validation were found to be satisfactory. Therefore, this method can be used for testing of residual solvents in amino acid drug substances.
Keywords: amino acid, head space, gas chromatography, total error
Procedia PDF Downloads 150
1635 GNSS-Aided Photogrammetry for Digital Mapping
Authors: Muhammad Usman Akram
Abstract:
This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site that is to be used in future planning and development (P&D), or for further examination, exploration, research and inspection. Surveying and mapping hard-to-access and hazardous areas is very difficult using traditional techniques and methodologies; it is also time-consuming, labor-intensive, and yields lower precision with limited data. In comparison, the advanced technique requires less manpower and provides more precise output with a wide variety of data sets. In this experiment, an aerial photogrammetry technique is used in which a UAV flies over an area, captures geocoded images, and produces a three-dimensional model (3-D model). The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. Furthermore, the raw data collected by the UAV and DGPS are processed in various digital image processing programs and computer-aided design software. As output, we obtain a dense point cloud, a Digital Elevation Model (DEM) and an orthophoto. The imagery is converted into geospatial data by digitizing over the orthophoto; the DEM is further converted into a Digital Terrain Model (DTM) for contour generation or a digital surface. As a result, we obtain a digital map of the area to be surveyed. In conclusion, we compared the processed data with exact measurements taken on site. The error is accepted if it does not breach the survey accuracy limits set by the concerned institutions.
Keywords: photogrammetry, post processing kinematics, real time kinematics, manual data inquiry
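One of the flight parameters mentioned above, the ground sampling distance, follows directly from the camera geometry and flying height; the small sketch below applies the usual nadir-camera relation GSD = (sensor width × flying height) / (focal length × image width in pixels). The camera values are assumed for illustration and are not those of the survey.

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm, height_m, image_width_px):
    """Nominal GSD (m/pixel) for a nadir-looking frame camera."""
    return (sensor_width_mm * height_m) / (focal_length_mm * image_width_px)

# Illustrative camera/flight values (assumed, not from the study)
gsd = ground_sampling_distance(sensor_width_mm=13.2, focal_length_mm=8.8,
                               height_m=100.0, image_width_px=5472)
print(f"GSD ≈ {gsd * 100:.2f} cm/pixel")
```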
Procedia PDF Downloads 33
1634 A Single Loop Repetitive Controller for a Four Legs Matrix Converter Unit
Authors: Wesam Rohouma
Abstract:
The aim of this paper is to investigate the use of a repetitive controller to regulate the output voltage of a three-phase four-leg matrix converter for an aircraft ground power supply unit. The proposed controller improves the steady-state error and provides good regulation under different loading conditions. Simulation results of a 7.5 kW converter are presented to verify the operation of the proposed controller.
Keywords: matrix converter, power electronics, controller, regulation
Procedia PDF Downloads 1506
1633 Influences of Emerging Beauty Industry for Men on Construction of Masculinities of Male Students of Dhaka City
Authors: Abu Saleh Mohammad Sowad
Abstract:
Throughout history, a muscular and strong male body has been used to promulgate masculinity; for physically representing supreme manliness, there were not many other options. This idealized male figure was proliferated mainly to spread the notion of male superiority in relation to power and to give a strong base to the social construction of masculinity. This study aims to disclose the perception of the attributes of masculinity among the male students of Dhaka city with regard to male beautification. It is an attempt to unveil young men’s perspectives regarding their masculinities and beauty. Until the very recent past, beauty was always seen as a solely feminine trait in Bangladeshi society. Historically, men have always been assumed to be the ambassadors of roughness, but in recent times the emergence of fashion-conscious men can be seen, and they are slowly occupying a considerable position in society. The present study attempts to bring out the way in which this changing trend of male beauty is perceived among the male students of Dhaka city. What could be the ideologies of these young men who are becoming involved with it? What is influencing them to be part of an arena which, to a great extent, is still considered a female domain? Is their perception of the construction of masculinity shifting away from the so-called idealized masculinity? The study tries to find out the answers.
Keywords: masculinity, male beauty, Bangladesh, identity, body
Procedia PDF Downloads 474
1632 Comparison of Two Neural Networks to Model Margarine Age and Predict Shelf-Life Using MATLAB
Authors: Phakamani Xaba, Robert Huberts, Bilainu Oboirien
Abstract:
The present study was aimed at developing and comparing two neural-network-based predictive models to predict the shelf-life/product age of South African margarine using free fatty acid (FFA), water droplet size (D3.3), water droplet distribution (e-sigma), moisture content, peroxide value (PV), anisidine value (AnV) and total oxidation (totox) value as input variables to the model. Brick margarine products with ages ranging from fresh, i.e. week 0, to week 47 were sourced. The brick margarine products, which had been stored at 10 and 25 °C, were characterized. JMP and MATLAB models to predict shelf-life/margarine age were developed, and their performances were compared. The key performance indicators used to evaluate the model performances were the correlation coefficient (CC), root mean square error (RMSE), and mean absolute percentage error (MAPE) relative to the actual data. The MATLAB-developed model showed a better performance in all three performance indicators: its correlation coefficient was 99.86% versus 99.74% for the JMP model, the RMSE was 0.720 compared to 1.005, and the MAPE was 7.4% compared to 8.571%. The MATLAB model was selected as the most accurate, and the number of hidden neurons/nodes was then optimized to develop a single predictive model. The optimized MATLAB model with 10 neurons showed a better performance compared to the models with 1 and 5 hidden neurons. The developed models can be used by margarine manufacturers, food research institutions, researchers, etc., to predict shelf-life/margarine product age, optimize the addition of antioxidants, extend the shelf-life of products and proactively troubleshoot problems related to changes that have an impact on the shelf-life of margarine, without conducting expensive trials.
Keywords: margarine shelf-life, predictive modelling, neural networks, oil oxidation
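The three indicators used to compare the JMP and MATLAB models are straightforward to reproduce; the snippet below computes CC, RMSE and MAPE for a set of actual versus predicted product ages. The ages shown are invented placeholders, not the study's measurements.

```python
import numpy as np

def model_metrics(actual, predicted):
    """Correlation coefficient (%), RMSE and MAPE (%) between actual and predicted values."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    cc = np.corrcoef(actual, predicted)[0, 1] * 100              # correlation coefficient, %
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))            # root mean square error
    mape = np.mean(np.abs((actual - predicted) / actual)) * 100   # mean absolute percentage error
    return cc, rmse, mape

# Illustrative product ages in weeks (not the study's data)
actual    = [2, 5, 10, 15, 20, 25, 30, 35, 40, 47]
predicted = [2.4, 4.6, 10.8, 14.2, 21.1, 24.3, 30.9, 34.1, 41.2, 46.0]
cc, rmse, mape = model_metrics(actual, predicted)
print(f"CC = {cc:.2f}%  RMSE = {rmse:.3f}  MAPE = {mape:.2f}%")
```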
Procedia PDF Downloads 200
1631 Beyond the Economics of Food: Household Food Strategies in Clusters of the Umkhanyakude District Municipality
Authors: Mduduzi Nhlozi
Abstract:
Food insecurity continues to persist in rural areas of South Africa today. A number of factors can be attributed to this, including declining rural economies, rising unemployment, natural disasters such as drought, as well as shifting cultural norms, values, traditions and beliefs. This paper explores the mechanisms used by rural households to achieve food security in the midst of various threats and risks to their livelihoods. The study used a semi-structured questionnaire to collect information on the lived experiences of households in their quest to access food and ensure its availability. The paper finds that households use a number of food strategies, namely economy-related, culture-related and rite-of-passage-related strategies, to achieve food security. The thrust of the argument in the paper is that food security studies need to move beyond the orthodox economic analytic framework towards new institutional economics, focusing on local governance and the socio-cultural systems that support households in achieving food security. It advocates for localised food security plans to be developed by local municipalities to improve the food security status of rural households.
Keywords: household, food insecurity, food strategies, new institutional economics, umkhanyakude
Procedia PDF Downloads 121
1630 Developing a DNN Model for the Production of Biogas From a Hybrid BO-TPE System in an Anaerobic Wastewater Treatment Plant
Authors: Hadjer Sadoune, Liza Lamini, Scherazade Krim, Amel Djouadi, Rachida Rihani
Abstract:
Deep neural networks are highly regarded for their accuracy in predicting intricate fermentation processes. Their ability to learn from large amounts of data makes them particularly effective models. The primary obstacle in improving the performance of these models is carefully choosing suitable hyperparameters, including the neural network architecture (number of hidden layers and hidden units), activation function, optimizer, learning rate, and other relevant factors. This study predicts biogas production from real wastewater treatment plant data using a sophisticated approach: hybrid Bayesian optimization with a tree-structured Parzen estimator (BO-TPE) for an optimised deep neural network (DNN) model. The plant utilizes an Upflow Anaerobic Sludge Blanket (UASB) digester that treats industrial wastewater from soft drinks and breweries. The digester has a working volume of 1574 m3 and a total volume of 1914 m3. Its internal diameter and height are 19 and 7.14 m, respectively. Data preprocessing was conducted with meticulous attention to preserving data quality while avoiding data reduction. Three normalization techniques (MinMaxScaler, RobustScaler and StandardScaler) were applied to the pre-processed data and compared with the non-normalized data. The RobustScaler approach showed strong predictive ability for estimating the volume of biogas produced. The highest predicted biogas volume was 2236.105 Nm³/d, with coefficient of determination (R2), mean absolute error (MAE), and root mean square error (RMSE) values of 0.712, 164.610, and 223.429, respectively.
Keywords: anaerobic digestion, biogas production, deep neural network, hybrid BO-TPE, hyperparameter tuning
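A common way to realize BO-TPE in practice is with Optuna's TPESampler; the sketch below uses it to tune a small feed-forward regressor on synthetic data with seven inputs standing in for the plant features. The search space, the scikit-learn MLP, and the data are illustrative assumptions and do not reflect the authors' DNN architecture or dataset.

```python
import optuna
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the plant data (7 process features -> biogas volume)
X, y = make_regression(n_samples=300, n_features=7, noise=10.0, random_state=0)

def objective(trial):
    n_layers = trial.suggest_int("n_layers", 1, 3)
    units = trial.suggest_int("units", 8, 128, log=True)
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    activation = trial.suggest_categorical("activation", ["relu", "tanh"])
    model = MLPRegressor(hidden_layer_sizes=(units,) * n_layers,
                         activation=activation, learning_rate_init=lr,
                         max_iter=500, random_state=0)
    # cross_val_score returns negative MSE; negate it so the study minimizes MSE
    score = cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()
    return -score

study = optuna.create_study(direction="minimize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=30)
print("best hyperparameters:", study.best_params)
```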
Procedia PDF Downloads 39
1629 An Experiment Research on the Effect of Brain-Break in the Classroom on Elementary School Students’ Selective Attention
Authors: Hui Liu, Xiaozan Wang, Jiarong Zhong, Ziming Shao
Abstract:
Introduction: Related research shows that students do not concentrate on the teacher’s speaking in the classroom. The d2 attention test is a time-limited test of selective attention and can be used to evaluate individual selective attention. Purpose: To use the d2 attention test to measure the difference between the attention levels of the experimental class and the control class before and after Brain-Break, and to explore the effect of Brain-Break in the classroom on students' selective attention. Methods: According to the principle of no difference in pre-test data, two fourth-grade classes at Shenzhen Longhua Central Primary School were selected. After 20 minutes of class in the third lesson of the morning and the third lesson of the afternoon, a roughly 3-minute Brain-Break intervention was performed in the experimental class for 10 weeks; the control class received normal classes with no intervention. Before and after the experiment, the d2 attention test was used to assess the attention levels of both classes. The paired-sample t-test and independent-sample t-test in SPSS 23.0 were used to test the change in the attention levels of the two classes over the 10 weeks. This article only presents results with significant differences. Results: The independent-sample t-test results showed that after ten weeks of Brain-Break, omission errors (E1, t = -2.165, p = 0.042), concentration performance (CP, t = 1.866, p = 0.05), and the omission rate (Epercent, t = -2.375, p = 0.029) in the experimental class differed significantly from the control class. The students’ error level decreased and their concentration increased. Conclusions: Adding Brain-Break interventions in the classroom can effectively improve the attention level of fourth-grade primary school students to a certain extent; in particular, it can improve concentration and decrease the error rate in tasks. This new sports learning model is worth promoting.
Keywords: cultural class, micromotor, attention, D2 test
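The two test statistics reported above are standard; the fragment below shows how an independent-samples comparison between the classes and a paired pre/post comparison within one class would be computed with SciPy rather than SPSS. The scores are made up for illustration and are not the Shenzhen data.

```python
import numpy as np
from scipy import stats

# Illustrative d2 concentration-performance (CP) scores (not the study's data)
experimental = np.array([152, 160, 148, 171, 158, 165, 155, 169, 162, 150])
control      = np.array([145, 150, 143, 155, 149, 151, 140, 153, 147, 144])

# Independent-samples t-test: experimental vs. control class at post-test
t_ind, p_ind = stats.ttest_ind(experimental, control)

# Paired-samples t-test: the same class before vs. after the 10-week intervention
pre = np.array([140, 148, 139, 150, 147, 151, 142, 155, 149, 141])
t_rel, p_rel = stats.ttest_rel(pre, experimental)

print(f"independent: t = {t_ind:.3f}, p = {p_ind:.3f}")
print(f"paired:      t = {t_rel:.3f}, p = {p_rel:.3f}")
```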
Procedia PDF Downloads 134
1628 Protocol for Consumer Research in Academia for Community Marketing Campaigns
Authors: Agnes J. Otjen, Sarah Keller
Abstract:
A Montana university has used applied consumer research in experiential learning with non-profit clients for over a decade. Through trial and error, a successful protocol has been established, from problem statement through formative research to integrated marketing campaign execution. In this paper, we describe the protocol and its applications. Analysis was completed to determine the effectiveness of the campaigns and how pre- and post-campaign consumer research marks societal change brought about by the media.
Keywords: consumer, research, marketing, communications
Procedia PDF Downloads 140
1627 Understanding the Historical Consciousness of Children and Young People
Authors: Kay Carroll
Abstract:
Creating historical consciousness in children and young people is critical to global inclusion and engagement. In a context of international and technological flux, children are confronted with shifting national identities. Within this quantitative study of Australian children and young people, the concept and development of historical consciousness are explored. The analysis reports on how children and young people are connected through national, collective, and personal narratives to understand historically significant events and changes, anchor themselves to universal and intergenerational traditions and norms, and remain open to divergent perspectives and resilient to perpetual socio-cultural shifts. This paper presents the development of, and the factors that shape, national historical consciousness in children and young people, using established international frameworks and stages of historical consciousness. The research reports on quantitative surveys conducted with over 680 school children aged 12 to 19 years in Australian schools. Concepts of global citizenship, inclusion, and engagement with national historical memory and significance are explored. Findings identify the social benefits of collective and personal historical consciousness and consider the current barriers and enablers in developing a young person’s historical consciousness for the future.
Keywords: curriculum, global citizenship, historical consciousness, significance
Procedia PDF Downloads 198