Search results for: approximate arithmetic
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 466

106 Organic Matter Removal in Urban and Agroindustry Wastewater by Chemical Precipitation Process

Authors: Karina Santos Silvério, Fátima Carvalho, Maria Adelaide Almeida

Abstract:

The impacts of anthropogenic activity on the aquatic environment are among the main challenges facing modern society. Population growth, combined with water scarcity and climate change, points to the need to increase the resilience and efficiency of production systems in managing the wastewater generated by their different processes. In this context, the study, developed under the NETA project (New Strategies in Wastewater Treatment), aimed to evaluate the efficiency of the Chemical Precipitation Process (CPP), using hydrated lime (Ca(OH)₂) as a reagent, in wastewater from the agroindustry sector, namely swine, slaughterhouse, and urban wastewater, in order to make the production chain 100% circular and create a direct positive impact on the environment. The purpose of the CPP is to innovate in the field of effluent treatment technologies, as it allows rapid application and is economically viable. In summary, the study was divided into four main stages: 1) application of the reagent in a single step, raising the pH to 12.5; 2) separation of the sludge and treated effluent; 3) natural neutralization of the effluent through carbonation with atmospheric CO₂; 4) characterization and feasibility assessment of the chemical precipitation technique for the different wastewaters, based on the determination of chemical oxygen demand (COD) and other supporting physico-chemical parameters. The results showed average removal efficiencies above 80% for all effluents: 90% for the swine effluent, 88% for the urban effluent, and 81% on average for the slaughterhouse effluent. Significant improvement in color and odor removal was also obtained after carbonation to pH 8.00.
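
The removal efficiencies reported above follow the standard COD mass-fraction formula; a one-line sketch (the influent/effluent values below are invented for illustration, not the study's data):

```python
def cod_removal_efficiency(cod_in, cod_out):
    """Percent of influent chemical oxygen demand removed by treatment."""
    return 100.0 * (cod_in - cod_out) / cod_in

# Hypothetical influent/effluent COD values (mg O2/L)
print(cod_removal_efficiency(2000.0, 200.0))   # 90% removal
```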

Keywords: agroindustry wastewater, urban wastewater, natural carbonation, chemical precipitation technique

Procedia PDF Downloads 56
105 Epilepsy Seizure Prediction by Effective Connectivity Estimation Using Granger Causality and Directed Transfer Function Analysis of Multi-Channel Electroencephalogram

Authors: Mona Hejazi, Ali Motie Nasrabadi

Abstract:

Epilepsy is a persistent neurological disorder that affects more than 50 million people worldwide. There is therefore a need for an efficient prediction model that supports correct diagnosis of epileptic seizures and accurate prediction of their type. In this study, we consider how Effective Connectivity (EC) patterns obtained from intracranial electroencephalographic (EEG) recordings reveal information about the dynamics of the epileptic brain and can be used to predict imminent seizures, enabling patients (and caregivers) to take appropriate precautions. We use this approach because effective connectivity begins to change near seizure onset, so seizures can be predicted from this feature. Results are reported on the standard Freiburg EEG dataset, which contains data from 21 patients suffering from medically intractable focal epilepsy. Six EEG channels from each patient are considered, and effective connectivity is estimated using the Directed Transfer Function (DTF) and Granger Causality (GC) methods. We concentrate on the standard deviation of effective connectivity over time, and feature changes in five brain frequency sub-bands (alpha, beta, theta, delta, and gamma) are compared. The performance obtained by the proposed scheme in predicting seizures is: an average prediction time of 50 minutes before seizure onset, a maximum sensitivity of approximately 80%, and a false positive rate of 0.33 FP/h. The DTF method is the more suitable for predicting epileptic seizures, and the best results are generally observed in the gamma and beta sub-bands. This research is of significant value for clinical applications, especially for the exploitation of online portable devices.
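
To make the connectivity idea concrete, here is a minimal, hypothetical sketch of a lag-1, two-channel Granger causality index in plain Python; real analyses, including this study's, use multivariate AR models over many channels and sub-bands, so the single lag and the synthetic signals below are illustrative only, not the authors' pipeline:

```python
import math
import random

def ols2(z, u, v):
    # Least-squares fit z ~ a*u + b*v via the 2x2 normal equations
    suu = sum(a * a for a in u)
    svv = sum(b * b for b in v)
    suv = sum(a * b for a, b in zip(u, v))
    szu = sum(c * a for c, a in zip(z, u))
    szv = sum(c * b for c, b in zip(z, v))
    det = suu * svv - suv * suv
    return (szu * svv - szv * suv) / det, (szv * suu - szu * suv) / det

def granger_index(x, y):
    """log(restricted / full residual variance) for predicting x.

    Large and positive when past y improves prediction of x
    (zero-mean series and a single lag are assumed).
    """
    z, u, v = x[1:], x[:-1], y[:-1]   # target, own lag, other-channel lag
    a_r = sum(c * d for c, d in zip(z, u)) / sum(d * d for d in u)
    var_r = sum((c - a_r * d) ** 2 for c, d in zip(z, u)) / len(z)
    a, b = ols2(z, u, v)
    var_f = sum((c - a * d - b * e) ** 2 for c, d, e in zip(z, u, v)) / len(z)
    return math.log(var_r / var_f)

# Synthetic example: y drives x, so the y->x index should dominate
random.seed(0)
y = [random.gauss(0, 1) for _ in range(2000)]
x = [0.0]
for t in range(1, 2000):
    x.append(0.5 * x[t - 1] + 0.8 * y[t - 1] + random.gauss(0, 0.5))
```

In practice one would use an established multivariate implementation rather than this two-variable sketch.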

Keywords: effective connectivity, Granger causality, directed transfer function, epilepsy seizure prediction, EEG

Procedia PDF Downloads 441
104 Confidence Intervals for Process Capability Indices for Autocorrelated Data

Authors: Jane A. Luke

Abstract:

Persistent pressure passed on to manufacturers by escalating consumer expectations and ever-growing global competitiveness has produced a rapidly increasing interest in the development of various manufacturing strategy models, and academic and industrial circles are taking a keen interest in the field of manufacturing strategy. Many manufacturing strategies are currently centered on the traditional concepts of focused manufacturing capabilities such as quality, cost, dependability, and innovation. Analysis of process capability indices (PCIs) has traditionally been conducted assuming that the process under study is in statistical control and that independent observations are generated over time. In practice, however, it is very common to come across processes which, due to their inherent nature, generate autocorrelated observations. The degree of autocorrelation affects the behavior of patterns on control charts: even small levels of autocorrelation between successive observations can have considerable effects on the statistical properties of conventional control charts, which then exhibit nonrandom patterns and an apparent lack of control. Many authors have considered the effect of autocorrelation on the performance of statistical process control charts. This paper examines the effect of autocorrelation on confidence intervals for different PCIs. Stationary Gaussian processes are explained, and the effect of autocorrelation on PCIs is described in detail. Confidence intervals for Cp and Cpk are constructed and computed for both independent and autocorrelated data, and approximate lower confidence limits for Cpk are computed assuming an AR(1) model for the data. Simulation studies and industrial examples are presented to demonstrate the results.
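
The quantities involved can be illustrated with a short sketch. The lower-limit formula below is the classical Bissell approximation for independent, normally distributed data, which the paper's keywords mention; the AR(1) adjustment the paper develops is not shown, and all numeric values here are invented:

```python
import math

def cp(sd, lsl, usl):
    """Potential capability: specification width over six sigma."""
    return (usl - lsl) / (6 * sd)

def cpk(mean, sd, lsl, usl):
    """Actual capability, penalizing off-center processes."""
    return min(usl - mean, mean - lsl) / (3 * sd)

def bissell_lcl(cpk_hat, n, z=1.645):
    """Bissell's approximate one-sided 95% lower confidence limit for Cpk,
    assuming n independent, normally distributed observations."""
    return cpk_hat * (1 - z * math.sqrt(1 / (9 * n * cpk_hat ** 2)
                                        + 1 / (2 * (n - 1))))

# Hypothetical process: mean 10, sd 1, specs [4, 16], n = 50 observations
print(cpk(10.0, 1.0, 4.0, 16.0), bissell_lcl(1.33, 50))
```

Under autocorrelation the effective sample size shrinks, so the true lower limit is wider than this independent-data formula suggests, which is the paper's point.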

Keywords: autocorrelation, AR(1) model, Bissell’s approximation, confidence intervals, statistical process control, specification limits, stationary Gaussian processes

Procedia PDF Downloads 364
103 ADP Approach to Evaluate the Blood Supply Network of Ontario

Authors: Usama Abdulwahab, Mohammed Wahab

Abstract:

This paper presents the application of the uncapacitated facility location problem (UFLP) and the 1-median problem to support decision making in blood supply chain networks. A plethora of factors makes blood supply-chain networks a complex yet vital problem for a regional blood bank: rapidly increasing demand, criticality of the product, strict storage and handling requirements, and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs, and clients have to be allocated to the open facilities. In classical location models, the allocation cost is the distance between a client and an open facility; in this model, the costs comprise allocation, transportation, and inventory costs. To address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. Euclidean distance data for several Ontario cities (demand nodes) are used to test the developed algorithm. SITATION software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve the model. Computational experiments confirm the efficiency of the proposed approach: compared to existing modeling and solution methods, the median algorithm not only provides a more general modeling framework but also leads to efficient solution times in general.
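
A toy version of the 1-median choice on Euclidean data can be sketched as brute-force enumeration over candidate sites; the paper's Lagrangian relaxation and branch-and-bound machinery (and the SITATION software) are not reproduced here, and the coordinates and weights below are made up:

```python
import math

def one_median(candidates, demand):
    """Return the candidate site minimizing total weighted Euclidean
    distance to the demand nodes (brute-force 1-median)."""
    def total_cost(site):
        return sum(w * math.dist(site, node) for node, w in demand)
    return min(candidates, key=total_cost)

# Hypothetical demand nodes (city coordinates) with demand weights
demand = [((5.0, 5.0), 1.0), ((6.0, 5.0), 2.0), ((0.0, 1.0), 1.0)]
candidates = [(0.0, 0.0), (5.0, 5.0), (10.0, 10.0)]
best = one_median(candidates, demand)
```

For realistic p-median instances with many candidate sites, exhaustive enumeration is replaced by exactly the relaxation and branch-and-bound techniques the abstract names.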

Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, P-median problem

Procedia PDF Downloads 487
102 The Determination of Phosphorus Solubility in Iron as a Function of Other Components

Authors: Andras Dezső, Peter Baumli, George Kaptay

Abstract:

Phosphorus is an important component in steels because it changes the mechanical properties and can modify the structure. Phosphorus can form the Fe₃P compound, which segregates at ferrite grain boundaries at the nano- or microscale. This intermetallic compound degrades the mechanical properties; for example, it causes blue brittleness, the embrittlement produced by the segregated particles at 200 to 300 °C. This work describes phosphide solubility as affected by the other components. We performed calculations for the Ni, Mo, Cu, S, V, C, Si, Mn, and Cr elements with the Thermo-Calc software and fitted the effects with approximate functions. The binary Fe-P system has a solubility line described by the equation ln w₀ = −3.439 − 1903/T, where w₀ is the maximum dissolved phosphorus concentration in weight percent and T is the temperature in Kelvin. The equation shows that phosphorus becomes more soluble as the temperature increases. Nickel, molybdenum, vanadium, silicon, manganese, and chromium influence the maximum dissolved concentration; these functions depend strongly on the concentration of the added elements, and the solubility values are lower when these elements are present in the steel. Copper, sulphur, and carbon have no effect on phosphorus solubility. In all cases, we predict that the maximum solubility increases with temperature. Between 473 K and 673 K, the phase diagrams of these systems contain mostly two- or three-phase eutectoid regions and single-phase ferritic intervals; in the eutectoid regions, the ferrite, the iron phosphide, and the metal(III) phosphide are in equilibrium. From this modelling, we predicted which elements help to avoid phosphide segregation and which do not. These data are important when making or choosing steels in which phosphide segregation must be prevented.
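
As a quick check of the fitted solubility line, a sketch evaluating w₀(T); the coefficients used here (−3.439 and 1903 K) follow the reading of the abstract's mixed decimal separators under which solubility increases with temperature, as the text states, so treat the exact numbers as the authors':

```python
import math

def p_solubility_wt_pct(T):
    """Maximum dissolved P in ferrite (wt%) from the fitted solubility line,
    read as ln w0 = -3.439 - 1903/T with T in Kelvin (assumed reading)."""
    return math.exp(-3.439 - 1903.0 / T)

# Solubility rises with temperature, consistent with the abstract
for T in (900.0, 1100.0, 1300.0):
    print(T, p_solubility_wt_pct(T))
```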

Keywords: phosphorus, steel, segregation, Thermo-Calc software

Procedia PDF Downloads 605
101 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring

Authors: Daniel Fundi Murithi

Abstract:

Data from economic, social, clinical, and industrial studies are often in some way incomplete or incorrect due to censoring, and such data may have adverse effects if used directly in estimation problems. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and Newton-Raphson (NR) algorithms. These algorithms are used comparatively because both iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that in most simulation cases the estimates obtained using the EM algorithm had smaller biases, smaller variances, narrower confidence intervals, and smaller root mean squared errors than those generated via the NR algorithm. Further, analysis of a real-life data set (data from simple experimental trials) showed that the EM algorithm performs better than the NR algorithm in all simulation cases under the progressive type-II censoring scheme.
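
A minimal sketch of the Newton-Raphson iteration for the Rayleigh scale parameter, shown here for complete (uncensored) data with the location µ held fixed; the progressive type-II censoring terms and the EM counterpart developed in the study are omitted:

```python
import math

def rayleigh_scale_mle_nr(data, mu=0.0, sigma0=1.0, tol=1e-10):
    """Newton-Raphson on the score of the Rayleigh log-likelihood
    l(sigma) = sum(log(x - mu)) - 2n*log(sigma) - S/(2*sigma^2),
    where S = sum((x - mu)^2).  Complete-data case only."""
    n = len(data)
    s = sum((x - mu) ** 2 for x in data)
    sigma = sigma0
    for _ in range(100):
        score = -2 * n / sigma + s / sigma ** 3          # dl/dsigma
        hess = 2 * n / sigma ** 2 - 3 * s / sigma ** 4   # d2l/dsigma2
        step = score / hess
        sigma -= step
        if abs(step) < tol:
            break
    return sigma
```

For complete data the root has the closed form sqrt(S/(2n)), which makes the iteration easy to verify; under progressive censoring no such closed form exists, which is why NR and EM are compared.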

Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring

Procedia PDF Downloads 138
100 Discrimination and Classification of Vestibular Neuritis Using Combined Fisher and Support Vector Machine Model

Authors: Amine Ben Slama, Aymen Mouelhi, Sondes Manoubi, Chiraz Mbarek, Hedi Trabelsi, Mounir Sayadi, Farhat Fnaiech

Abstract:

Vertigo is a sensation of being off balance; the cause of this symptom is very difficult to interpret and needs a complementary exam. Generally, vertigo is caused by an ear problem; some of the most common causes include benign paroxysmal positional vertigo (BPPV), Meniere's disease, and vestibular neuritis (VN). In clinical practice, different tests of the videonystagmography (VNG) technique are used to detect the presence of vestibular neuritis. The topographical diagnosis of this disease presents a large diversity of characteristics, which complicates the usual etiological analysis methods. In this study, a vestibular neuritis analysis method is proposed for VNG applications using an estimation of pupil movements in the case of uncontrolled motion, in order to obtain efficient and reliable diagnostic results. First, the pupil displacement vectors are estimated using the Hough Transform (HT) to approximate the location of the pupil region. Then, temporal and frequency features are computed from the variation of the rotation angle of the pupil motion. Finally, an optimized feature subset is selected using the Fisher criterion for discrimination and classification of the VN disease. Experimental results are analyzed over two categories, normal and pathologic. By classifying the reduced feature set with a Support Vector Machine (SVM), a classification accuracy of 94% is achieved. Compared to recent studies, the proposed expert system is extremely helpful and highly effective in resolving the problem of VNG analysis and providing an accurate diagnosis for medical devices.
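
The Fisher-criterion ranking step can be sketched in a few lines for a single feature over two classes; the SVM stage (e.g. scikit-learn's SVC) is not reproduced here, and the feature values below are invented:

```python
def fisher_score(feat_class1, feat_class2):
    """Fisher criterion for one feature over two classes:
    (m1 - m2)^2 / (v1 + v2).  Larger means better class separation."""
    m1 = sum(feat_class1) / len(feat_class1)
    m2 = sum(feat_class2) / len(feat_class2)
    v1 = sum((x - m1) ** 2 for x in feat_class1) / len(feat_class1)
    v2 = sum((x - m2) ** 2 for x in feat_class2) / len(feat_class2)
    return (m1 - m2) ** 2 / (v1 + v2)

# Hypothetical feature values: one well-separated, one overlapping
well_sep = ([0.0, 0.1, 0.2], [5.0, 5.1, 5.2])
overlap = ([0.0, 1.0, 2.0], [0.5, 1.5, 2.5])
```

Ranking all candidate features by this score and keeping the top-scoring subset is what "optimized features are selected using Fisher criterion evaluation" amounts to.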

Keywords: nystagmus, vestibular neuritis, videonystagmographic system, VNG, Fisher criterion, support vector machine, SVM

Procedia PDF Downloads 122
99 Environment Management Practices at Oil and Natural Gas Corporation Hazira Gas Processing Complex

Authors: Ashish Agarwal, Vaibhav Singh

Abstract:

Harmful emissions from oil and gas processing facilities have long remained a matter of concern for governments and environmentalists throughout the world. This paper analyses the Oil and Natural Gas Corporation (ONGC) gas processing plant in Hazira, Gujarat, India. It is the largest gas-processing complex in the country, designed to process 41 MMSCMD of sour natural gas and associated sour condensate. The complex, sprawling over approximately 705 hectares, is the mother plant for almost all industries at Hazira and en route along the Hazira-Bijapur-Jagdishpur pipeline. Various sources of pollution from each unit along the processing chain, from the Gas Terminal to the Dew Point Depression unit and the Caustic Wash unit, were examined with the help of emission data obtained from ONGC. Pollution discharged to the environment was classified into water, air, hazardous waste, and solid (non-hazardous) waste so as to analyze each category efficiently. To protect the air environment, a sulphur recovery unit along with automatic ambient air quality monitoring stations and automatic stack monitoring stations, among numerous other practices, was adopted. To protect the water environment, different effluent treatment plants were used, with due emphasis on aquaculture in the nearby area. The Hazira plant has obtained authorization for handling and disposal of five types of hazardous waste; most of the hazardous waste was sold to authorized recyclers, and the rest was given to Gujarat Pollution Control Board authorized vendors. Non-hazardous waste was also handled with the overall objective of zero negative impact on the environment. The effect of the methods adopted is evident from the plant's emission data, which were found to be well under Gujarat Pollution Control Board limits.

Keywords: sulphur recovery unit, effluent treatment plant, hazardous waste, sour gas

Procedia PDF Downloads 206
97 Spatial Data Mining: Unsupervised Classification of Geographic Data

Authors: Chahrazed Zouaoui

Abstract:

In recent years, the volume of geospatial information has been increasing due to the evolution of information and communication technologies; this information is often presented through geographic information systems (GIS) and stored in spatial databases. Classical data mining shows a weakness in extracting knowledge from these enormous amounts of data because of the particularity of spatial entities, which are characterized by interdependence (the first law of geography). This gave rise to spatial data mining: the process of analyzing geographic data that allows the extraction of knowledge and spatial relationships from geospatial data. Among the methods of this process, we distinguish the monothematic and the thematic. Geo-clustering, one of the main tasks of spatial data mining, belongs to the monothematic methods: it groups similar geo-spatial entities into the same class and assigns dissimilar ones to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking into account the particularity of geo-spatial data. Two approaches to geo-clustering exist: dynamic processing, which applies algorithms designed for the direct treatment of spatial data, and an approach based on pre-processing, which applies classic clustering algorithms to pre-processed data (with spatial relationships integrated).
This second approach (based on pre-processing) is quite complex in several cases, so the search for approximate solutions involves approximation algorithms. We are interested in dedicated approaches (partitioning and density-based clustering methods) and in bee-inspired algorithms (a biomimetic approach). Our study proposes a design for this problem that uses different algorithms to automatically detect geo-spatial neighborhoods, implements geo-clustering by pre-processing, and applies the bees algorithm to this problem for the first time in the geo-spatial field.
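
As a concrete instance of the density-based family mentioned above, a minimal DBSCAN-style routine for 2-D points; the neighbor search is brute-force O(n²), the bee-inspired algorithm and the actual spatial pre-processing are not shown, and the points below are invented:

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal density-based clustering (DBSCAN-style) for 2-D points.
    Returns a cluster label per point; -1 marks noise."""
    labels = [None] * len(points)   # None = unvisited
    cid = 0
    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # not dense enough: noise (for now)
            continue
        labels[i] = cid             # start a new cluster from this core point
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid     # noise reachable from a core: border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = neighbors(j)
            if len(jn) >= min_pts:  # expand only through core points
                seeds.extend(jn)
        cid += 1
    return labels

# Two dense blobs plus one isolated outlier
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (5, 5)]
labels = dbscan(pts, eps=1.5, min_pts=3)
```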

Keywords: spatial data mining, GIS, geo-clustering, neighborhood

Procedia PDF Downloads 360
97 Utilizing Fiber-Based Modeling to Explore the Presence of a Soft Storey in Masonry-Infilled Reinforced Concrete Structures

Authors: Akram Khelaifia, Salah Guettala, Nesreddine Djafar Henni, Rachid Chebili

Abstract:

Recent seismic events have underscored the significant influence of masonry infill walls on the resilience of structures. The irregular positioning of these walls exacerbates their adverse effects, resulting in substantial material and human losses. Research and post-earthquake evaluations emphasize the necessity of considering infill walls in both the design and assessment phases. This study delves into the presence of soft stories in reinforced concrete structures with infill walls. Employing an approximate method relying on pushover analysis results, fiber-section-based macro-modeling is utilized to simulate the behavior of infill walls. The findings shed light on the presence of soft first stories, revealing a notable 240% enhancement in resistance for weak column—strong beam-designed frames due to infill walls. Conversely, the effect is more moderate at 38% for strong column—weak beam-designed frames. Interestingly, the uniform distribution of infill walls throughout the structure's height does not influence soft-story emergence in the same seismic zone, irrespective of column-beam strength. In regions with low seismic intensity, infill walls dissipate energy, resulting in consistent seismic behavior regardless of column configuration. Despite column strength, structures with open-ground stories remain vulnerable to soft first-story emergence, underscoring the crucial role of infill walls in reinforced concrete structural design.

Keywords: masonry infill walls, soft storey, pushover analysis, fiber section, macro-modeling

Procedia PDF Downloads 39
96 Performance Evaluation of Genetic Programming Based Surrogate Models for Approximate Simulation of Complex Geochemical Transport Processes

Authors: Hamed K. Esfahani, Bithin Datta

Abstract:

Transport of reactive chemical contaminant species in groundwater aquifers is a complex and highly non-linear physical and geochemical process, especially in real-life scenarios. Simulating this transport process involves solving complex nonlinear equations and generally requires huge computational time for a given aquifer study area. Development of optimal remediation strategies in aquifers may require repeated solution of such complex numerical simulation models. To overcome this computational limitation and improve the feasibility of large numbers of repeated simulations, Genetic Programming (GP) based trained surrogate models are developed to approximately simulate such complex transport processes. The transport of acid mine drainage, a hazardous pollutant, is first simulated using the numerical simulation model HYDROGEOCHEM 5.0 for a contaminated aquifer at a historic mine site. The simulation results for this illustrative contaminated aquifer site are then approximated by training and testing a GP-based surrogate model. Performance evaluation of the ensemble GP models as surrogates for reactive species transport in groundwater demonstrates the feasibility of their use and the associated computational advantages. The results show the efficiency and feasibility of using ensemble GP surrogate models as approximate simulators of complex hydrogeologic and geochemical processes in a contaminated groundwater aquifer, incorporating the uncertainties of the historic mine site.
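
The surrogate idea itself, replacing an expensive simulator with a cheap approximation trained on its outputs, can be sketched with a stand-in function and a piecewise-linear surrogate; genetic programming (e.g. via a symbolic-regression library) is not reproduced here, and the "simulator" below is a made-up smooth function, not HYDROGEOCHEM:

```python
import bisect
import math

def expensive_sim(x):
    # Stand-in for a costly geochemical transport run (illustrative only)
    return math.exp(-x) * math.sin(3 * x)

# "Training": sample the simulator once on a grid of inputs
grid = [i / 100 for i in range(0, 301)]
samples = [expensive_sim(x) for x in grid]

def surrogate(x):
    """Cheap piecewise-linear approximation built from the samples;
    queries avoid re-running the simulator."""
    i = min(max(bisect.bisect_left(grid, x), 1), len(grid) - 1)
    x0, x1 = grid[i - 1], grid[i]
    y0, y1 = samples[i - 1], samples[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```

The study's GP ensembles play the role of `surrogate` here: trained once on simulator runs, then queried many times inside the remediation-optimization loop.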

Keywords: geochemical transport simulation, acid mine drainage, surrogate models, ensemble genetic programming, contaminated aquifers, mine sites

Procedia PDF Downloads 254
95 Evaluating the Understanding of the University Students (Basic Sciences and Engineering) about the Numerical Representation of the Average Rate of Change

Authors: Saeid Haghjoo, Ebrahim Reyhani, Fahimeh Kolahdouz

Abstract:

The present study aimed to evaluate the understanding of students at Tehran universities (Iran) of the numerical representation of the average rate of change, based on the Structure of Observed Learning Outcomes (SOLO) taxonomy. In this descriptive survey research, the statistical population comprised undergraduate students (basic sciences and engineering) at the universities of Tehran; the sample was 604 students selected by random multi-stage clustering. The measurement tool was a task whose face and content validity were confirmed by professors of mathematics and mathematics education. Using Cronbach's alpha, the reliability coefficient of the task was 0.95, verifying its reliability. The collected data were analyzed with descriptive and inferential statistics (chi-squared and independent t-tests) in SPSS 24. According to the SOLO model, at the prestructural, unistructural, and multistructural levels, basic science students showed a higher percentage of understanding than engineering students, although the outcome was reversed at the relational level; however, there was no significant difference in the average understanding of the two groups. The results indicated that students failed to achieve a proper understanding of the numerical representation of the average rate of change, in addition to holding misconceptions when using physics formulas to solve the problem. In addition, multiple solutions, along with their dominant methods, were identified during the qualitative analysis. The study proposes focusing on context problems with approximate calculations and numerical representation, using software, and connecting common relations between mathematics and physics in the teaching practice of teachers and professors.
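
The concept the task probes is easy to state in code: the average rate of change is the secant slope over an interval. A hypothetical example (the functions below are invented, not the study's task items):

```python
def average_rate_of_change(f, a, b):
    """Numerical representation of the average rate of change of f on
    [a, b]: the slope of the secant line, (f(b) - f(a)) / (b - a)."""
    return (f(b) - f(a)) / (b - a)

# e.g. average velocity of s(t) = 4.9*t^2 (free fall) over t in [0, 2]:
# (19.6 - 0) / 2 = 9.8 m/s, which is where physics formulas enter
avg_v = average_rate_of_change(lambda t: 4.9 * t ** 2, 0.0, 2.0)
```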

Keywords: average rate of change, context problems, derivative, numerical representation, SOLO taxonomy

Procedia PDF Downloads 78
94 Assessment of Wastewater Reuse Potential for an Enamel Coating Industry

Authors: Guclu Insel, Efe Gumuslu, Gulten Yuksek, Nilay Sayi Ucar, Emine Ubay Cokgor, Tugba Olmez Hanci, Didem Okutman Tas, Fatos Germirli Babuna, Derya Firat Ertem, Okmen Yildirim, Ozge Erturan, Betul Kirci

Abstract:

In order to eliminate water scarcity problems, effective precautions must be taken. Growing competition for water is increasingly forcing facilities to tackle their own water scarcity problems, and at this point the application of wastewater reclamation and reuse brings considerable economic advantages. In this study, an enamel coating facility, one of the more water-intensive types of facility, is evaluated in terms of its wastewater reuse potential; wastewater reclamation and reuse can be regarded as one of the best available techniques for this sector. Hence, process and pollution profiles, together with detailed characterization of segregated wastewater sources, are appraised so as to identify the recoverable effluent streams arising from enamel coating operations. Daily, 170 m³ of process water is required and 160 m³ of wastewater is generated. The segregated streams generated by the two enamel coating processes are characterized in terms of conventional parameters. The relatively clean segregated streams (reusable wastewaters) are collected separately, and experimental treatability studies are conducted on them. The results show that the reusable fraction amounts to approximately 110 m³/day, accounting for 68% of the total wastewater. The treatment needed for the reusable wastewaters is determined by considering the water quality requirements of the various operations and the characterization of the reusable streams. Ultrafiltration (UF), nanofiltration (NF), and reverse osmosis (RO) membranes are subsequently applied to the reusable effluent fraction; however, adequate organic matter removal is not obtained with this treatment sequence.

Keywords: enamel coating, membrane, reuse, wastewater reclamation

Procedia PDF Downloads 306
93 Aristotelian Techniques of Communication Used by Current Affairs Talk Shows in Pakistan for Creating Dramatic Effect to Trigger Emotional Relevance

Authors: Shazia Anwer

Abstract:

Current TV talk shows on domestic politics in Pakistan follow Aristotelian techniques, including deductive reasoning, the three modes of persuasion, and Aristotle's guidelines for communication. The application of "approximate truth" is also seen when talk show presenters create doubts about political personalities or national issues. The mainstream media of Pakistan, a key carrier of narrative construction serving the primary function of national consensus on regional and extended public diplomacy, is failing this purpose. This paper highlights the Aristotelian communication methodology, its purposes and its limitations for serious discussion, and its connection to the mistrust among the Pakistani population regarding fake or embedded, funded information. Data were collected from three Pakistani TV talk shows and analyzed by applying the Aristotelian communication method to highlight the core issues. The paper also elaborates that current media education fails to provide transparent techniques to train future journalists for meaningful, thought-provoking discussion; for this reason, it gives an overview of the HEC's (Higher Education Commission) graduate-level mass communication syllabus for Pakistani universities. The ideas of ethos, logos, and pathos are the main components of TV talk shows, and as a result the educated audience is losing trust in the mainstream media, which eventually generates feelings of distrust and betrayal in society, because the productions resemble the genre of drama rather than facts and analysis; thus the line between current affairs shows and infotainment has become blurred. In the last section, practical implications for improving the meaningfulness and transparency of TV talk shows are suggested, replacing the Aristotelian communication method with a cognitive semiotic communication approach.

Keywords: Aristotelian techniques of communication, current affairs talk shows, drama, Pakistan

Procedia PDF Downloads 180
92 Reasons for the Selection of Information-Processing Framework and the Philosophy of Mind as a General Account for an Error Analysis and Explanation on Mathematics

Authors: Michael Lousis

Abstract:

This research study is concerned with learners' errors in arithmetic and algebra. The data resulted from a broader international comparative research program called the Kassel Project; however, its conceptualisation differed from and contrasted with that of the main program, which was mostly based on socio-demographic data. The way in which the study was conducted was not dependent on the researcher's discretion but was dictated by the nature of the problem under investigation. This is because the phenomenon of learners' mathematical errors is due neither to the intentions of learners, nor to institutional processes, rules, and norms, nor to educators' intentions and goals, but rather to the way certain information is presented to learners and how their cognitive apparatus processes this information. Several approaches to the study of learners' errors have been developed since the beginning of the 20th century, encompassing different belief systems: approaches based on behaviourist theory, on the Piagetian-constructivist research framework, on the perspective that followed the philosophy of science, and on the information-processing paradigm. The researcher of the present study had to disclose the learners' course of thinking that led them to specific observable actions resulting in particular errors on specific problems, rather than analysing scripts with the students' thoughts presented in written form. This, in turn, entailed that the choice of methods had to be appropriate and conducive to seeing and realising the learners' errors from the perspective of the participants in the investigation. This fact determined important decisions concerning the selection of an appropriate framework for analysing the mathematical errors and giving explanations.
Thus, the belief systems of behaviourism, Piagetian constructivism, and the philosophy-of-science perspective were rejected, and the information-processing paradigm, in conjunction with the philosophy of mind, was adopted as the general account for the elaboration of the data. This paper explains why these decisions were appropriate and beneficial for conducting the present study and for establishing the ensuing thesis. Additionally, it explains how the reasons for adopting the information-processing paradigm in conjunction with the philosophy of mind provide a sound and legitimate basis for the development of future studies concerning mathematical error analysis.

Keywords: advantages-disadvantages of theoretical prospects, behavioral prospect, critical evaluation of theoretical prospects, error analysis, information-processing paradigm, opting for the appropriate approach, philosophy of science prospect, Piagetian-constructivist research frameworks, review of research in mathematical errors

Procedia PDF Downloads 170
91 '3D City Model' through Quantum Geographic Information System: A Case Study of Gujarat International Finance Tec-City, Gujarat, India

Authors: Rahul Jain, Pradhir Parmar, Dhruvesh Patel

Abstract:

Planning and drawing are important aspects of civil engineering. For testing theories about spatial location and the interaction between land uses and related activities, computer-based urban models are used. The planner’s primary interest is the creation of 3D models of buildings and of the terrain surface, so that urban morphological mapping, virtual reality, disaster management, fly-through generation, visualization, etc., can be carried out. 3D city models have a variety of applications in urban studies. Gujarat International Finance Tec-City (GIFT) is an ongoing construction site between Ahmedabad and Gandhinagar, Gujarat, India. It will be built on 3,590,000 m² with geographical coordinates from North latitude 23°9’5’’ to 23°10’55’’ and East longitude 72°42’2’’ to 72°42’16’’. Therefore, to develop a 3D city model of GIFT City, the base map of the city was collected from the GIFT office. A Differential Geographical Positioning System (DGPS) was used to collect Ground Control Points (GCPs) from the field. The GCPs were used for the registration of the base map in QGIS. The registered map was projected onto the WGS 84/UTM zone 43N grid and digitized with the help of various shapefile tools in QGIS. The approximate heights of the buildings to be built were collected from the GIFT office and placed in the attribute table of each layer created using the shapefile tools. The Shuttle Radar Topography Mission (SRTM) 1 Arc-Second Global (30 m × 30 m) grid data were used to generate the terrain of GIFT City. The Google Satellite Map was placed in the background to get the exact location of the city. Various plugins and tools in QGIS were used to convert the raster layer of the base map of GIFT City into a 3D model. The fly-through tool was used for capturing and viewing the entire area of the city in 3D. This paper discusses all these techniques and their usefulness in 3D city model creation from the GCPs, base map, SRTM data and QGIS.
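Registering the base map against the WGS 84/UTM zone 43N grid requires the site’s corner coordinates in decimal degrees. A minimal sketch of the degree-minute-second conversion (pure Python, using the extents quoted in the abstract; not part of the authors’ QGIS workflow):

```python
def dms_to_decimal(degrees, minutes, seconds):
    """Convert a degrees/minutes/seconds coordinate to decimal degrees."""
    return degrees + minutes / 60.0 + seconds / 3600.0

# Corner coordinates of the GIFT City site quoted in the abstract
south = dms_to_decimal(23, 9, 5)     # 23 deg 9' 5'' N
north = dms_to_decimal(23, 10, 55)   # 23 deg 10' 55'' N
west = dms_to_decimal(72, 42, 2)     # 72 deg 42' 2'' E
east = dms_to_decimal(72, 42, 16)    # 72 deg 42' 16'' E
```

These decimal values are what a reprojection tool (e.g. the PROJ library behind QGIS) would take as input for the UTM transformation.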

Keywords: 3D model, DGPS, GIFT City, QGIS, SRTM

Procedia PDF Downloads 223
90 Congenital Heart Defect (CHD), “The Silent Crises”: The Need for New Innovative Ways to Save the Ghanaian Child - A Retrospective Study

Authors: Priscilla Akua Agyapong

Abstract:

Background: In a country of nearly 34 million people, Ghana suffers from rapidly growing numbers of pediatric CHD cases and from a shortage of pediatric specialists to attend to the burgeoning needs of these children. Most cases are either missed or diagnosed late, resulting in increased mortality. According to the National Cardiothoracic Centre, 1 in every 100,000 births in Ghana has CHD; however, there is limited data on the clinical presentation and management of CHD, one of the many reasons I decided to do this case study, coupled with the loss of my 2-month-old niece to multiple ventricular septal defects 3 years ago due to late diagnosis. Method: A retrospective cohort study was performed at the child health clinic of one of Ghana’s public tertiary institutions using data from its electronic health record (EHR) from February 2021 to April 2022. All suspected or provisionally diagnosed cases were included in the analysis. Results: Records of over 3000 children were reviewed, with an approximate male-to-female ratio of 1:1.53 among cases diagnosed during the period of study, most of whom were less than 5 years of age. 25 cases had complete clinical records, with acyanotic septal defects being the most commonly diagnosed: 62.5% of the cases were ventricular septal defects, followed by patent ductus arteriosus (23%) and atrial septal defects (4.5%). Tetralogy of Fallot was the most predominant complex cyanotic CHD, at 10%. Conclusion: The indeterminate coronary anatomy of infants makes it difficult to screen for CHDs using only echocardiography and other conventional clinical methods. Rising modernizations and new innovative approaches can be employed in Ghana for early detection, thereby preventing the delay of a potential surgical repair. It is, therefore, imperative to create the needed awareness about these “SILENT CRISES” and help save the Ghanaian child’s life.

Keywords: congenital heart defect(CHD), ventricular septal defect(VSD), atrial septal defect(ASD), patent ductus arteriosus(PDA)

Procedia PDF Downloads 63
89 Aeromagnetic Data Interpretation and Source Body Evaluation Using Standard Euler Deconvolution Technique in Obudu Area, Southeastern Nigeria

Authors: Chidiebere C. Agoha, Chukwuebuka N. Onwubuariri, Collins U.amasike, Tochukwu I. Mgbeojedo, Joy O. Njoku, Lawson J. Osaki, Ifeyinwa J. Ofoh, Francis B. Akiang, Dominic N. Anuforo

Abstract:

In order to interpret the airborne magnetic data and evaluate the approximate location, depth, and geometry of the magnetic sources within the Obudu area using the standard Euler deconvolution method, very high-resolution aeromagnetic data over the area were acquired, processed digitally, and analyzed using the Oasis Montaj 8.5 software. Data analysis and enhancement techniques, including reduction to the equator, horizontal derivative, first and second vertical derivatives, upward continuation, and regional-residual separation, were carried out for the purpose of detailed data interpretation. Standard Euler deconvolution for structural indices of 0, 1, 2, and 3 was also carried out, and the respective maps were obtained using the Euler deconvolution algorithm. Results show that the total magnetic intensity ranges from -122.9 nT to 147.0 nT, the regional intensity varies between -106.9 nT and 137.0 nT, while the residual intensity ranges between -51.5 nT and 44.9 nT, clearly indicating the masking effect of deep-seated structures over surface and shallow subsurface magnetic materials. The results also indicate that the positive residual anomalies have an NE-SW orientation, which coincides with the trend of the major geologic structures in the area. Euler deconvolution for all the considered structural indices yields depths to magnetic sources ranging from the surface to more than 2000 m. Interpretation of the various structural indices revealed the locations and depths of the source bodies and the existence of geologic models, including sills, dykes, pipes, and spherical structures. The area is characterized by intrusive and very shallow basement materials and represents an excellent prospect for solid mineral exploration and development.
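The core of standard Euler deconvolution is a windowed least-squares solve of the homogeneity equation (x-x0)Tx + (y-y0)Ty + (z-z0)Tz = N(B - T) for the source position and background field. A minimal sketch on synthetic point-source data (illustrative only; the study itself used Oasis Montaj, and the field constants below are made up):

```python
import numpy as np

def euler_deconvolution(x, y, z, T, Tx, Ty, Tz, N):
    """Solve (x-x0)Tx + (y-y0)Ty + (z-z0)Tz = N(B - T) for the source
    position (x0, y0, z0) and background B by linear least squares."""
    A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
    b = x * Tx + y * Ty + z * Tz + N * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    x0, y0, z0, B = sol
    return x0, y0, z0, B

# Synthetic field of a point source at (10, 20, 5): T = k/r is homogeneous
# of degree -1, so the structural index is N = 1 and B = 0.
rng = np.random.default_rng(0)
x = rng.uniform(-50, 50, 200)
y = rng.uniform(-50, 50, 200)
z = np.zeros(200)                       # observations on the z = 0 plane
dx, dy, dz = x - 10.0, y - 20.0, z - 5.0
r = np.sqrt(dx**2 + dy**2 + dz**2)
T = 1000.0 / r
Tx, Ty, Tz = -1000.0 * dx / r**3, -1000.0 * dy / r**3, -1000.0 * dz / r**3

x0, y0, z0, B = euler_deconvolution(x, y, z, T, Tx, Ty, Tz, N=1)
```

On this exact synthetic data the solver recovers the source position (10, 20, 5) and a zero background, which is the consistency check usually run before applying the method to measured grids.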

Keywords: Euler deconvolution, horizontal derivative, Obudu, structural indices

Procedia PDF Downloads 52
88 Numerical Analysis of Gas-Particle Mixtures through Pipelines

Authors: G. Judakova, M. Bause

Abstract:

The ability to numerically model and simulate natural gas flow in pipelines has become highly important for the design of pipeline systems. The understanding of the formation of hydrate particles and their dynamical behavior is of particular interest, since these processes govern the operating properties of the systems and are responsible for system failures by clogging of the pipelines under certain conditions. Mathematically, natural gas flow can be described by multiphase flow models. Using the two-fluid modeling approach, the gas phase is modeled by the compressible Euler equations and the particle phase is modeled by the pressureless Euler equations. The numerical simulation of compressible multiphase flows is an important research topic. It is well known that for nonlinear fluxes, even for smooth initial data, discontinuities in the solution are likely to occur in finite time. They are called shock waves or contact discontinuities. For hyperbolic and singularly perturbed parabolic equations, the standard application of the Galerkin finite element method (FEM) leads to spurious oscillations (e.g., the Gibbs phenomenon). In our approach, we use a stabilized FEM, the streamline upwind Petrov-Galerkin (SUPG) method, in which artificial diffusion acting only in the direction of the streamlines is added, together with a special treatment of the boundary conditions in the inviscid convective terms. Numerical experiments show that the numerical solution obtained and stabilized by SUPG captures discontinuities or steep gradients of the exact solution in layers. However, within these layers the approximate solution may still exhibit overshoots or undershoots. To suitably reduce these artifacts, we add a discontinuity capturing or shock capturing term. The performance properties of our numerical scheme are illustrated for a two-phase flow problem.
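The streamline diffusion that SUPG adds is scaled by a stabilization parameter tau. A common 1D choice, shown here as an illustration (the abstract does not state the exact form the authors use), interpolates between the convection-dominated and diffusion-dominated limits via the element Peclet number:

```python
import math

def supg_tau(u, h, kappa):
    """Classical 1D SUPG stabilization parameter:
        tau = h/(2|u|) * (coth(Pe) - 1/Pe),  Pe = |u|*h/(2*kappa).
    tau -> h/(2|u|) as Pe -> inf (pure upwinding) and tau -> 0 as Pe -> 0
    (no artificial diffusion needed when physical diffusion dominates)."""
    pe = abs(u) * h / (2.0 * kappa)
    xi = 1.0 / math.tanh(pe) - 1.0 / pe   # "upwind function"
    return h / (2.0 * abs(u)) * xi

# Convection-dominated element: tau approaches h/(2|u|) = 0.05
tau_conv = supg_tau(u=1.0, h=0.1, kappa=1e-6)
# Diffusion-dominated element: tau is nearly zero
tau_diff = supg_tau(u=1.0, h=0.1, kappa=10.0)
```

The added diffusion vanishes where the mesh resolves the physics, which is why SUPG does not smear smooth regions the way a global artificial viscosity would.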

Keywords: two-phase flow, gas-particle mixture, inviscid two-fluid model, Euler equations, finite element method, streamline upwind Petrov-Galerkin, shock capturing

Procedia PDF Downloads 291
87 Climate Change Impact Due to Timber Product Imports in the UK

Authors: Juan A. Ferriz-Papi, Allan L. Nantel, Talib E. Butt

Abstract:

Buildings are thought to consume about 50% of the total energy in the UK. The use stage of a building’s life cycle has the largest energy consumption, although different assessments show that construction can equal several years of maintenance and operation. The selection of materials with lower embodied energy is very important to reduce this consumption. For this reason, timber is a suitable material due to its low embodied energy and its capacity to act as carbon storage. The use of timber in the construction industry is very significant: sawn wood, for example, is one of the top 5 construction materials consumed in the UK according to National Statistics. Embodied energy for building products considers the energy consumed in the extraction and production stages. However, it makes a difference whether a product is produced locally or sourced from further afield: transport is a very relevant factor that profoundly influences the embodied energy results. The case of timber use in the UK is important because the balance between imports and exports is strongly negative, with industry consuming more imported timber than is produced domestically. Nearly 80% of sawn softwood used in construction is imported. The imports-exports deficit for sawn wood accounted for more than 180 million pounds during the first four months of 2016. More than 85% of these imports come from Europe (83% from the EU). The aim of this study is to analyze the climate change impact of transport for timber products consumed in the UK. An approximate estimate of the energy consumed and the carbon emissions is calculated considering each timber product’s import origin. The results are compared to the total consumption of each product, estimating the impact of transport on the final embodied energy and carbon emissions. 
From the analysis of these results, it can be deduced that one big challenge for climate change is the reduction of external dependency, with the associated improvement of internal production of timber products. A study of different types of timber products produced in the UK and abroad is developed to understand the possibilities for the country to improve sustainability and self-sufficiency. Reuse and recycling possibilities are also considered.
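The transport component of the estimate reduces to mass times distance times a mode-specific emission factor. A minimal sketch of that arithmetic; all numbers below are hypothetical placeholders, not figures from the study:

```python
def transport_co2_kg(volume_m3, density_kg_m3, distance_km, ef_kg_per_tkm):
    """CO2 from freight = mass (tonnes) x distance (km) x emission factor
    (kg CO2 per tonne-km)."""
    mass_t = volume_m3 * density_kg_m3 / 1000.0
    return mass_t * distance_km * ef_kg_per_tkm

# e.g. 1 m3 of sawn softwood (assumed ~450 kg/m3) shipped 1500 km
# at an assumed sea-freight factor of 0.015 kg CO2 per tonne-km:
emissions = transport_co2_kg(1.0, 450.0, 1500.0, 0.015)
```

Summing this quantity over each import origin, with real densities, distances, and mode factors, gives the transport share that the study compares against total embodied carbon.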

Keywords: embodied energy, climate change, CO2 emissions, timber, transport

Procedia PDF Downloads 319
86 S. cerevisiae Strains Co-Cultured with Isochrysis galbana Create Greater Biomass for Biofuel Production than Nannochloropsis sp.

Authors: Madhalasa Iyer

Abstract:

The increase in sustainable practices has encouraged the research and production of alternative fuels. New techniques of bioflocculation with the addition of yeast and bacteria strains have increased the efficiency of biofuel production. Fatty acid methyl ester (FAME) analysis in previous research has indicated that yeast can serve as a plausible enhancer for microalgal lipid production. This research aims to identify the yeast and microalgae treatment group that produces the largest algal biomass. The mass of the dried algae is used as a proxy for TAG production, which correlates with the cultivation of biofuels. The study uses a model bioreactor created and built using PVC pipes, an 8-port sprinkler-system manifold, a CO₂ aquarium tank, and disposable water bottles to grow the microalgae. Nannochloropsis sp. and Isochrysis galbana were inoculated separately, with no treatment, in experimental groups 1 and 2, and in experimental groups 3 and 4 each alga was co-cultured with Saccharomyces cerevisiae in a medium of standard garden stone fertilizer. S. cerevisiae was grown in a petri dish with nutrient agar medium before inoculation. A Secchi stick was used before extraction to collect data on the optical density of the microalgae. A biomass estimator was then used to measure the approximate production of biomass. The microalgae were grown and extracted with a French press to analyze secondary measurements using the dried biomass. The experimental units of Isochrysis galbana treated with the baker’s yeast strains showed an increase in the overall mass of the dried algae. S. cerevisiae proved to be an accurate and helpful addition to the solution to support the growth of algae. The increase in productivity of this fuel source legitimizes the possible replacement of non-renewable sources with more promising renewable alternatives. This research furthers the notion that yeast and its mutants can be engineered for efficient biofuel creation.

Keywords: biofuel, co-culture, S. cerevisiae, microalgae, yeast

Procedia PDF Downloads 89
85 Processing and Evaluation of Jute Fiber Reinforced Hybrid Composites

Authors: Mohammad W. Dewan, Jahangir Alam, Khurshida Sharmin

Abstract:

Synthetic fibers (carbon, glass, aramid, etc.) are generally utilized to make composite materials for better mechanical and thermal properties. However, they are expensive and non-biodegradable. In the perspective of Bangladesh, jute fibers are available, inexpensive, and possess good mechanical properties. The favorable properties of natural fibers (i.e., low cost, low density, eco-friendliness) have made them a promising reinforcement in hybrid composites without sacrificing mechanical properties. In this study, jute and E-glass fiber reinforced hybrid composite materials are fabricated utilizing hand lay-up followed by a compression molding technique. Room-temperature-cured two-part epoxy resin is used as the matrix. Approximately 6-7 mm thick composite panels are fabricated utilizing 17 layers of woven glass and jute fibers with different fiber layering sequences: only jute, only glass, glass and jute alternating (g/j/g/j---), and 4 glass - 9 jute - 4 glass (4g-9j-4g). The fabricated composite panels are analyzed through fiber volume calculation, tensile testing, bending testing, and water absorption testing. The hybridization of jute and glass fiber results in better tensile, bending, and water absorption properties than only jute fiber-reinforced composites, but inferior properties compared to only glass fiber reinforced composites. Among the different fiber layering sequences, the 4g-9j-4g layering sequence resulted in the best tensile, bending, and water absorption properties. The effects of chemical treatment of the woven jute fiber and of chopped glass microfiber infusion are also investigated in this study. The chemically treated jute fiber and 2 wt.% chopped glass microfiber infused hybrid composite shows about 12% improvement in flexural strength compared to the untreated, non-infused hybrid composite panel. However, fiber chemical treatment and micro-filler do not have a significant effect on tensile strength.
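The fiber volume calculation mentioned above typically follows the rule-of-mixtures form based on constituent weights and densities. A sketch with hypothetical weights and handbook-style densities (not the panel compositions from the study):

```python
def fiber_volume_fraction(w_f, rho_f, w_m, rho_m):
    """Fiber volume fraction from constituent weights and densities:
        V_f = (w_f/rho_f) / (w_f/rho_f + w_m/rho_m)."""
    v_f = w_f / rho_f   # fiber volume
    v_m = w_m / rho_m   # matrix volume
    return v_f / (v_f + v_m)

# e.g. 60 g of jute fiber (assumed density ~1.45 g/cm3) in 40 g of epoxy
# (assumed density ~1.15 g/cm3):
vf = fiber_volume_fraction(60.0, 1.45, 40.0, 1.15)
```

For a hybrid panel the same formula extends with one weight/density term per constituent (glass, jute, epoxy), which is how the glass/jute stacking sequences can be compared on an equal-volume basis.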

Keywords: compression molding, chemical treatment, hybrid composites, mechanical properties

Procedia PDF Downloads 130
84 The Potential Involvement of Platelet Indices in Insulin Resistance in Morbid Obese Children

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Association between insulin resistance (IR) and hematological parameters has long been a matter of interest. Within this context, body mass index (BMI), red blood cells, white blood cells, and platelets have been involved in this discussion. Platelet-related parameters associated with IR may be useful indicators for the identification of IR. Platelet indices such as mean platelet volume (MPV), platelet distribution width (PDW), and plateletcrit (PCT) are being questioned for their possible association with IR. The aim of this study was to investigate the association between platelet (PLT) count as well as PLT indices and the surrogate indices used to determine IR in morbid obese (MO) children. A total of 167 children participated in the study. Three groups were constituted. The number of cases was 34, 97, and 36 children in the normal-BMI (N-BMI), MO, and metabolic syndrome (MetS) groups, respectively. Sex- and age-dependent BMI-based percentile tables prepared by the World Health Organization were used for the definition of morbid obesity. MetS criteria were determined. BMI values, homeostatic model assessment for IR (HOMA-IR), alanine transaminase-to-aspartate transaminase ratio (ALT/AST), and diagnostic obesity notation model assessment laboratory (DONMA-lab) index values were computed. PLT count and indices were analyzed using an automated hematology analyzer. Data were collected for statistical analysis using SPSS for Windows. Arithmetic means and standard deviations were calculated. Mean values of PLT-related parameters in the control and study groups were compared by one-way ANOVA followed by Tukey post hoc tests to determine whether a significant difference exists among the groups. Correlation analyses between PLT as well as IR indices were performed. A statistically significant difference was accepted as p-value < 0.05. Increased values were detected for PLT (p < 0.01) and PCT (p > 0.05) in the MO group compared to those observed in children with N-BMI. 
Significant increases in PLT (p < 0.01) and PCT (p < 0.05) were observed in the MetS group in comparison with the values obtained in children with N-BMI. Significantly lower MPV and PDW values were obtained in the MO group compared to the control group (p < 0.01). HOMA-IR (p < 0.05), DONMA-lab index (p < 0.001), and ALT/AST (p < 0.001) values in the MO and MetS groups were significantly increased compared to the N-BMI group. On the other hand, DONMA-lab index values also differed between the MO and MetS groups (p < 0.001). In the MO group, PLT was negatively correlated with MPV and PDW values. These correlations were not observed in the N-BMI group. None of the IR indices exhibited a correlation with PLT and PLT indices in the N-BMI group. HOMA-IR showed significant correlations both with PLT and PCT in the MO group. All three IR indices were well correlated with each other in all groups. These findings point to the missing link between IR and PLT activation. In conclusion, PLT and PCT may be related to IR, in addition to their identities as hemostasis markers, during morbid obesity. Our findings suggest that the DONMA-lab index appears to be the best surrogate marker for IR due to its ability to discriminate between morbid obesity and MetS.
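The group comparison described here, one-way ANOVA followed by a Tukey post hoc test, starts from a ratio of mean squares. A minimal pure-Python sketch of the F statistic with toy data (not the study’s measurements):

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k = len(groups)            # number of groups
    n = len(all_vals)          # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Three toy groups standing in for N-BMI, MO and MetS measurements:
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
```

In practice the F statistic is compared against the F(k-1, n-k) distribution for the p-value, and only if it is significant does the Tukey test localize which pairs of groups differ.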

Keywords: children, insulin resistance, metabolic syndrome, plateletcrit, platelet indices

Procedia PDF Downloads 85
83 Characterization of a Newfound Manganese Tungstate Mineral of Hübnerite in Turquoise Gemstone from Miduk Mine, Kerman, Iran

Authors: Zahra Soleimani Rad, Fariborz Masoudi, Shirin Tondkar

Abstract:

Turquoise is one of the most well-known gemstones in Iran. The mineralogy, crystallography, and gemology of Shahr-e-Babak turquoise in Kerman were investigated, and the results are presented in this research. The Miduk porphyry copper deposit is positioned in the Shahr-Babak area in Kerman province, Iran, located 85 km NW of the Sar-Cheshmeh porphyry copper deposit. Preliminary mineral exploration was carried out from 1967 to 1970. So far, more than fifty diamond drill holes, each reaching a maximum depth of 1013 meters, have provided evidence supporting the presence of significant and promising porphyry copper mineralization at the Miduk deposit. The deposit harbors 170 million metric tons of ore with a mean grade of 0.86% copper (Cu), 0.007% molybdenum (Mo), 82 parts per billion gold (Au), and 1.8 parts per million silver (Ag). The supergene enrichment layer, which constitutes the predominant source of copper ore, has an approximate thickness of 50 meters. Petrography shows that the texture is homogeneous. As a gemstone, greasy luster and blue color are seen, and the samples are similar to what is commonly known as turquoise. The constituent minerals were detected by XRD analysis, with the data processed using the X’Pert software. From the mineralogical point of view, the turquoise gemstones of Miduk of Kerman consist of turquoise, quartz, mica, and hübnerite. In this article, to the best of our knowledge for the first time, we report the hübnerite mineral identified in Persian turquoise. Based on the obtained spectra, the main mineral of the Miduk samples, among the six members of the turquoise family, is the turquoise type, with identical peaks that can be used as a reference for identification of Miduk turquoise. This mineral is structurally composed of phosphate units, units of Al, Cu, water, and hydroxyl units, and does not include an Fe unit. 
In terms of gemology, the quality of a gemstone depends on the quantity of the turquoise phase and the amount of Cu in it, according to SEM and XRD analyses.

Keywords: turquoise, hübnerite, XRD analysis, Miduk, Kerman, Iran

Procedia PDF Downloads 43
82 Unsupervised Learning and Similarity Comparison of Water Mass Characteristics with Gaussian Mixture Model for Visualizing Ocean Data

Authors: Jian-Heng Wu, Bor-Shen Lin

Abstract:

The temperature-salinity relationship is one of the most important characteristics used for identifying water masses in marine research. Temperature-salinity characteristics, however, may change dynamically with respect to geographic location and are quite sensitive to depth at the same location. When depth is taken into consideration, it is not easy to compare the characteristics of different water masses efficiently across a wide range of ocean areas. In this paper, the Gaussian mixture model is proposed to analyze the temperature-salinity-depth characteristics of water masses, based on which comparison between water masses may be conducted. A Gaussian mixture model can represent the distribution of a random vector and is formulated as the weighted sum of a set of multivariate normal distributions. The temperature-salinity-depth data for different locations are first used to train a set of Gaussian mixture models individually. The distance between two Gaussian mixture models can then be defined as the weighted sum of pairwise Bhattacharyya distances among the Gaussian components. Consequently, the distance between two water masses may be measured quickly, which allows the automatic and efficient comparison of water masses over a wide area. The proposed approach not only can approximate the distribution of temperature, salinity, and depth directly, without prior knowledge of an assumed regression family, but can also restrict the complexity by controlling the number of mixtures when the samples are unevenly distributed. In addition, it is critical for knowledge discovery in marine research to represent, manage, and share temperature-salinity-depth characteristics flexibly and responsively. The proposed approach has been applied to a real-time visualization system of ocean data, which may facilitate the comparison of water masses by aggregating the data without degrading the discriminating capabilities. 
This system provides an interface for querying geographic locations with similar temperature-salinity-depth characteristics interactively and for tracking specific patterns of water masses, such as the Kuroshio near Taiwan or those in the South China Sea.
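The model-to-model distance described above can be sketched directly: the Bhattacharyya distance between every pair of Gaussian components, weighted by the component weights. A minimal version with toy single-component "water mass" models (the variable names and diagonal covariances are illustrative assumptions, not the paper's trained models):

```python
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(
        np.linalg.det(cov) / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))
    )
    return term1 + term2

def gmm_distance(weights1, comps1, weights2, comps2):
    """Distance between two GMMs as the weighted sum of pairwise
    Bhattacharyya distances among their components."""
    return sum(
        w1 * w2 * bhattacharyya(m1, c1, m2, c2)
        for w1, (m1, c1) in zip(weights1, comps1)
        for w2, (m2, c2) in zip(weights2, comps2)
    )

# Two single-component models in (temperature, salinity, depth) space:
a = [(np.array([15.0, 34.5, 100.0]), np.diag([1.0, 0.1, 50.0]))]
b = [(np.array([15.0, 34.5, 100.0]), np.diag([1.0, 0.1, 50.0]))]
c = [(np.array([10.0, 35.0, 500.0]), np.diag([1.0, 0.1, 50.0]))]

d_same = gmm_distance([1.0], a, [1.0], b)   # identical models
d_diff = gmm_distance([1.0], a, [1.0], c)   # distinct water masses
```

Because each pairwise term is a closed-form expression in the fitted means and covariances, comparing two locations costs only O(number of components squared), which is what makes the wide-area comparison fast.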

Keywords: water mass, Gaussian mixture model, data visualization, system framework

Procedia PDF Downloads 121
81 Impact of Communist Policy on Religious Identity in the Pogradec District, Albania

Authors: Gjergji Buzo

Abstract:

This paper presents the communist policy toward tangible and intangible religious heritage in the Pogradec District, Albania. The district of Pogradec lies in the southeast of Albania and consists of the municipality, located on the shore of Ohrid Lake, and 7 Administrative Units, with a population of about 61,530 inhabitants. According to statistical data provided by the Institute of Statistics, the city of Pogradec is 55.9% Muslim, 19.9% Orthodox, 1.4% Catholic and 1.1% Bektashi, while religious affiliation in the Administrative Units is as follows: Muslim 72.1%, Orthodox 3.32%, Catholic 1.18%, Bektashi 0.2%. The percentages are approximate values, taking into consideration that 13.8% of the total population preferred not to answer the question on religion and that for 2.4% of the persons who answered, the information provided was not relevant or stated. The percentage of persons who declared themselves believers without belonging to any religion was 5.5%, and of persons who declared themselves non-believers not belonging to any religion, 2.5%. The percentage of persons who declared themselves evangelists was 0.1%, and of those declared as "other Christians", 0.1%. About 80% of the population believe in God, and most of them practice one of the monotheist religions. We have divided religious practice into three major periods: the first, until 1967, when the different religions were practiced in Pogradec in harmony with each other; the second, 1967-1990, during which the practice of religion was prohibited; and the period after 1990, when religious freedom was restored. This article focuses on the communist period 1967-1990, when Albania (and Pogradec as part of it) became the only atheist country in the world. The object of the study is the impact of these policies on spiritual and material religious identity. 
The communist regime destroyed or transformed religious buildings, whether Islamic or Christian, and prohibited the practice of religious rituals in Albania. It pursued an education policy of atheistic spirituality among young people, characterizing religion as opium for the people. All this left traces on the people and brought a deformation of religious identity. In order to better understand the reality of that time and how this policy was experienced by the people, we conducted a survey in the Pogradec District with the participation of 1000 people.

Keywords: communism policy, heritage, identity, religion, statistics, survey

Procedia PDF Downloads 50
80 Simulation of Turbulent Flow in Channel Using Generalized Hydrodynamic Equations

Authors: Alex Fedoseyev

Abstract:

This study explores the Generalized Hydrodynamic Equations (GHE) for the simulation of turbulent flows. The GHE were derived from the Generalized Boltzmann Equation (GBE) by Alexeev (1994). The GBE was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions (Alexeev, 1994). The GHE have new terms, temporal and spatial fluctuations, compared to the Navier-Stokes equations (NSE). These new terms have a timescale multiplier τ, and the GHE become the NSE when τ is zero. The nondimensional τ is a product of the Reynolds number and the squared length-scale ratio, τ=Re*(l/L)², where l is the apparent Kolmogorov length scale and L is a hydrodynamic length scale. The turbulence phenomenon is not well understood and is not described by the NSE; one or two additional equations are required for a turbulence model, which may have to be tuned for specific problems. We show that, in the case of the GHE, no additional turbulence model is needed, and the turbulent velocity profile is obtained from the GHE. The 2D turbulent channel and circular pipe flows were investigated using a numerical solution of the GHE for several cases. The solutions are compared with the experimental data for circular pipes and 2D channels by Nikuradse (1932, Prandtl Lab), Hussain and Reynolds (1975), Wei and Willmarth (1989), and Van Doorne (2007), with the theory of Wosnik, Castillo and George (2000), and with the relevant experiments on the Superpipe setup at Princeton, data by Zagarola (1996) and Zagarola and Smits (1998), with the Reynolds number ranging from Re=7200 to Re=960000. The numerical solution data compared well with the experimental data, as well as with the approximate analytical solution for turbulent flow in a channel by Fedoseyev (2023). The obtained results confirm that the Alexeev generalized hydrodynamic theory (GHE) is in good agreement with the experiments for turbulent flows. The proposed approach is limited to 2D and 3D axisymmetric channel geometries. 
Further work will extend this approach to channels with square and rectangular cross-sections.
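The abstract's nondimensional multiplier is a single product, τ = Re·(l/L)², and the GHE reduce to the NSE when it vanishes. A quick numeric illustration (the length-scale ratio below is an assumed example value, not one from the paper's simulations):

```python
def ghe_tau(reynolds, l_over_L):
    """Nondimensional timescale multiplier of the GHE fluctuation terms:
        tau = Re * (l/L)**2.
    tau = 0 recovers the Navier-Stokes equations."""
    return reynolds * l_over_L ** 2

# Evaluated at the two ends of the Reynolds range quoted in the abstract,
# with an assumed length-scale ratio l/L = 0.01:
tau_low = ghe_tau(7200, 0.01)
tau_high = ghe_tau(960000, 0.01)
```

The quadratic dependence on l/L means the extra terms are negligible when the Kolmogorov scale is far below the hydrodynamic scale, and grow with Reynolds number at a fixed ratio.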

Keywords: comparison with experimental data, generalized hydrodynamic equations, numerical solution, turbulent boundary layer, turbulent flow in channel

Procedia PDF Downloads 43
79 Interactive Glare Visualization Model for an Architectural Space

Authors: Florina Dutt, Subhajit Das, Matthew Swartz

Abstract:

Lighting design and its impact on indoor comfort conditions are an integral part of good interior design. The impact of lighting in an interior space is manifold, involving many sub-components such as glare, color, tone, luminance, control, energy efficiency, and flexibility. While other components have been researched and discussed many times, this paper discusses research done to understand the glare component from an artificial lighting source in an indoor space, and consequently presents a parametric model that conveys the real-time glare level in an interior space to the designer/architect. Our end users are architects, and for them it is of utmost importance to know what impression the proposed lighting arrangement and proposed furniture layout will have on indoor comfort quality, especially for those furniture elements (or surfaces) that strongly reflect light around the space. Essentially, the designer needs to know the ramifications of discomfort glare at an early stage of the design cycle, when he can still afford to make changes to the proposed design and consider different solution routes for the client. Unfortunately, most existing lighting analysis tools offer rigorous computation and analysis on the back end, making it challenging for the designer to analyze and assess glare from interior lighting quickly; moreover, many of them do not focus on the glare aspect of artificial light. That is why, in this paper, we explain a novel approach to approximating interior glare data. In addition, we visualize this data in a color-coded format, expressing the implications of the proposed interior design layout. We focus on making this analysis process computationally fast and fluid, enabling complete user interaction with the capability to vary different ranges of user inputs, adding more degrees of freedom for the user. 
We test our proposed parametric model on a case study, a computer lab space in our college facility.
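For context, one standard discomfort-glare metric is the CIE Unified Glare Rating (UGR); the abstract does not state which metric the authors' parametric model approximates, so the sketch below is illustrative only, with made-up luminance values:

```python
import math

def ugr(background_luminance, sources):
    """CIE Unified Glare Rating:
        UGR = 8 * log10( (0.25 / Lb) * sum(L**2 * omega / p**2) )
    sources: iterable of (L, omega, p) = (source luminance in cd/m2,
    solid angle in sr, Guth position index)."""
    s = sum(L ** 2 * omega / p ** 2 for L, omega, p in sources)
    return 8.0 * math.log10(0.25 / background_luminance * s)

# One luminaire of 5000 cd/m2 subtending 0.01 sr near the line of sight
# (position index p = 1), against a 50 cd/m2 background:
g = ugr(50.0, [(5000.0, 0.01, 1.0)])
g_brighter = ugr(50.0, [(10000.0, 0.01, 1.0)])
```

Metrics of this shape are cheap to evaluate per viewpoint, which is what makes the real-time, color-coded visualization described in the abstract feasible.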

Keywords: computational geometry, glare impact in interior space, info visualization, parametric lighting analysis

Procedia PDF Downloads 331
78 Autophagy Suppresses Bladder Tumor Formation in a Mouse Orthotopic Bladder Tumor Formation Model

Authors: Wan-Ting Kuo, Yi-Wen Liu, Hsiao-Sheng Liu

Abstract:

The annual incidence of bladder cancer is increasing worldwide, and the disease occurs more frequently in males. The most common type is transitional cell carcinoma (TCC), which is treated by transurethral resection followed by intravesical administration of agents. In the clinical treatment of bladder cancer, chemotherapeutic drug-induced apoptosis is commonly used in patients. However, cancers usually develop resistance to chemotherapeutic drugs, often leading to aggressive tumors with worse clinical outcomes. Approximately 70% of TCCs recur, and 30% of recurrent tumors progress to high-grade invasive tumors, indicating that new therapeutic agents are urgently needed to improve the success rate of overall treatment. Nonapoptotic programmed cell death may help overcome these worse clinical outcomes. Autophagy, one of the nonapoptotic pathways, provides another option for bladder cancer patients and has been reported as a potent anticancer therapy in some cancers. First, we established a mouse orthotopic bladder tumor formation model in order to create a similar tumor microenvironment. An IVIS system and micro-ultrasound were utilized to noninvasively monitor tumor formation. In addition, we carried out intravesical treatment in our animal model to be consistent with human clinical treatment. In our study, we carried out intravesical instillation of an autophagy inducer in mouse orthotopic bladder tumors and observed tumor formation with the noninvasive IVIS system and micro-ultrasound. Our results showed that bladder tumor formation is suppressed by the autophagy inducer, with no significant side effects on the physiology of the mice. Furthermore, the upregulation of autophagy by the inducer in the bladder tissues of the treated mice was confirmed by Western blot, immunohistochemistry, and immunofluorescence. 
In conclusion, we show that a novel autophagy inducer with low side effects suppresses bladder tumor formation in our mouse orthotopic bladder tumor model, providing another therapeutic approach for bladder cancer patients.

Keywords: bladder cancer, transitional cell carcinoma, orthotopic bladder tumor formation model, autophagy

Procedia PDF Downloads 157
77 Study on an Accurate Calculation Method of Model Attitude in Wind Tunnel Tests

Authors: Jinjun Jiang, Lianzhong Chen, Rui Xu

Abstract:

The accuracy of the model attitude angle plays an important role in the aerodynamic results of wind tunnel tests. The original method applies a spherical coordinate system transformation to calculate the attitude angle: the model attitude angle is obtained by coordinate transformation and spherical surface mapping, applying the nominal attitude angle (the balance attitude angle in the wind tunnel coordinate system) indicated by the mechanism. First, the coordinate transformation in this method is not only complex but also makes it difficult to establish the relationship between the space coordinate systems, especially after many steps of transformation; moreover, it cannot realize iterative calculation of the interference between attitude angles. Second, during the calculation the arc is approximately replaced by a straight line, the angle by its tangent value, and the inverse trigonometric function is applied. The attitude-angle calculation is therefore complex and inaccurate, and the approximation holds only for small angles of attack.
However, with the development of modern unsteady aerodynamics research, aircraft tend toward high or very large angles of attack and unsteady flight regimes. Based on engineering practice and vector theory, the concept of a vector angle coordinate system is proposed for the first time, and the vector angle coordinate system of the attitude angles is established. With iterative correction and by avoiding the approximate and inverse trigonometric function solutions, the model attitude calculation process is worked out in detail, validating that the accuracy of the calculated model attitude angles is improved. The vector angle coordinate system gives the transformation and angle definition relations between different flight attitude coordinate systems, so that the attitude angle of the corresponding coordinate system can be calculated accurately and its direction determined. In particular, in channel-coupling calculations, the attitude angle between coordinate systems depends only on the angle itself and not on the order of the coordinate system changes, which simplifies the calculation process.
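The small-angle limitation described in the abstract can be sketched numerically. The sketch below is illustrative only and is not the authors' method: the function names and the body-frame velocity components (u, v, w) are assumptions. It shows why replacing an angle by its tangent is acceptable at small angles of attack but diverges at the large angles relevant to unsteady testing, which is the motivation for an exact (inverse-trigonometric or vector-based) calculation.

```python
import math

def attitude_exact(u, v, w):
    """Exact attitude angles from body-frame velocity components:
    angle of attack alpha = atan2(w, u), sideslip beta = asin(v/|V|).
    No small-angle approximation is made."""
    speed = math.sqrt(u * u + v * v + w * w)
    alpha = math.atan2(w, u)      # exact angle of attack
    beta = math.asin(v / speed)   # exact sideslip angle
    return alpha, beta

def attitude_small_angle(u, v, w):
    """Small-angle approximation: the angle is replaced by its
    tangent (alpha ~ w/u) and sine (beta ~ v/|V|)."""
    speed = math.sqrt(u * u + v * v + w * w)
    return w / u, v / speed

# At a small attack angle the approximation is close to exact...
a_exact, _ = attitude_exact(100.0, 0.0, 5.0)
a_approx, _ = attitude_small_angle(100.0, 0.0, 5.0)
small_err = abs(a_exact - a_approx)   # a few 1e-5 rad

# ...but at a large attack angle (here atan(0.7), about 35 deg)
# the tangent substitution diverges noticeably.
a_exact2, _ = attitude_exact(100.0, 0.0, 70.0)
a_approx2, _ = attitude_small_angle(100.0, 0.0, 70.0)
large_err = abs(a_exact2 - a_approx2)  # roughly 0.09 rad
```

Note that `atan2` also resolves the sign and quadrant of the angle directly, which is one reason exact vector-based formulations remain well defined at high angles of attack where the tangent substitution breaks down.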

Keywords: attitude angle, vector angle coordinate system, iterative calculation, spherical coordinate system, wind tunnel test

Procedia PDF Downloads 102