Search results for: space velocity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5196

1116 Application of Bayesian Model Averaging and Geostatistical Output Perturbation to Generate Calibrated Ensemble Weather Forecast

Authors: Muhammad Luthfi, Sutikno Sutikno, Purhadi Purhadi

Abstract:

Weather forecasts need to be improved so that communities receive accurate and objective predictions. To reduce the subjectivity of forecasting, numerical weather prediction has been extensively developed. Yet the outputs of Numerical Weather Prediction (NWP) models are issued without taking dynamic weather behaviour and local terrain features into account. As a result, NWP outputs cannot forecast weather quantities accurately, particularly for medium- and long-range forecasts. The aim of this research is to aid and extend the development of ensemble forecasting for the Meteorology, Climatology, and Geophysics Agency of Indonesia. The ensemble method is an approach that combines various deterministic forecasts to produce a more reliable one. However, such a forecast is biased and uncalibrated because of its underdispersive or overdispersive nature. As a parametric method, Bayesian Model Averaging (BMA) generates a calibrated ensemble forecast and constructs a predictive PDF for a specified period. BMA can use an ensemble of any size but does not take spatial correlation into account, even though spatial dependencies between the site of interest and nearby sites are influenced by dynamic weather behaviour. Geostatistical Output Perturbation (GOP), by contrast, accounts for spatial correlation when generating future weather quantities; it is built from a single deterministic forecast and can also generate an ensemble of any size. This research applies both BMA and GOP to generate calibrated ensemble forecasts of daily temperature at several meteorological sites near Indonesia's international airport.
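
As an illustration of the BMA step described above, the minimal sketch below (Python; the member forecasts, bias-correction coefficients, weights, and variance are invented for illustration, not the authors' fitted values) builds the BMA predictive PDF for temperature as a weighted mixture of Gaussians centred on bias-corrected ensemble members.

import numpy as np
from scipy.stats import norm

# Hypothetical ensemble member forecasts of daily temperature (deg C)
members = np.array([27.1, 28.4, 26.7])
a, b = 0.5, 0.98                 # assumed bias-correction coefficients (a + b * forecast)
w = np.array([0.5, 0.3, 0.2])    # BMA weights (estimated by EM in practice)
sigma = 1.2                      # common member standard deviation (also estimated)

corrected = a + b * members      # bias-corrected member means

def bma_pdf(t):
    """BMA predictive density: mixture of Gaussians around corrected members."""
    return np.sum(w * norm.pdf(t, loc=corrected, scale=sigma))

grid = np.linspace(20, 35, 301)
density = np.array([bma_pdf(t) for t in grid])
print("BMA predictive mean (deg C):", round(float(np.sum(w * corrected)), 2))
print("mode of predictive PDF (deg C):", round(float(grid[density.argmax()]), 2))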

Keywords: Bayesian Model Averaging, ensemble forecast, geostatistical output perturbation, numerical weather prediction, temperature

Procedia PDF Downloads 277
1115 Subclinical Renal Damage Induced by High-Fat Diet in Young Rats

Authors: Larissa M. Vargas, Julia M. Sacchi, Renata O. Pereira, Lucas S. Asano, Iara C. Araújo, Patricia Fiorino, Vera Farah

Abstract:

The aim of this study was to evaluate the occurrence of subclinical organ injury induced by a high-fat diet. Male Wistar rats (n = 5/group) were divided into a control diet group (CD), fed commercial rat chow, and a high-fat diet (30% lipids) group (HD), with the diets administered for 8 weeks starting after weaning. All procedures followed the rules of the Committee of Research and Ethics of Mackenzie University (CEUA Nº 077/03/2011). At the end of the protocol, the animals were euthanized by anesthesia overload and the left kidney was removed. Intrarenal lipid deposition was evaluated by histological analysis with oil red staining. Kidney slices were stained with picrosirius red to evaluate the areas of Bowman's capsule (AB) and space (SB) and the glomerular tuft area (GT). Renal expression of sterol regulatory element-binding protein 2 (SREBP-2) was assessed by Western blotting. Creatinine concentration (serum and urine) and the lipid profile were determined with a colorimetric kit (Labtest). At the end of the protocol there was no difference in body weight between the groups; however, the HD group showed a marked increase in lipid deposits in glomeruli and tubules and in biochemical measures of cholesterol and triglycerides. Moreover, in the kidney, the high-fat diet induced reductions in AB (13%), GT (18%), and SB (17%) associated with a reduction in glomerular filtration rate (creatinine clearance). Renal SREBP-2 expression was increased in the HD group. These data suggest that consumption of a high-fat diet starting in childhood is associated with subclinical damage to renal structure and function.

Keywords: high-fat diet, kidney, intrarenal lipid deposition, SREBP-2

Procedia PDF Downloads 297
1114 Disrupting Patriarchy: Transforming Gender Oppression through Dialogue between Women and Men at a South African University

Authors: S. van Schalkwyk

Abstract:

At international level and across disciplines, gender scholars have argued that patriarchal scripts of masculinity and femininity are harmful, as they negatively shape constructions of selfhood and relations between women and men. Patriarchal ideologies serve as scaffolding for dominance and subordination and fuel violence against women. Toxic masculinity (social discourses of men as violent, unemotional, and sexually dominant) is embedded in South African culture and underlies the country's high rates of gender violence. Finding strategies that open up space for the interrogation of toxic masculinity is crucial in order to disrupt the destructive consequences of patriarchy in educational and social contexts. The University of the Free State (UFS) in South Africa, in collaboration with the non-profit organization Gender Reconciliation International, conducted a year-long series of workshops with male and female students. The aim of these workshops was to facilitate healing between men and women through collective dialogue processes. Drawing on a collective biography methodology outlined by feminist poststructuralists, this paper explores the impact of these workshops on gender relations. Findings show that the students experienced significant psychological connections with others during these dialogues, through which they began to interrogate their own gendered conditioning and harmful patriarchal assumptions and practices. This paper enhances insight into the possibilities for disrupting patriarchy in South African universities through feminist collective research efforts.

Keywords: collective biography methodology, South Africa, toxic masculinity, transforming gender oppression, violence against women

Procedia PDF Downloads 479
1113 AER Model: An Integrated Artificial Society Modeling Method for Cloud Manufacturing Service Economic System

Authors: Deyu Zhou, Xiao Xue, Lizhen Cui

Abstract:

With increasing collaboration among services and the growing complexity of user demands, more and more factors affect the stable development of the cloud manufacturing service economic system (CMSE). This poses new challenges for analyzing the evolution of the CMSE. Many researchers have modeled and analyzed the evolution of the CMSE from the perspectives of individual learning and internal factors influencing the system, but without considering other important characteristics of the system's individuals (such as heterogeneity and bounded rationality) or the impact of external environmental factors. This paper therefore proposes an integrated artificial society model for the cloud manufacturing service economic system that considers both the characteristics of the system's individuals and the internal and external factors influencing the system. The model consists of three parts: the agent model, the environment model, and the rules model (Agent-Environment-Rules, AER): (1) the agent model captures important features of the individuals, such as heterogeneity and bounded rationality, through the adaptive behavior mechanisms of perception, action, and decision-making; (2) the environment model describes the activity space of the individuals (real or virtual); (3) the rules model, as the driving force of system evolution, describes the mechanism by which the entire system operates and evolves. Finally, the paper verifies the effectiveness of the AER model through the results of computational experiments.
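
To make the Agent-Environment-Rules decomposition concrete, here is a minimal agent-based sketch (Python; the provider agents, the market environment, and the adjustment rule are illustrative placeholders, not the authors' model): heterogeneous, boundedly rational service providers adjust prices through a simple perception-decision-action cycle inside a shared environment, driven by one evolution rule.

import random

class Agent:                      # heterogeneous, boundedly rational provider
    def __init__(self, cost):
        self.cost = cost          # heterogeneity: each agent has its own cost
        self.price = cost * 1.5

    def step(self, market_price):
        # perception -> decision -> action: move the price toward the market,
        # but only by a bounded fraction (bounded rationality)
        self.price += 0.1 * (market_price - self.price)
        self.price = max(self.price, self.cost)

class Environment:                # activity space shared by all agents
    def market_price(self, agents):
        return sum(a.price for a in agents) / len(agents)

def rules(agents, env, steps=50): # rules model: drives the system's evolution
    for _ in range(steps):
        p = env.market_price(agents)
        for a in agents:
            a.step(p)
    return env.market_price(agents)

agents = [Agent(cost=random.uniform(1.0, 3.0)) for _ in range(20)]
print("equilibrium market price:", round(rules(agents, Environment()), 3))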

Keywords: cloud manufacturing service economic system (CMSE), AER model, artificial social modeling, integrated framework, computing experiment, agent-based modeling, social networks

Procedia PDF Downloads 78
1112 Removal of Na₂SO₄ by Electro-Confinement on Nanoporous Carbon Membrane

Authors: Jing Ma, Guotong Qin

Abstract:

We report electro-confinement desalination (ECMD), a desalination method combining electric field effects and confinement effects, using a nanoporous carbon membrane as the electrode. A carbon membrane with an average pore size of 8.3 nm was prepared by an organic sol-gel method. The support precursor was prepared by curing a porous phenolic resin tube. A resorcinol-formaldehyde sol was coated on the porous tubular resin support, and the membrane was obtained by carbonisation of the coated support. A well-adhered top layer with a thickness of 35 μm was supported by the macroporous support. Molecular weight cut-off measurements using polyethylene glycol gave the average pore size of 8.3 nm. High salt rejection can be achieved because water molecules need not overcome high energy barriers in the confined space, whereas a large inherent dehydration energy is required for hydrated ions to enter the nanochannels. In addition, a carbon membrane with an applied electric field can be used as an integrated membrane electrode combining the effects of confinement and the electric potential gradient. Such a membrane electrode can repel co-ions and attract counter-ions while using pressure as the driving force for mass transport. When the carbon membrane was set as the cathode, the rejection of SO₄²⁻ was 94.89%, while the removal of Na⁺ was less than 20%. The carbon membrane was then used as the anode to treat the effluent water from the cathode chamber; the rejections of SO₄²⁻ and Na⁺ reached 100% and 88.86%, respectively. ECMD promises to be an energy-efficient method for salt rejection.

Keywords: nanoporous carbon membrane, confinement effect, electric field, desalination, membrane reactor

Procedia PDF Downloads 124
1111 Integration of GIS with Remote Sensing and GPS for Disaster Mitigation

Authors: Sikander Nawaz Khan

Abstract:

Natural disasters such as floods, earthquakes, cyclones, and volcanic eruptions cause immense losses of property and lives every year. The current status of natural hazards and actual loss information can be determined, and predictions of probable future disasters made, using different remote sensing and mapping technologies. The Global Positioning System (GPS) calculates the exact position of damage and can also communicate with wireless sensor nodes embedded in potentially dangerous places. GPS provides emergency responders with precise and accurate locations and related information such as the speed, track, direction, and distance of a target object. Remote sensing makes it possible to map damage without physical contact with the target area, and with the addition of more remote sensing satellites and other advancements, early warning systems are now used very efficiently. Remote sensing is applied at both local and global scales: high-resolution satellite imagery (HRSI), airborne remote sensing, and space-borne remote sensing all play a vital role in disaster management. Early on, Geographic Information Systems (GIS) were used to collect, arrange, and map spatial information, but GIS now has the capability to analyze spatial data. This analytical ability is the main reason for its adoption by emergency service providers such as police and ambulance services. The full potential of these so-called 3S technologies cannot be realized by using any of them alone; integrating GPS and other remote sensing techniques with GIS has opened new horizons in modeling earth science activities. Several remote sensing cases, including the Indian Ocean tsunami in 2004, the Mount Mangart landslides, and the Pakistan-India earthquake in 2005, are described in this paper.

Keywords: disaster mitigation, GIS, GPS, remote sensing

Procedia PDF Downloads 479
1110 Analyzing the Ergonomic Design of Manual Material Handling in Chemical Industry: Case Study of Activity Task Weigh Liquid Catalyst to the Container Storage

Authors: Yayan Harry Yadi, L. Meily Kurniawidjaja

Abstract:

Manual material handling (MMH) activities at the liquid catalyst raw material storage workstations of chemical plants were identified as carrying a high risk of musculoskeletal disorders (MSDs). The work is performed frequently and often requires awkward body postures, twisting, and bending because of limited physical space, cold and slippery conditions, and limited tools for transferring containers and weighing the liquid catalyst into the storage container. This study aims to develop an ergonomic work system design for the transfer and weighing of liquid catalyst raw materials at the storage warehouse. A triangulation method was used, combining interviews, observation, and a detailed team study, to assess the risk level of working postures and workers' complaints. Work postures were analyzed using the RULA method, supported by CATIA software. The study concludes that ergonomic redesign can reduce the awkward-posture risk score by three levels. CATIA software simulation provided a comprehensive solution for better posture in manual material handling during the weighing task. Adding manual material handling aids such as adjustable conveyors and trolleys, and modifying the weighing equipment into semi-mechanical tools based on ergonomic design rules, can also reduce the hazard of chemical fluid spills.

Keywords: ergonomic design, MSDs, CATIA software, RULA, chemical industry

Procedia PDF Downloads 162
1109 Novel Urban Regulation Panorama in Latin America

Authors: Yeimis Milton, Palomino Pichihua

Abstract:

The city, like a living organism, originates from codes: structured information in the form of rules that condition the physical form and performance of urban space. Usually, these so-called urban codes clash with the spontaneous nature of the city, with the urban kháos that contextualizes the free creation (poiesis) of human collectives. This contradiction is especially evident in Latin America, which, like other developing regions, lacks adequate instruments to guide urban growth and thus builds a hybrid of the formal and informal city, categories that are difficult to separate from one another. This is a comparative study of the urban codes created to address the pandemic; the objective is to build an overview of these innovations in the region. The sample consists of official norms published during the pandemic that are directly linked to urban planning and building control (urban form). The countries analyzed are Brazil, Mexico, Argentina, Peru, Colombia, and Chile. The study uncovers a shared interest in facing future urban problems, in contrast to the inconsistency of the proposed legal instruments: factors such as lack of articulation, limited validity periods, and ambiguity, among others, accentuate this problem. It also shows that the political situation of each country has a significant influence on the development of these norms and on the possibility of their having a long-term impact. In summary, the global emergency has produced opportunities to transform urban systems through their internal rules; however, there are very few successful examples in this field. Latin American cities therefore face the task of learning from this defeat in order to lay the foundations for a more resilient and sustainable urban future.

Keywords: pandemic, regulation, urban planning, Latin America

Procedia PDF Downloads 99
1108 Clouds Influence on Atmospheric Ozone from GOME-2 Satellite Measurements

Authors: S. M. Samkeyat Shohan

Abstract:

This study focuses on the determination and analysis of the photolysis rate of atmospheric, specifically tropospheric, ozone as a function of cloud properties throughout the year 2007. The observational basis for the ozone concentrations and cloud properties is the measurement data set of the Global Ozone Monitoring Experiment-2 (GOME-2) sensor on board the polar-orbiting Metop-A satellite. Two different spectral ranges are used: the ozone total column is calculated from the 325-335 nm wavelength window, while cloud properties such as cloud top height (CTH) and cloud optical thickness (COT) are derived from the absorption band of molecular oxygen centred at 761 nm. Cloud fraction (CF) is derived from measurements in the ultraviolet, visible, and near-infrared ranges of GOME-2. First, ozone concentrations above clouds are derived from the ozone total columns by subtracting the contribution of stratospheric ozone and filtering out satellite measurements with thin and low clouds. Then the photolysis values derived from the observations are compared with theoretical model results, in the latitudinal belts 5°N-5°S and 20°N-20°S, as a function of CF and COT. In general, good agreement is found between the data and the model, confirming both the quality of the space-borne ozone and cloud products and the modeling theory of the ozone photolysis rate; the remaining discrepancies can, however, amount to approximately 15%. Latitudinal seasonal changes in the ozone photolysis rate are found to be negatively correlated with changes in upper-tropospheric ozone concentrations only in the autumn and summer months within the northern and southern tropical belts, respectively. This points to the entangled roles of temperature and nitrogen oxides in ozone production, which are superimposed on the photolysis induced by thick and high clouds in the tropics.
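
A minimal numerical sketch of the above-cloud ozone selection step (Python; the column values, the stratospheric contribution, and the cloud thresholds are invented for illustration and are not the study's values): keep only pixels with thick, high clouds and subtract the stratospheric column from the total column.

import numpy as np

# Hypothetical GOME-2 pixels: total ozone column (DU), stratospheric column (DU),
# cloud fraction (CF), cloud optical thickness (COT), cloud top height (km)
total_o3 = np.array([285.0, 292.0, 301.0, 278.0])
strat_o3 = np.array([255.0, 260.0, 262.0, 250.0])
cf       = np.array([0.95, 0.40, 0.90, 0.85])
cot      = np.array([30.0, 5.0, 42.0, 18.0])
cth_km   = np.array([9.0, 2.0, 11.0, 8.0])

# Filter out thin and low clouds (thresholds are assumptions, not the paper's values)
mask = (cf > 0.8) & (cot > 10.0) & (cth_km > 6.0)

above_cloud_o3 = total_o3[mask] - strat_o3[mask]   # ozone above the cloud deck
print("selected pixels:", np.where(mask)[0])
print("above-cloud ozone columns (DU):", above_cloud_o3)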

Keywords: cloud properties, photolysis rate, stratospheric ozone, tropospheric ozone

Procedia PDF Downloads 211
1107 Inversion of the Spectral Analysis of Surface Waves Dispersion Curves through the Particle Swarm Optimization Algorithm

Authors: A. Cerrato Casado, C. Guigou, P. Jean

Abstract:

In this investigation, the particle swarm optimization (PSO) algorithm is used to perform the inversion of dispersion curves in the spectral analysis of surface waves (SASW) method. This inverse problem usually presents complicated solution spaces with many local minima, which make convergence to the correct solution difficult. PSO is a metaheuristic method originally designed to simulate social behavior that has demonstrated powerful capabilities for solving inverse problems with complex solution spaces and a high number of variables. The dispersion curves of the synthetic soils are constructed by the vertical flexibility coefficient method, which is especially convenient for soils whose stiffness does not increase gradually with depth; such soil profiles are not normally dispersive, since the dominant mode of the Rayleigh waves usually does not coincide with the fundamental mode. Multiple synthetic soil profiles have been tested to show the characteristics of the convergence process and to assess the accuracy of the final soil profile. In addition, the inversion procedure is applied to multiple real soils and the final profiles are compared with the available information. The combination of the vertical flexibility coefficient method to obtain the dispersion curve and the PSO algorithm to carry out the inversion proves to be a robust procedure that provides good solutions for complex soil profiles even with scarce prior information.
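
The sketch below shows the kind of PSO loop used for such an inversion (Python; the one-layer "forward model" and the target dispersion curve are toy stand-ins for the vertical flexibility coefficient method, so only the optimization structure is representative).

import numpy as np

rng = np.random.default_rng(0)
freqs = np.linspace(5, 50, 20)                    # Hz

def forward(params, f):
    """Toy dispersion model: phase velocity from (v_layer, v_halfspace, thickness)."""
    v1, v2, h = params
    return v2 + (v1 - v2) * np.exp(-f * h / v1)   # smooth transition, illustration only

target = forward(np.array([180.0, 420.0, 6.0]), freqs)   # synthetic "observed" curve

def misfit(p):
    return np.sum((forward(p, freqs) - target) ** 2)

# Particle swarm optimization over (v1, v2, h)
lo, hi = np.array([100, 200, 1]), np.array([400, 800, 20])
n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
x = rng.uniform(lo, hi, (n, 3)); v = np.zeros((n, 3))
pbest = x.copy(); pbest_f = np.array([misfit(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 3)), rng.random((n, 3))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([misfit(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("recovered (v1, v2, h):", np.round(gbest, 2))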

Keywords: dispersion, inverse problem, particle swarm optimization, SASW, soil profile

Procedia PDF Downloads 184
1106 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem

Authors: Ouafa Amira, Jiangshe Zhang

Abstract:

Clustering is an unsupervised machine learning technique; its aim is to extract the structure of the data, grouping similar data objects in the same cluster and dissimilar objects in different clusters. Clustering methods are widely used in fields such as image processing, computer vision, and pattern recognition. Fuzzy c-means (FCM) clustering is one of the best-known fuzzy clustering methods. It is based on solving an optimization problem in which a given cost function is minimized; the minimization aims to decrease the dissimilarity inside clusters, where dissimilarity is measured by the distances between data objects and cluster centers. The degree to which a data point belongs to a cluster is measured by a membership function taking values in the interval [0, 1]. In FCM clustering, the membership degrees are constrained so that the sum of a data object's memberships over all clusters equals one. This constraint can cause several problems, especially when the data are noisy. Regularization has been applied to fuzzy c-means clustering: it introduces additional information in order to solve an ill-posed optimization problem. In this study, we focus on a relative entropy regularization approach, in which the optimization problem again minimizes the dissimilarity inside clusters. Our objective is to find an appropriate membership degree for each data object, because appropriate membership degrees lead to accurate clustering results. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that the proposed model achieves good accuracy.
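
A minimal sketch of an entropy-regularized fuzzy clustering update (Python, NumPy; the closed-form membership shown here is the standard result for entropy-type regularization and may differ in detail from the authors' relative-entropy formulation).

import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])
c, gamma, iters = 2, 1.0, 50                 # clusters, regularization weight, iterations
centers = X[rng.choice(len(X), c, replace=False)]

for _ in range(iters):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # squared distances
    # Membership update from entropy regularization: u_ij proportional to
    # exp(-d_ij^2 / gamma), normalized so each point's memberships sum to one.
    U = np.exp(-d2 / gamma)
    U /= U.sum(axis=1, keepdims=True)
    centers = (U.T @ X) / U.sum(axis=0)[:, None]                # weighted means

print("cluster centers:\n", np.round(centers, 2))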

Keywords: clustering, fuzzy c-means, regularization, relative entropy

Procedia PDF Downloads 258
1105 Analysis of Tourism Development Level and Research on Improvement Strategies: Taking Chongqing as an Example

Authors: Jiajun Lu, Yun Ma

Abstract:

As part of the tertiary sector, tourism is an important driver of urban economic development. Chongqing is a well-known tourist city in China: according to statistics, the added value of tourism and related industries reached 106.326 billion yuan in 2022, a year-on-year increase of 1.2%, accounting for 3.7% of the city's GDP. However, the overall level of tourism development in Chongqing is seriously unbalanced: the tourism strength of the main urban area is much higher than that of southeast Chongqing, northeast Chongqing, and the surrounding city tourism area, while the tourism strength of these other three regions is relatively balanced among themselves. Based on an estimation of tourism development level and the geographic detector method, this paper finds that A-level tourist attractions are the important factor affecting the tourism development level of Chongqing's non-main urban areas. Using GIS geospatial analysis and SPSS correlation analysis, the spatial distribution characteristics and influencing factors of A-level tourist attractions in Chongqing were quantitatively analyzed with data from the Geospatial Data Cloud, relevant documents of the Chongqing Municipal Commission of Culture and Tourism Development, the planning cloud, and relevant statistical yearbooks. The results show that: (1) the spatial distribution of tourist attractions in the non-main urban areas of Chongqing is agglomerated and uneven; (2) the spatial distribution of A-level tourist attractions in these areas is affected by ecological factors, with the degree of influence ordered as water factors > topographic factors > green space factors.
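
To illustrate the geographic detector used above, the sketch below (Python; the attraction counts and the factor strata are fabricated) computes the factor detector's q-statistic, q = 1 - (sum over strata h of N_h * sigma_h^2) / (N * sigma^2), which measures how much of the spatial variance of attractions a factor explains.

import numpy as np

# Hypothetical attraction density per spatial unit, and each unit's stratum
# under an ecological factor (e.g. distance-to-water class 0/1/2)
y      = np.array([8, 9, 7, 3, 2, 4, 1, 1, 2])
strata = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])

def q_statistic(y, strata):
    N, var_total = len(y), y.var()
    within = sum(len(y[strata == h]) * y[strata == h].var() for h in np.unique(strata))
    return 1.0 - within / (N * var_total)

print("q for this factor:", round(q_statistic(y, strata), 3))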

Keywords: tourist attractions, geographic detectors, quantitative research, ecological factors, GIS technology, SPSS analysis

Procedia PDF Downloads 5
1104 Synthesis, Characterization and Catecholase Study of Novel Bidentate Schiff Base Derived from Dehydroacetic Acid

Authors: Salima Tabti, Chaima Maouche, Tinhinene Louaileche, Amel Djedouani, Ismail Warad

Abstract:

A novel Schiff base ligand, HL, has been synthesized by condensation of an aromatic amine with dehydroacetic acid (DHA). It was characterized by UV-Vis, FT-IR, MS, and NMR (1H, 13C) spectroscopy and by single-crystal X-ray diffraction. The crystal structure shows that the compound crystallizes in the triclinic system, space group P-1, with two formula units per cell (Z = 2). The asymmetric unit contains one independent molecule; its conformation is fixed by an intramolecular N-H…O hydrogen bond forming an S(6) ring motif. The molecule has an (E) configuration about the C=N bond. The dihedral angle between the phenyl and pyran ring planes is 89.37(1)°, so the two planes are approximately perpendicular. The catecholase activity of in situ copper complexes of this ligand has been investigated against catechol. The progress of the oxidation reactions was monitored over time by following the strong absorption peak of catechol with UV-Vis spectroscopy. Oxidation rates were determined from the initial slopes of the absorbance-time plots and then analyzed with Michaelis-Menten kinetics. The catechol oxidation reactions were carried out with different concentrations of copper acetate and ligand (L/Cu: 1/1, 1/2, 2/1). The results show that all of the complexes were able to catalyze the oxidation of catechol, with the acetate complexes having the highest activity. Catalysis, a branch of chemical kinetics, studies the influence of physical and chemical factors on reaction rates; it addresses many problems in chemical processes, especially for greener, more economical, and less polluting chemistry. For this reason, the search for new catalysts for known organic reactions occupies a prominent place among the themes pursued by chemists.
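
A small sketch of the kinetic analysis step (Python, SciPy; the initial-rate data are invented): fit the Michaelis-Menten equation v = Vmax*[S]/(Km + [S]) to initial oxidation rates obtained from the slopes of the absorbance-time plots.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical catechol concentrations (mM) and initial oxidation rates (a.u./min)
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
v = np.array([0.09, 0.16, 0.26, 0.37, 0.45, 0.50])

def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

(Vmax, Km), _ = curve_fit(michaelis_menten, S, v, p0=[0.5, 2.0])
print(f"Vmax = {Vmax:.3f} a.u./min, Km = {Km:.3f} mM")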

Keywords: dehydroacetic acid, catechol, copper, catecholase activity, x-ray

Procedia PDF Downloads 108
1103 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

SPICE-based simulators are quite robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can handle a large number of encapsulated elements. Despite their great potential for analyzing quasi-static electromagnetic field interaction, that is, at low frequency, these simulators are limited when applied to microwave hybrid circuits containing both lumped and distributed elements. Usually, the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is chosen according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum time step allowed for that spatial discretization and the wave propagation velocity. This criterion guarantees the stability of the leapfrog scheme of the Yee algorithm; however, the stability of the complete FDTD procedure depends on more than the Yee algorithm alone, because an FDTD program needs additional algorithms in order to be useful for engineering problems. Examples are absorbing boundary conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors, and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that meet the needs of circuit simulation over ultra-wide frequency ranges. The models of the resistive source, resistor, capacitor, inductor, and diode, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, are evaluated through a parametric analysis of the size of the Yee cells that discretize the lumped components. The goal is to find an ideal cell size so that the FDTD analysis agrees as closely as possible with the expected circuit behavior while maintaining the stability conditions of the method. Based on the mathematical models and the theoretical basis of the required FDTD extensions, the models are implemented computationally in the Matlab® environment, with the Mur boundary condition used as the absorbing boundary. The models are validated by comparing the electric field values and component currents obtained with the FDTD method against analytical results based on circuit parameters.
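
A brief numerical sketch of the stability bookkeeping discussed above (Python; the candidate cell sizes are arbitrary examples): compute the 3-D Courant limit dt <= 1 / (c * sqrt(1/dx^2 + 1/dy^2 + 1/dz^2)), which ties the maximum time step to the Yee cell size used to discretize the lumped component.

import math

c = 299_792_458.0                      # free-space propagation velocity (m/s)

def courant_dt(dx, dy, dz, safety=0.99):
    """Maximum stable time step for a 3-D Yee grid (Courant limit)."""
    return safety / (c * math.sqrt(1/dx**2 + 1/dy**2 + 1/dz**2))

for cell in (1e-3, 0.5e-3, 0.1e-3):    # candidate cell sizes for the lumped element
    dt = courant_dt(cell, cell, cell)
    print(f"cell = {cell*1e3:.2f} mm  ->  dt_max ~ {dt*1e12:.3f} ps")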

Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis

Procedia PDF Downloads 151
1102 Increasing Sustainability Using the Potential of Urban Rivers in Developing Countries with a Biophilic Design Approach

Authors: Mohammad Reza Mohammadian, Dariush Sattarzadeh, Mir Mohammad Javad Poor Hadi Hosseini

Abstract:

Population growth, urban development, and urban build-up have disturbed the balance between nature and the city, leading to a loss of the quality and sustainability once provided by proximity to rivers, whereas in the past the banks of urban rivers were treated as urban green space. Urban rivers and their banks, which have environmental, social, and economic value, are important for achieving sustainable development, and efforts have been made at various scales in cities around the world to revitalize these areas. Biophilic design, meanwhile, is an innovative design approach in which attention to natural detail and the relationship with nature is a fundamental concept. The purpose of this study is to provide an integrated urban design framework that uses the potential of urban rivers (in order to increase sustainability) with a biophilic design approach, for use in the cities of developing countries. The research methodology is based on collecting data and information from research and projects, including studies of biophilic design, investigations and projects related to urban rivers, and a review of the literature on sustainable urban development; the study of urban river boundaries is then completed by examining case samples. Finally, an integrated urban design framework for the boundaries of urban rivers in developing-country cities is presented, taking into account the factors affecting the design of these areas. The results show that, under this framework, the potential of the river banks is used to increase not only environmental sustainability but also social, economic, and physical stability, with regard to water, light, the use of indigenous materials, and related considerations.

Keywords: urban rivers, biophilic design, urban sustainability, nature

Procedia PDF Downloads 287
1101 Effects of Planned Pre-laboratory Discussion on Physics Students’ Acquisition of Science Process Skills in Kontagora, Niger State

Authors: Akano Benedict Ubawuike

Abstract:

This study investigated the effects of pre-laboratory discussion on physics students' acquisition of science process skills. The study design was quasi-experimental, and a purposive sampling technique was applied to select two schools in Kontagora Town on the basis of the availability of a good physics laboratory. Intact classes, already grouped by the schools because of limited laboratory space and equipment, comprising thirty (30) students in total (15 in the experimental group in School A and 15 in the control group in School B), were the subjects of the research. The instruments used for data collection were the lesson prepared for the pre-practical discussion and a researcher-made Science Process Skills Test (SPST); two (2) research questions and two (2) research hypotheses were developed to guide the study. The data collected were analyzed using means and t-test statistics at the 0.05 level of significance. The study revealed that pre-laboratory discussion was more efficacious in enhancing students' acquisition of science process skills. It also revealed that gender had no significant effect on students' acquisition of science process skills. Based on the findings, it was recommended, among other things, that teachers should encourage students to develop an interest in practical activities by engaging them in pre-laboratory discussion and by providing instructional materials that challenge them to be actively involved during practical lessons. It is also recommended that Ministries of Education and professional organizations such as the Science Teachers' Association of Nigeria (STAN) organize workshops, seminars, and conferences for physics teachers, and that physics concepts be taught with practical activity so that students do science instead of merely learning about science.
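
For readers unfamiliar with the analysis mentioned above, a minimal sketch of the comparison of mean SPST scores (Python, SciPy; the scores are invented) is given below; an independent-samples t-test at the 0.05 level is the kind of test the abstract describes.

from scipy import stats

# Hypothetical SPST scores for the two intact classes
experimental = [28, 31, 27, 33, 30, 29, 32, 26, 34, 30, 28, 31, 29, 33, 27]
control      = [22, 25, 21, 24, 26, 23, 20, 25, 24, 22, 23, 21, 26, 24, 22]

t, p = stats.ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.4f}",
      "-> significant at 0.05" if p < 0.05 else "-> not significant")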

Keywords: physics, laboratory, discussion, students, acquisition, science process skills

Procedia PDF Downloads 130
1100 On the Effect of Carbon on the Efficiency of Titanium as a Hydrogen Storage Material

Authors: Ghazi R. Reda Mahmoud Reda

Abstract:

Among the metals that form hydrides, Mg and Ti are the most lightweight materials known; however, they are covered with a passive layer of oxides and hydroxides and require an activation treatment under high temperature (> 300 °C) and hydrogen pressure (> 3 MPa) before being used for storage and transport applications. It is well known that a small addition of graphite to Ti or Mg leads to a dramatic change in the kinetics of mechanically induced hydrogen sorption (uptake) and significantly stimulates the Ti-hydrogen interaction. Different authors have offered many explanations for the effect of graphite addition on the performance of Ti as a hydrogen storage material. Not only graphite but also the addition of a polycyclic aromatic compound improves the hydrogen absorption kinetics. It will be shown that the function of the carbon addition is two-fold. First, carbon acts as a scavenger ("vacuum cleaner") that removes the interstitial oxygen that can poison or slow down hydrogen absorption; it is also important to note that oxygen favors the chemisorption of hydrogen, which is not desirable for hydrogen storage. Second, while scavenging the interstitial oxygen, the carbon reacts with oxygen in the nano- and microchannels through a highly exothermic reaction to produce carbon dioxide and monoxide, which provide the necessary heat for activation; thus, in the presence of carbon, a lower heat of activation for hydrogen absorption is observed experimentally. Furthermore, the reaction of hydrogen with the carbon oxides produces water, which, owing to ball milling, hydrolyzes to produce the linear H₅O₂⁺; this reconstructs the primary structure of the nanocarbon into a secondary structure in which the primary structures (carbon sheets) are connected through hydrogen bonding. It is in the space between these sheets that physisorption or defect-mediated sorption occurs.

Keywords: metal forming hydrides, polar molecule impurities, titanium, phase diagram, hydrogen absorption

Procedia PDF Downloads 360
1099 Cirrhosis Mortality Prediction as Classification using Frequent Subgraph Mining

Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride

Abstract:

In this work, we use machine learning and novel data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center, and different machine learning models were applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidities, clinical procedures, and laboratory tests is analyzed, and a temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used. The Model for End-Stage Liver Disease (MELD) prediction of mortality is used as a comparator. All of our models statistically significantly outperform the MELD-score model, showing an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, but FSM together with an ensemble machine learning technique further improves model performance. With the abundance of data available in healthcare through electronic health records (EHRs), existing predictive models can be refined to identify and treat patients at risk of higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements on its own. To the best of our knowledge, this is the first work to apply modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients, and it builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
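
The sketch below (Python, scikit-learn; run on synthetic data, so the numbers mean nothing clinically and the models are stand-ins) mirrors the evaluation logic described above: compare the AUC of a machine-learning ensemble trained on a wider feature space against a baseline that uses only a MELD-like score.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
meld = rng.normal(15, 6, n)                       # stand-in for the MELD score
extra = rng.normal(0, 1, (n, 10))                 # demographics, labs, procedures...
logit = 0.15 * (meld - 15) + extra[:, 0] - 0.5 * extra[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # one-year mortality

X_full = np.column_stack([meld, extra])
Xtr, Xte, ytr, yte = train_test_split(X_full, y, test_size=0.3, random_state=0)

baseline = LogisticRegression().fit(Xtr[:, :1], ytr)          # MELD-only model
ensemble = GradientBoostingClassifier().fit(Xtr, ytr)          # full feature space

print("MELD-only AUC:", round(roc_auc_score(yte, baseline.predict_proba(Xte[:, :1])[:, 1]), 3))
print("Ensemble AUC :", round(roc_auc_score(yte, ensemble.predict_proba(Xte)[:, 1]), 3))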

Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning

Procedia PDF Downloads 132
1098 Physics-Informed Convolutional Neural Networks for Reservoir Simulation

Authors: Jiangxia Han, Liang Xue, Keda Chen

Abstract:

Despite significant progress in reservoir simulation using numerical discretization over the last decades, meshing remains complex, and the high number of degrees of freedom of the space-time flow field makes the solution process very time-consuming. We therefore present Physics-Informed Convolutional Neural Networks (PICNN) as a hybrid scientific-theory-and-data method for reservoir modeling. Besides labeled data, the model is driven by the scientific theory of the underlying problem, such as the governing equations, boundary conditions, and initial conditions. PICNN integrates the governing equations and boundary conditions into the network architecture in the form of a customized convolution kernel. The loss function is composed of data matching, initial conditions, and other measurable prior knowledge. By customizing the convolution kernel and minimizing the loss function, the neural network parameters not only fit the data but also honor the governing equation. PICNN provides a methodology to model and history-match flow and transport problems in porous media. Numerical results demonstrate that the proposed PICNN can provide an accurate physical solution from a limited dataset. We show how this method can be applied in the context of forward simulation for continuous problems, and several complex scenarios are tested, including data noise, different work schedules, and different well patterns.
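
A highly simplified sketch of a composite, physics-informed loss of the kind described above (Python, PyTorch; the network, the steady-state Laplacian residual, the boundary values, and all sizes are illustrative assumptions, not the authors' architecture or governing equations).

import torch
import torch.nn as nn

net = nn.Sequential(                      # toy CNN: permeability map -> pressure map
    nn.Conv2d(1, 16, 3, padding=1), nn.Tanh(),
    nn.Conv2d(16, 1, 3, padding=1),
)

# 5-point Laplacian as a fixed (non-trainable) convolution kernel
lap_kernel = torch.tensor([[[[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]]])

def loss_fn(k, p_obs, obs_mask):
    p = net(k)
    residual = nn.functional.conv2d(p, lap_kernel, padding=1)     # ~ div(grad p) = 0
    loss_pde  = (residual[:, :, 1:-1, 1:-1] ** 2).mean()          # interior PDE residual
    loss_data = ((p - p_obs)[obs_mask] ** 2).mean()               # match sparse data
    loss_bc   = (p[:, :, :, 0] ** 2).mean() + ((p[:, :, :, -1] - 1.0) ** 2).mean()
    return loss_data + loss_pde + loss_bc                         # composite loss

k = torch.rand(1, 1, 32, 32)                      # synthetic input field
p_obs = torch.zeros(1, 1, 32, 32)                 # placeholder observations
obs_mask = torch.zeros_like(p_obs, dtype=torch.bool); obs_mask[0, 0, 16, 16] = True

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad(); loss = loss_fn(k, p_obs, obs_mask); loss.backward(); opt.step()
print("final composite loss:", float(loss))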

Keywords: convolutional neural networks, deep learning, flow and transport in porous media, physics-informed neural networks, reservoir simulation

Procedia PDF Downloads 142
1097 [Keynote Talk]: Mathematical and Numerical Modelling of the Cardiovascular System: Macroscale, Mesoscale and Microscale Applications

Authors: Aymen Laadhari

Abstract:

The cardiovascular system is centered on the heart and is characterized by a very complex structure with different physical scales in space (e.g., micrometers for erythrocytes and centimeters for organs) and time (e.g., milliseconds for human brain activity and several years for the development of some pathologies). The development and numerical implementation of mathematical models of the cardiovascular system is a tremendously challenging topic at both the theoretical and computational levels, and it has consequently attracted growing interest over the past decade. Accurate computational investigation, in both healthy and pathological cases, of the processes involved in the functioning of the human cardiovascular system has great potential for tackling several problems of clinical relevance and improving the diagnosis of specific diseases. In this talk, we focus on simulating three particular phenomena related to the cardiovascular system on the macroscopic, mesoscopic, and microscopic scales, respectively. Namely, we develop numerical methodologies tailored to the simulation of (i) the haemodynamics (i.e., the fluid mechanics of blood) in the aorta and sinus of Valsalva interacting with highly deformable thin leaflets, (ii) the hyperelastic anisotropic behaviour of cardiomyocytes and the influence of calcium concentrations on the contraction of single cells, and (iii) the dynamics of red blood cells in the microvasculature. For each problem, we present an appropriate fully Eulerian finite element methodology and report several numerical examples to address the relevance of the mathematical models in terms of physiological meaning and to illustrate the accuracy and efficiency of the numerical methods.

Keywords: finite element method, cardiovascular system, Eulerian framework, haemodynamics, heart valve, cardiomyocyte, red blood cell

Procedia PDF Downloads 251
1096 A Case of Survival with Self-Draining Haemopericardium Secondary to Stabbing

Authors: Balakrishna Valluru, Ruth Suckling

Abstract:

A 16-year-old male was found collapsed on the road following stab injuries to the chest and abdomen and was transported to the emergency department by ambulance. On arrival in the emergency department the patient was breathless and appeared pale. He was maintaining his airway with spontaneous breathing and had a heart rate of 122 beats per minute with a blood pressure of 83/63 mmHg. He was resuscitated initially with three units of packed red cells. Clinical examination identified three incisional wounds, each measuring 2 cm, in the left parasternal region, the right infrascapular region, and the left upper quadrant of the abdomen. The chest wound over the left parasternal area, at the level of the 4th intercostal space, bled intermittently when the patient leaned forward and intermittently relieved his breathlessness. CT imaging was performed to characterize his injuries and determine his management; CT of the chest and abdomen showed a moderate-sized haemopericardium with a left-sided haemopneumothorax. The patient underwent urgent surgical repair of the left ventricle and left anterior descending artery, recovered without complications, and was discharged from the hospital. This case highlights that the potential to develop life-threatening cardiac tamponade was mitigated by the left parasternal stab wound: this injury fortuitously provided a pericardial window through which the bleeding from the injured left ventricle and left anterior descending artery could drain into the left hemithorax, providing an opportunity for timely surgical intervention to repair the cardiac injuries.

Keywords: stab, incisional, haemo-pericardium, haemo-pneumothorax

Procedia PDF Downloads 200
1095 Architecture for QoS Based Service Selection Using Local Approach

Authors: Gopinath Ganapathy, Chellammal Surianarayanan

Abstract:

Services are growing rapidly, and they are generally aggregated into composite services to accomplish complex business processes. Several services may offer the same required function for a particular task in a composite service, so a choice has to be made among functionally similar alternatives. Quality of Service (QoS) acts as the discriminating factor in deciding which component services should be selected to satisfy a user's quality requirements during service composition. There are two categories of approaches to QoS-based service selection, namely global and local approaches. Global approaches are known to be NP-hard in time and offer poor scalability for large-scale composition. As an alternative, local selection methods, which reduce the search space by breaking the large, complex problem of selecting services for the whole workflow into independent sub-problems of selecting services for individual tasks, are emerging. In this paper, a distributed architecture for QoS-based service selection using the local approach is presented, together with an overview of the local selection methodology. The architecture describes the core components needed to implement the local approach, namely the selection manager and the QoS manager, and their functions. The selection manager consists of two components: a constraint decomposer, which decomposes the given global (workflow-level) constraints into local (task-level) constraints, and a service selector, which selects for each task the service with maximum utility that satisfies the corresponding local constraints. The QoS manager manages QoS information at two levels, namely the service-class level and the individual-service level. The architecture serves as an implementation model for local selection.
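
A minimal sketch of the local selection flow described above (Python; the candidate services, QoS values, and the equal-split constraint decomposition are illustrative assumptions): the constraint decomposer splits a workflow-level response-time budget across tasks, and the service selector picks, per task, the highest-utility candidate that meets its local constraint.

# Candidate services per task: (name, response_time_ms, utility score in [0, 1])
candidates = {
    "task1": [("s11", 120, 0.9), ("s12", 60, 0.7), ("s13", 200, 0.95)],
    "task2": [("s21", 150, 0.8), ("s22", 90, 0.85)],
    "task3": [("s31", 80, 0.6), ("s32", 110, 0.75)],
}

def decompose(global_budget_ms, tasks):
    """Constraint decomposer: split the workflow-level budget into task-level budgets."""
    return {t: global_budget_ms / len(tasks) for t in tasks}

def select(candidates, local_budgets):
    """Service selector: per task, the max-utility service meeting its local constraint."""
    plan = {}
    for task, services in candidates.items():
        feasible = [s for s in services if s[1] <= local_budgets[task]]
        if not feasible:
            raise ValueError(f"no feasible service for {task}")
        plan[task] = max(feasible, key=lambda s: s[2])
    return plan

budgets = decompose(360, candidates)            # e.g. a 360 ms end-to-end budget
for task, (name, rt, u) in select(candidates, budgets).items():
    print(f"{task}: {name} (rt={rt} ms, utility={u})")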

Keywords: architecture of service selection, local method for service selection, QoS based service selection, approaches for QoS based service selection

Procedia PDF Downloads 425
1094 Evaluating the Relationship between Neighbourhood Satisfaction and Urban Safety: The Case Study of Riverwood, Sydney

Authors: Samaneh Arasteh

Abstract:

Neighbourhood satisfaction and safety are two main components of urban life and have a substantial impact on residents' quality of life. The relationship between these components, especially in the areas surrounding our private dwellings, strongly influences many social, economic, and wellbeing activities that may benefit neighbourhood residents. Neighbourhood and urban design, which are liable to be affected by the perceived quality of local public spaces, are likely to be significant factors influencing residents' broader feelings of safety. With this in mind, this study reviews recent normative literature on how these design processes have influenced neighbourhood satisfaction, including perceived safety, with a focus on different aspects of public space (planning, management, and design) in a mixed-tenure neighbourhood. In line with the study aim, Riverwood in Sydney's southwest was chosen as a case study to gain a detailed understanding of the context by engaging with community members, residents, non-government organisations, and experts. Archival studies on neighbourhood satisfaction and safety, expert interviews, and resident questionnaires are presented to shed light on the relationship between neighbourhood satisfaction and the perception of safety. The study argues that, for safer neighbourhoods in urban areas, socio-cultural factors need to be aligned with strengthened physical factors, and that in order to make environments safer it is important to understand the practical and achievable mechanisms required to improve existing estates. Findings show that increasing the clarity of communities' social and physical environmental involvement can promote residents' feelings of safety and, in turn, neighbourhood satisfaction.

Keywords: neighbourhood satisfaction, public space, Riverwood, urban safety

Procedia PDF Downloads 181
1093 Fire and Explosion Consequence Modeling Using Fire Dynamic Simulator: A Case Study

Authors: Iftekhar Hassan, Sayedil Morsalin, Easir A Khan

Abstract:

Accidents involving fire have occurred frequently in recent times, and their causes show a great deal of variety, so the intervention methods and risk assessment strategies required are unique in each case. On September 4, 2020, a fire and explosion caused by a methane gas leak from an underground pipeline occurred in a confined space in the Baitus Salat Jame mosque during night (Esha) prayer in Narayanganj District, Bangladesh, killing 34 people. In this research, the incident is simulated using the Fire Dynamics Simulator (FDS) software in order to analyze and understand the nature of the accident and its associated consequences. FDS is an advanced computational fluid dynamics (CFD) model of fire-driven fluid flow that numerically solves a large-eddy-simulation form of the Navier-Stokes equations to simulate fire and smoke spread and to predict thermal radiation, toxic species concentrations, and other relevant fire parameters. This study focuses on understanding the nature of the fire and on evaluating the consequences of the thermal radiation caused by the vapor cloud explosion. An evacuation model was constructed to visualize the effect of evacuation time and the fractional effective dose (FED) for different types of agents. The results are presented as 3D animations, slice views, and graphs to convey the fire hazards caused by thermal radiation and smoke due to the vapor cloud explosion. This study will help in designing and developing appropriate response strategies for preventing similar accidents.
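
For readers unfamiliar with the FED metric used in the evacuation model, a minimal sketch follows (Python; the CO concentration history and the incapacitating dose are illustrative assumptions, not FDS output): FED accumulates concentration-time exposure relative to an assumed incapacitating dose, and FED >= 1 indicates incapacitation.

# Hypothetical CO exposure history along an escape route (ppm, sampled each minute)
co_ppm = [0, 200, 800, 1500, 2500, 3000, 2000, 800]
dt_min = 1.0
incapacitating_dose_ppm_min = 35000.0   # assumed concentration-time dose for incapacitation

fed = 0.0
for c in co_ppm:
    fed += c * dt_min / incapacitating_dose_ppm_min
    status = "INCAPACITATED" if fed >= 1.0 else "ok"
    print(f"C = {c:5.0f} ppm  cumulative FED = {fed:.2f}  {status}")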

Keywords: consequence modeling, fire and explosion, fire dynamics simulation (FDS), thermal radiation

Procedia PDF Downloads 223
1092 Energy-Saving Methods and Principles of Energy-Efficient Concept Design in the Northern Hemisphere

Authors: Yulia A. Kononova, Znang X. Ning

Abstract:

Architectural development is proceeding faster and faster; nevertheless, modern architecture often fails to meet requirements that could help our planet. People consume an enormous amount of energy every day, and because of this uncontrolled energy usage, energy production has to keep increasing. Since energy production demands large amounts of fuel, it causes many problems such as climate change, environmental pollution, the extinction of animal species, and the depletion of energy sources. Nevertheless, humanity today has every opportunity to change this situation, and architecture is one of the fields best placed to apply new methods of saving energy or even producing it. There are now building types that can meet these demands; one of them is the energy-efficient building, which can save or even produce energy by combining several energy-saving principles. The main aim of this research is to provide information that helps designers apply energy-saving methods when designing an environmentally friendly building. The research methodology involves gathering relevant information from the literature, building guideline documents, and previous research, and analyzing it into material that can be applied to energy-efficient building design. Regarding the results, it should be noted that applying the full set of energy-saving methods to a building design project yields ultra-low-energy buildings that require little energy for space heating or cooling. In conclusion, developing passive house design methods can decrease the need for energy production, which is an important step towards conserving the planet's resources and decreasing environmental pollution.

Keywords: accumulation, energy-efficient building, storage, superinsulation, passive house

Procedia PDF Downloads 262
1091 Impact of a Novel Technique of S-Shaped Tracheostoma in Pediatric Tracheostomy in Intensive Care Unit on Success and Procedure Related Complications

Authors: Devendra Gupta, Sushilk K. Agarwal, Amit Kesari, P. K. Singh

Abstract:

Objectives: Pediatric patients may experience persistent respiratory failure requiring tracheostomy placement in the pediatric ICU. We have designed a tracheostomy technique for pediatric patients using an S-shaped incision in the tracheal wall, with a higher success rate and a lower complication rate. Technique: Following general anesthesia and positioning of the patient, the trachea was exposed in the midline through a vertical skin incision. To create the S-shaped tracheostoma, the second tracheal ring was identified; the conventional vertical incision was made in the second tracheal ring and then extended laterally at both ends, in the inter-cartilaginous space parallel to the tracheal cartilage and in opposite directions, making the incision S-shaped. The trachea was dilated with a tracheal dilator, and an appropriately sized tracheostomy tube was then placed into the trachea. Results: S-shaped tracheostomy was performed in 20 children requiring tracheostomy placement, with a mean age of 6.25 years (range 2-7 years). The tracheostomy tubes were successfully placed in all patients in a single attempt. There was no incidence of significant intraoperative bleeding, subcutaneous emphysema, vocal cord palsy, or pneumothorax. Two patients developed pneumonia and died within a year; however, there was no incidence of tracheo-esophageal fistula, suprastomal collapse, or difficulty in decannulation related to our technique at one year of follow-up. One patient developed late tracheitis, which was managed conservatively. Conclusion: S-shaped tracheoplasty was associated with a high success rate and a reduced risk of early and late complications in pediatric patients requiring tracheostomy.

Keywords: pediatrics, tracheostomy, ICU, tracheostoma

Procedia PDF Downloads 263
1090 On Lie-Central Derivations and Almost Inner Lie-Derivations of Leibniz Algebras

Authors: Natalia Pacheco Rego

Abstract:

The Liezation functor is a map from the category of Leibniz algebras to the category of Lie algebras; it assigns to a Leibniz algebra the Lie algebra given by the quotient of the Leibniz algebra by the ideal spanned by its square elements. This functor is left adjoint to the inclusion functor that regards a Lie algebra as a Leibniz algebra. This setting fits into the framework of central extensions and commutators in semi-abelian categories with respect to a Birkhoff subcategory, where classical (absolute) notions are those relative to the abelianization functor. Classical properties of Leibniz algebras (properties relative to the abelianization functor) have been adapted to the relative setting (with respect to the Liezation functor); in general, absolute properties have corresponding relative ones, but not all absolute properties hold immediately in the relative case, so new requirements are needed. Following this line of research, an analysis of central derivations of Leibniz algebras relative to the Liezation functor, called Lie-derivations, was carried out, and a characterization of Lie-stem Leibniz algebras by their Lie-central derivations was obtained. In this paper, we present an overview of these results and analyze some new properties concerning Lie-central derivations and almost inner Lie-derivations. A Leibniz algebra is a vector space equipped with a bilinear bracket operation satisfying the Leibniz identity. We define the Lie-bracket by [x, y]_lie = [x, y] + [y, x], for all x, y. The Lie-center of a Leibniz algebra is the two-sided ideal of elements that annihilate all elements of the Leibniz algebra through the Lie-bracket. A Lie-derivation is a linear map that acts as a derivation with respect to the Lie-bracket. Usual derivations are obviously Lie-derivations, but the converse is not true in general. A Lie-derivation is called a Lie-central derivation if its image is contained in the Lie-center, and an almost inner Lie-derivation if the image of each element x is contained in the Lie-commutator of x and the Leibniz algebra. The main results presented in this talk concern the conditions under which Lie-central derivations and almost inner Lie-derivations coincide.
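
For reference, the definitions sketched in the abstract can be written compactly as follows (LaTeX; this is a routine transcription of the text above, with one common form of the Leibniz identity assumed).

\[
[x,[y,z]] = [[x,y],z] - [[x,z],y] \quad\text{(Leibniz identity)},
\qquad
[x,y]_{\mathrm{lie}} = [x,y] + [y,x].
\]
\[
Z_{\mathrm{lie}}(\mathfrak{q}) = \{\, z \in \mathfrak{q} \mid [z,x]_{\mathrm{lie}} = 0 \ \text{for all } x \in \mathfrak{q} \,\}.
\]
\[
d \ \text{is a Lie-derivation if}\ \ d([x,y]_{\mathrm{lie}}) = [d(x),y]_{\mathrm{lie}} + [x,d(y)]_{\mathrm{lie}};
\quad
d \ \text{is Lie-central if}\ \ d(\mathfrak{q}) \subseteq Z_{\mathrm{lie}}(\mathfrak{q}).
\]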

Keywords: almost inner Lie-derivation, Lie-center, Lie-central derivation, Lie-derivation

Procedia PDF Downloads 133
1089 Media Representation of Romanian Migrants in the Italian Media: A Comparative Study

Authors: Paula-Catalina Meirosu

Abstract:

Economic (intra-EU) migration is a topic of debate in the public sphere in both countries of origin and countries of destination. Since the 1990s, after the collapse of the communist regimes and then the accession of several former communist countries to the EU, migratory flows (including those of Romanian migrants) to EU countries have increased constantly. Italy is one of the main destination countries for Romanians, as it currently hosts more than one million Romanian migrants. Based on an interdisciplinary analytical framework drawing on theories of transnationalism, media and migration studies, and critical media analysis, this paper investigates the media construction of intra-EU economic migration in the Italian press from two main perspectives. The first is the representation of Romanian migrants in the Italian press in a specific context: the 2014 EU elections. The second explores the way Romanian journalists use the media of destination countries (such as Italy) as a source when addressing the issue of migration. The paper focuses on online articles related to the representation of Romanian migrants before and during the EU elections in two newspapers (La Repubblica from Italy and Adevarul from Romania), published during January-May 2014. The methodology is based on a social-constructivist, predominantly discursive approach and includes elements of critical discourse analysis (CDA) to identify the patterns of representation of Romanian migrants in the Italian press, as well as the strategies for constructing the categories, identities, and roles of migrants. The aim of this approach is to trace the dynamics of the media discourse on migration in a destination country in the light of a European electoral context (the EU elections) and, based on the results, to propose scenarios for the elections to be held this year.

Keywords: migration, media discourse, Romanian migrants, transnationalism

Procedia PDF Downloads 134
1088 “Lightyear” – The Battle for LGBTQIA+ Representation Behind Disney/Pixar’s Failed Blockbuster

Authors: Ema Vitória Fonseca Lavrador

Abstract:

In this work, we explore the impact that the film "Lightyear" (2022) had on the social context of its production, distribution, and reception. The film, produced by Walt Disney Pictures and Pixar Animation Studios, tells the story of Buzz Lightyear, the Space Ranger on whom the character of the same name in the "Toy Story" film franchise is based. This prequel was predicted to be the blockbuster of the year, but it was a financial fiasco and the subject of numerous controversies, which also caused it to be overshadowed by the film "Minions: The Rise of Gru" (2022). The reason for its failure lies not in the film's narrative or quality but in its controversial context: a commitment to LGBTQIA+ representation made in an unexpected way, by featuring a same-sex couple and showing a kiss shared by them. This representation cost Disney distribution in countries opposed to LGBTQIA+ representation in the media and involved Disney in major disagreements with fans and politicians, especially because it stood in direct opposition to Florida House Bill 1557, also called the "Don't Say Gay" bill. Many major companies have taken a stand against this law because it jeopardizes the safety of the LGBTQIA+ community, and although Disney initially cut the kiss from the film, pressure from staff and audiences resulted in unprecedented progress. Because it features a brief same-sex kiss, the film's exhibition was banned in several countries and discouraged by part of the very public that had previously been the focus of Disney's attention as a conservative, "family-friendly" branded company. We believe it is relevant to study the case of "Lightyear" because it is a work that raises awareness and promotes the representation of affected communities at a time when less legislation is being approved to protect the rights and safety of queer people.

Keywords: “Don’t Say Gay” bill, gender stereotypes, LGBTQIA+ representation, Lightyear, Disney/Pixar

Procedia PDF Downloads 81
1087 Development of a Telemedical Network Supporting an Automated Flow Cytometric Analysis for the Clinical Follow-up of Leukaemia

Authors: Claude Takenga, Rolf-Dietrich Berndt, Erling Si, Markus Diem, Guohui Qiao, Melanie Gau, Michael Brandstoetter, Martin Kampel, Michael Dworzak

Abstract:

In patients with acute lymphoblastic leukaemia (ALL), treatment response is increasingly evaluated with minimal residual disease (MRD) analyses. Flow cytometry (FCM) is a fast and sensitive method for detecting MRD; however, the interpretation of these multi-parametric data requires intensive operator training and experience. This paper presents pipeline software as a ready-to-use FCM-based MRD-assessment tool for daily clinical practice with ALL patients. The new tool increases the accuracy of FCM-MRD assessment in samples that are difficult to analyse by conventional operator-based gating, since computer-aided analysis potentially has superior resolution: it uses the whole multi-parametric FCM data space at once instead of step-wise, two-dimensional plot-based visualization. The system, developed as a telemedical network, reduces the workload, laboratory costs, and staff time needed for training, continuous quality control, and operator-based data interpretation. It allows automated FCM-MRD analysis to be disseminated to medical centres that have no established expertise, for the benefit of an even larger community of affected children worldwide. We have established a telemedical network system for the analysis, clinical follow-up, and treatment monitoring of leukaemia; the system is scalable and designed to link several centres and laboratories worldwide.
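
A toy illustration of multi-parametric, computer-aided gating (Python, scikit-learn; the synthetic "events" and the two-component mixture model are illustrative only and unrelated to the actual pipeline software): cluster all markers at once and read off the fraction of events assigned to the minor (MRD-like) population.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# Synthetic FCM events in a 4-marker space: 99% "normal", 1% "leukaemic-like"
normal    = rng.normal(loc=[1, 1, 1, 1], scale=0.4, size=(9900, 4))
leukaemic = rng.normal(loc=[3, 0.5, 2.5, 1.8], scale=0.3, size=(100, 4))
events = np.vstack([normal, leukaemic])

gmm = GaussianMixture(n_components=2, random_state=0).fit(events)
labels = gmm.predict(events)
minor = np.argmin(np.bincount(labels))          # the smaller population
mrd_fraction = np.mean(labels == minor)
print(f"estimated MRD-like fraction: {mrd_fraction:.4%}")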

Keywords: data security, flow cytometry, leukaemia, telematics platform, telemedicine

Procedia PDF Downloads 981