Search results for: weighted approximation
166 Microbiota Associated With the Larval Culture of Red Cusk Eel Genypterus chilensis in Chile
Authors: Luz Hurtado, Rodrigo Rojas, Jaime Romero, Christopher Concha
Abstract:
The culture of the red cusk eel Genypterus chilensis, a Chilean native marine species of high gastronomic demand and market value, is currently considered a priority for Chilean aquaculture. The microbiota was analyzed in terms of diversity and structure using massive Illumina sequencing. Alpha diversity was analyzed in samples of G. chilensis larvae at 6, 18 and 32 dph (days post-hatching). Significant differences between culture days were observed for the Chao1 index (P = 0.05): larvae at 18 dph showed the highest index, followed by larvae at 6 dph, while the lowest value was found in larvae at 32 dph. No significant differences between culture days were found for the Shannon (P = 0.0857) and Simpson (P = 0.0714) indices. In general, G. chilensis larvae show high diversity indices. Beta diversity analysis revealed a differentiation of the bacterial communities depending on the culture day. In the PCoA constructed from the unweighted UniFrac distance, the explained variance was 46.2% (PC1 29.2% and PC2 17.0%), and in the PCoA constructed with the weighted UniFrac distance, the explained variance was 65.5% (PC1 41.8% and PC2 23.7%); these differences were significant according to PERMANOVA (P = 0.002 and 0.037, respectively).
Regarding the taxonomic composition of the larval microbiota across culture days, at the phylum level the most abundant taxa in larvae at 6 dph were Proteobacteria (57%), Verrucomicrobia (24%) and Firmicutes (14%). In larvae at 18 dph the predominant phyla were Proteobacteria (90%), Dependentiae (5%), Actinobacteria (2%) and Planctomycetes (2%). In larvae at 32 dph the phyla with the highest relative abundance were Proteobacteria (57%), Firmicutes (29%), Verrucomicrobia (5%) and Actinobacteria (5%). Comparing the culture days, Proteobacteria was the most abundant phylum at 6, 18 and 32 dph, with the highest relative abundance in larvae at 18 dph; Verrucomicrobia peaked in larvae at 6 dph, and Firmicutes was most abundant in larvae at 32 dph. At the genus level, the most abundant taxa in larvae at 6 dph were Rubritalea (30%), Psychrobacter (28%), Staphylococcus (17%) and Ralstonia (10%); in larvae at 18 dph, Psychrobacter (47%), Litoreibacter (13%), Nautella (9%) and Cohesibacter (8%); and in larvae at 32 dph, Alloiococcus (25%), Dialister (14%), Neptunomonas (13%) and Piscirickettsia (11%). Overall, the taxonomic composition of the larvae differs between culture days.
Keywords: microbiota, diversity, G. chilensis, larvae
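As a minimal illustration of the alpha-diversity indices compared in this abstract, the Shannon and Simpson indices can be computed directly from taxon counts; the counts below are hypothetical, not the study's data.

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over non-zero proportions."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

def simpson_index(counts):
    """Simpson diversity 1 - sum(p_i^2); values closer to 1 mean more diverse."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return 1.0 - sum(p * p for p in props)

# Illustrative abundances for four taxa (hypothetical, not the study's data)
counts = [57, 24, 14, 5]
print(round(shannon_index(counts), 3), round(simpson_index(counts), 3))
```

Chao1, by contrast, is a richness estimator based on the counts of singleton and doubleton taxa rather than on proportions, which is why the three indices can rank samples differently.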
Procedia PDF Downloads 73
165 Investigation of Heat Conduction through Particulate Filled Polymer Composite
Authors: Alok Agrawal, Alok Satapathy
Abstract:
In this paper, an attempt is made to determine the effective thermal conductivity (keff) of particulate filled polymer composites using the finite element method (FEM), a powerful computational technique. A commercially available finite element package, ANSYS, is used for this numerical analysis. Three-dimensional spheres-in-cube lattice array models are constructed to simulate the microstructures of micro-sized particulate filled polymer composites with filler content ranging from 2.35 to 26.8 vol%. Based on the temperature profiles across the composite body, the keff of each composition is estimated theoretically by FEM. Composites with similar filler contents are then fabricated using the compression molding technique by reinforcing micro-sized aluminium oxide (Al2O3) in polypropylene (PP) resin. Thermal conductivities of these composite samples are measured according to ASTM standard E-1530 using the Unitherm™ Model 2022 tester, which operates on the double guarded heat flow principle. The experimentally measured conductivity values are compared with the numerical values and also with those obtained from existing empirical models. This comparison reveals that the FEM-simulated values are in reasonably good agreement with the experimental data. Values obtained from the theoretical model proposed by the authors are in even closer agreement with the measured values within the percolation limit. Further, this study shows that there is a gradual enhancement in the conductivity of PP resin with increasing filler percentage, thereby improving its heat conduction capability. With the addition of 26.8 vol% of filler, the keff of the composite increases to around 6.3 times that of neat PP. This study validates the proposed model for the PP-Al2O3 composite system and shows that finite element analysis can be an excellent methodology for such investigations.
With such improved heat conduction ability, these composites can find potential applications in micro-electronics, printed circuit boards, encapsulations, etc.
Keywords: analytical modelling, effective thermal conductivity, finite element method, polymer matrix composite
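For context, one of the classical empirical models against which FEM results of this kind are typically compared is Maxwell's model for spherical inclusions in a continuous matrix. A minimal sketch follows; the conductivity values for PP and Al2O3 are illustrative handbook-style figures, not the paper's measured data.

```python
def maxwell_keff(k_m, k_f, phi):
    """Maxwell's classical model for the effective conductivity of spherical
    fillers dispersed in a continuous matrix (dilute-suspension limit).
    k_m: matrix conductivity, k_f: filler conductivity, phi: volume fraction."""
    num = k_f + 2.0 * k_m + 2.0 * phi * (k_f - k_m)
    den = k_f + 2.0 * k_m - phi * (k_f - k_m)
    return k_m * num / den

# Illustrative values (assumed): PP ~0.22 W/m-K, Al2O3 ~30 W/m-K
print(maxwell_keff(0.22, 30.0, 0.268))
```

At 26.8 vol% this dilute-limit model predicts only roughly a twofold enhancement over the neat matrix, well below the 6.3x reported above, which is one reason FEM and percolation-aware models are needed at higher filler loadings.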
Procedia PDF Downloads 321
164 Evaluation of Different Waste Management Planning Strategies in an Industrial City
Authors: Leila H. Khiabani, Mohammadreza Vafaee, Farshad Hashemzadeh
Abstract:
Industrial waste management regulates different stages of production, storage, transfer, recycling and waste disposal. There are several common practices for industrial waste management. However, due to various local health, economic, social, environmental and aesthetic considerations, the most optimal principles and measures often vary at each specific industrial zone. In addition, waste management strategies are heavily impacted by local administrative, legal, and financial regulations. In this study, a hybrid qualitative and quantitative research methodology has been designed for waste management planning in an industrial city. Firstly, following a qualitative research methodology, the most relevant waste management strategies for the specific industrial city were identified through interviews with environmental planning and waste management experts. Forty experts participated in this study. Alborz industrial city in Iran, which hosts more than one thousand industrial units in nine hundred acres, was chosen as the sample industrial city in this study. The findings from the expert interviews at the first phase were then used to design a quantitative questionnaire for the second phase of the study. The aim of the questionnaire was to quantify the relative impact of different waste management strategies in the sample industrial city. Eight waste management strategies and three implementation policies were included in the questionnaire. The experts were asked to rank the relative effectiveness of each strategy for environmental planning of the sample industrial city. They were also asked to rank the relative effectiveness of each planning policy on each of the waste management strategies. In the end, the weighted average of all the responses was calculated to identify the most effective waste management strategy and planning policies for the sample industrial city. 
The results suggested that among the eight waste management strategies, industrial composting is the most effective (31%) strategy based on the collective evaluation of the local experts. Additionally, the results suggested that the most effective policy (58%) in the city’s environmental planning is to reduce waste generation by prolonging the effective life of industrial products through higher-quality and recyclable materials. These findings can provide useful expert guidelines for prioritization among waste management strategies in the city’s overall environmental planning roadmap. The findings may also be applicable to similar industrial cities, and a similar methodology can be utilized in the environmental planning of other industrial cities.
Keywords: environmental planning, industrial city, quantitative research, waste management
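The weighted-average ranking step described in this abstract can be sketched as follows; the expert ratings below are hypothetical, not the survey's responses.

```python
def weighted_scores(ratings, weights=None):
    """Average expert ratings per strategy; optional per-expert weights."""
    if weights is None:
        weights = [1.0] * len(ratings)
    total_w = sum(weights)
    n_items = len(ratings[0])
    return [sum(w * r[j] for w, r in zip(weights, ratings)) / total_w
            for j in range(n_items)]

# Hypothetical: three experts rate three strategies on a 1-5 scale
ratings = [[5, 3, 2],
           [4, 3, 1],
           [5, 2, 2]]
scores = weighted_scores(ratings)
best = max(range(len(scores)), key=scores.__getitem__)
print(scores, best)
```

With per-expert weights (e.g. by seniority or domain relevance), the same function yields a weighted rather than simple mean, which matches the questionnaire-aggregation approach described above.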
Procedia PDF Downloads 132
163 Policy Guidelines to Enhance the Mathematics Teachers’ Association of the Philippines (MTAP) Saturday Class Program
Authors: Roselyn Alejandro-Ymana
Abstract:
The study assessed the MTAP Saturday Class Program along its eight components, namely modules, instructional materials, scheduling, trainer-teachers, supervisory support, administrative support, financial support and educational facilities; the results served as bases for developing policy guidelines to enhance the program. Using a descriptive development method of research, the study involved twenty-eight (28) schools with the MTAP Saturday Class Program in the Division of Dasmarinas City, from which twenty-eight (28) school heads, one hundred twenty-five (125) teacher-trainers, one hundred twenty-five (125) pupil program participants, and their corresponding one hundred twenty-five (125) parents were purposively drawn to constitute the study’s respondents. A self-made validated survey questionnaire, pre- and post-assessment tests in mathematics for pupils participating in the program, and an unstructured interview guide were used to gather the data. Data obtained from the instruments were organized and analyzed using statistical tools that included the mean, weighted mean, relative frequency, standard deviation, F-test (one-way ANOVA) and t-test. Results revealed that all eight domains of the MTAP Saturday Class Program were practiced, with 'trainer-teachers', 'educational facilities', and 'supervisory support' identified as the program’s strongest components and 'financial support', 'modules' and 'scheduling' as the weakest. Moreover, the F-test revealed a significant difference in the assessments made by the respondents in each of the eight (8) domains: the parents deviated significantly from the assessments of either the school heads or the teachers on the indicators of the program.
There is much to be desired in the quality of the implementation of the MTAP Saturday Class Program. With most indicators of each program component receiving overall average ratings at least 0.5 point below the ideal rating of 5 for total quality, school heads, teachers, and supervisors need to work harder toward total quality in the implementation of the MTAP Saturday Class Program in the division.
Keywords: mathematics achievement, MTAP program, policy guidelines, program assessment
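A minimal sketch of the F-test (one-way ANOVA) used above to compare respondent groups; the ratings are hypothetical, not the study's data.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: ratio of between-group to
    within-group mean squares."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical ratings of one program domain by three respondent groups
heads    = [4.2, 4.0, 4.4]
teachers = [4.1, 4.3, 4.2]
parents  = [3.1, 3.0, 3.2]
print(round(one_way_anova_f([heads, teachers, parents]), 1))
```

A large F, as in this toy case where the parents' ratings sit well below the others, is exactly the pattern the study reports: parents deviating significantly from school heads and teachers.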
Procedia PDF Downloads 212
162 Using ICESat-2 Dynamic Ocean Topography to Estimate Western Arctic Freshwater Content
Authors: Joshua Adan Valdez, Shawn Gallaher
Abstract:
Global climate change has impacted atmospheric temperatures, contributing to rising sea levels, decreasing sea ice, and increased freshening of high-latitude oceans. This freshening has increased stratification, inhibiting local mixing and nutrient transport and modifying regional circulations in polar oceans. In recent years, the Western Arctic has seen an increase in freshwater volume at an average rate of 397 ± 116 km³/year across the Beaufort Gyre. The majority of the freshwater volume resides in the Beaufort Gyre surface lens, driven by anticyclonic wind forcing, sea ice melt, and Arctic river runoff, and is typically defined as water with salinity below 34.8. The near-isothermal nature of Arctic seawater and non-linearities in the equation of state for near-freezing waters result in a salinity-driven pycnocline, as opposed to the temperature-driven density structure seen at lower latitudes. In this study, we investigate the relationship between freshwater content and dynamic ocean topography (DOT). In situ measurements of freshwater content provide information on the freshening rate of the Beaufort Gyre; however, their collection is costly and time-consuming. Utilizing NASA ICESat-2's DOT remote sensing capabilities and airborne expendable CTD (AXCTD) data from the Seasonal Ice Zone Reconnaissance Surveys (SIZRS), a linear regression model between DOT and freshwater content is determined along the 150° W meridian. Freshwater content is calculated by integrating the salinity deficit between the surface and the depth of the ~34.8 reference isohaline. Using this model, we compare interannual variability in freshwater content within the gyre, which could provide a future predictive capability for freshwater volume changes in the Beaufort-Chukchi Sea without in situ methods.
Successful employment of the ICESat-2 DOT approximation of freshwater content could demonstrate the value of remote sensing tools in reducing reliance on field deployment platforms to characterize physical ocean properties.
Keywords: cryosphere, remote sensing, Arctic oceanography, climate modeling, Ekman transport
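The freshwater-content computation described above, integrating the salinity deficit relative to a 34.8 reference down the profile, can be sketched as follows; the salinity profile is hypothetical, not SIZRS data.

```python
def freshwater_content(depths, salinities, s_ref=34.8):
    """Freshwater content (m) = integral of (S_ref - S)/S_ref over depth,
    evaluated with the trapezoidal rule down to the S_ref isohaline."""
    fwc = 0.0
    for i in range(len(depths) - 1):
        f0 = (s_ref - salinities[i]) / s_ref
        f1 = (s_ref - salinities[i + 1]) / s_ref
        fwc += 0.5 * (f0 + f1) * (depths[i + 1] - depths[i])
    return fwc

# Hypothetical profile: a fresh surface lens relaxing to S_ref at depth
depths = [0.0, 50.0, 100.0, 200.0]      # m
salts  = [28.0, 31.0, 33.0, 34.8]       # practical salinity
print(round(freshwater_content(depths, salts), 2))
```

Each AXCTD cast yields one such freshwater-content value, which the study then regresses against the satellite-derived DOT at the cast location.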
Procedia PDF Downloads 77
161 Prandtl Number Influence Analysis on Droplet Migration in Natural Convection Flow Using the Level Set Method
Authors: Isadora Bugarin, Taygoara F. de Oliveira
Abstract:
Multiphase flows are currently a key enabler for technological advances in energy and thermal sciences. The comprehension of droplet motion and behavior in non-isothermal flows is, however, rather limited. The present work investigates 2D droplet migration under natural convection inside a square enclosure with differentially heated walls. The investigation concerns the effects on drop motion of imposing different combinations of Prandtl and Rayleigh numbers while placing the drop at distinct initial positions. The finite differences method was used to compute the Navier-Stokes and energy equations for laminar flow, considering the Boussinesq approximation. A high-order level set method was applied to simulate the two-phase flow. A previous analysis by the authors had shown that, for fixed values of Rayleigh and Prandtl, varying the droplet initial position delivered different patterns of motion; for Ra ≥ 10⁴ the droplet presents two very specific behaviors: it can travel through a helical path towards the center, or describe cyclic circular paths that close once the stationary regime is reached. When varying the Prandtl number for different Rayleigh regimes, it was observed that this parameter also affects the migration of the droplet, altering the motion patterns as its value is increased. At higher Prandtl values, the drop performs wider paths with larger amplitudes, traveling closer to the walls and taking longer to reach the stationary regime. It is important to highlight that drastic changes of drop behavior in the stationary regime were not observed, but the path traveled from the beginning of the simulation until the stationary regime was significantly altered, resulting in distinct turning-over frequencies.
The flow’s unsteady Nusselt number is also registered for each case studied, enabling a discussion of the overall effects on heat transfer.
Keywords: droplet migration, level set method, multiphase flow, natural convection in enclosure, Prandtl number
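For reference, the two dimensionless groups varied in this study are Pr = ν/α and Ra = gβΔTL³/(να); a quick sketch with illustrative air-like property values (assumed, not the paper's parameters):

```python
def prandtl(nu, alpha):
    """Pr = nu / alpha (ratio of momentum to thermal diffusivity)."""
    return nu / alpha

def rayleigh(g, beta, dT, L, nu, alpha):
    """Ra = g * beta * dT * L^3 / (nu * alpha) for buoyancy-driven convection."""
    return g * beta * dT * L ** 3 / (nu * alpha)

# Illustrative air-like properties (assumed values)
nu, alpha = 1.5e-5, 2.1e-5          # kinematic viscosity, thermal diffusivity, m^2/s
print(prandtl(nu, alpha))
print(rayleigh(9.81, 3.4e-3, 10.0, 0.1, nu, alpha))
```

Increasing Pr at fixed Ra means momentum diffuses faster relative to heat, which is consistent with the wider, slower-settling drop trajectories reported above.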
Procedia PDF Downloads 122
160 Identification and Optimisation of South Africa's Basic Access Road Network
Authors: Diogo Prosdocimi, Don Ross, Matthew Townshend
Abstract:
Road authorities are mandated, within limited budgets, both to deliver improved access to basic services and to facilitate economic growth. This responsibility is further complicated if maintenance backlogs and funding shortfalls exist, as is evident in many countries, including South Africa. These conditions require authorities to make difficult prioritisation decisions, with the effect that Road Asset Management Systems with a one-dimensional focus on traffic volumes may overlook the maintenance of low-volume roads that provide isolated communities with vital access to basic services. Given these challenges, this paper overlays the full South African road network with geo-referenced information for population, primary and secondary schools, and healthcare facilities to identify the network of connective roads between communities and basic service centres. This connective network is then rationalised according to the Gross Value Added and number of jobs per mesozone, administrative and functional road classifications, speed limit, and road length, location, and name to estimate the Basic Access Road Network. A two-step floating catchment area (2SFCA) method, capturing a weighted assessment of drive-time to service centres and the ratio of people within a catchment area to teachers and healthcare workers, is subsequently applied to generate a Multivariate Road Index. This Index is used to assign higher maintenance priority to roads within the Basic Access Road Network that provide more people with better access to services. The relatively limited extent of the Basic Access Road Network indicates that authorities could maintain the entire estimated network within the available road budget before practical economic considerations come into play.
Despite this fact, a final case study modelling exercise is performed for the Namakwa District Municipality to demonstrate the extent to which optimal relocation of schools and healthcare facilities could minimise the Basic Access Road Network and thereby release budget for investment in roads that best promote GDP growth.
Keywords: basic access roads, multivariate road index, road prioritisation, two-step floating catchment area method
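The two-step floating catchment area (2SFCA) calculation underlying the Multivariate Road Index can be sketched as follows; the communities, clinics, and drive times below are hypothetical.

```python
def two_step_fca(pops, capacities, dist, d0):
    """Two-step floating catchment area (2SFCA) accessibility.
    Step 1: supply-to-demand ratio R_j within each facility's catchment.
    Step 2: accessibility A_i = sum of R_j over facilities reachable from i."""
    ratios = []
    for j, cap in enumerate(capacities):
        # demand: population within the drive-time threshold d0 of facility j
        demand = sum(p for i, p in enumerate(pops) if dist[i][j] <= d0)
        ratios.append(cap / demand if demand else 0.0)
    return [sum(r for j, r in enumerate(ratios) if dist[i][j] <= d0)
            for i in range(len(pops))]

# Hypothetical example: two communities, two clinics, drive times in minutes
pops = [1000, 500]              # community populations
capacities = [10, 5]            # e.g. healthcare workers per clinic
dist = [[10, 40], [25, 15]]     # dist[i][j]: community i -> clinic j
print(two_step_fca(pops, capacities, dist, d0=30))
```

The per-community accessibility scores produced this way can then be attached to the connecting roads, so that roads serving more people with poorer access receive higher maintenance priority.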
Procedia PDF Downloads 231
159 Characterization and Modelling of Aerosol Droplet in Absorption Columns
Authors: Hammad Majeed, Hanna Knuutila, Magne Hillestad, Hallvard F. Svendsen
Abstract:
The formation of aerosols can cause serious complications in industrial exhaust gas CO2 capture processes. SO3 present in the flue gas can cause aerosol formation in an absorption-based capture process. Small mist droplets and fog formed can normally not be removed in conventional demisting equipment because their submicron size allows the particles or droplets to follow the gas flow. As a consequence, aerosol-based emissions on the order of grams per Nm³ have been identified from post-combustion CO2 capture (PCCC) plants. In absorption processes, aerosols are generated by spontaneous condensation or desublimation in supersaturated gas phases. Undesired aerosol development may lead to amine emissions many times larger than would be encountered in a mist-free gas phase. It is thus of crucial importance to understand the formation and build-up of these aerosols in order to mitigate the problem. Rigorous modelling of aerosol dynamics leads to a system of partial differential equations. In order to understand the mechanics of a particle entering an absorber, an implementation of the model is created in Matlab. The model predicts the droplet size, the droplet internal variable profiles and the mass transfer fluxes as functions of position in the absorber. The Matlab model is based on a subclass of the method of weighted residuals for boundary value problems known as the orthogonal collocation method. The model comprises a set of mass transfer equations for the transferring components and the essential diffusion-reaction equations to describe the droplet internal profiles for all relevant constituents. Also included is heat transfer across the interface and inside the droplet. This paper presents results describing the basic simulation tool for the characterization of aerosols formed in CO2 absorption columns and gives examples of how various entering droplets grow or shrink through an absorber and how their composition changes with respect to time.
Some preliminary simulation results for aerosol droplet composition and temperature profiles are given below.
Keywords: absorption columns, aerosol formation, amine emissions, internal droplet profiles, monoethanolamine (MEA), post combustion CO2 capture, simulation
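A toy example of the weighted-residual collocation idea behind the orthogonal collocation method mentioned above: for the boundary value problem u'' = -1 with u(0) = u(1) = 0, a trial function satisfying the boundary conditions is forced to have zero residual at interior collocation points. This deliberately simple case (not the paper's droplet model) happens to recover the exact solution.

```python
# Trial function u(x) = a * x * (1 - x) satisfies u(0) = u(1) = 0 by construction.
# Collocation: force the residual of u'' + 1 = 0 to vanish at interior points.

def collocation_coefficient():
    # u''(x) = -2a for the quadratic trial, so the residual R(x) = -2a + 1
    # vanishes (here at every collocation point) when a = 1/2
    return 0.5

def u(x, a=collocation_coefficient()):
    return a * x * (1.0 - x)

print(u(0.5))  # exact solution of u'' = -1, u(0) = u(1) = 0 gives u(0.5) = 0.125
```

In the droplet model the same idea applies with orthogonal-polynomial basis functions and collocation at their roots, turning the diffusion-reaction PDEs inside the droplet into a tractable ODE system.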
Procedia PDF Downloads 246
158 Emergency Physician Performance for Hydronephrosis Diagnosis and Grading Compared with Radiologist Assessment in Renal Colic: The EPHyDRA Study
Authors: Sameer A. Pathan, Biswadev Mitra, Salman Mirza, Umais Momin, Zahoor Ahmed, Lubna G. Andraous, Dharmesh Shukla, Mohammed Y. Shariff, Magid M. Makki, Tinsy T. George, Saad S. Khan, Stephen H. Thomas, Peter A. Cameron
Abstract:
Study objective: Emergency physicians’ (EP) ability to identify hydronephrosis on point-of-care ultrasound (POCUS) has been assessed in the past using CT scan as the reference standard. We aimed to assess EP interpretation of POCUS to identify and grade hydronephrosis in a direct comparison with the consensus interpretation of POCUS by radiologists, and also to compare EP and radiologist performance using CT scan as the criterion standard. Methods: Using data from a POCUS databank, a prospective interpretation study was conducted at an urban academic emergency department. All POCUS exams were performed on patients presenting to the ED with renal colic. Institutional approval was obtained for conducting this study. All analyses were performed using Stata MP 14.0 (Stata Corp, College Station, Texas). Results: A total of 651 patients were included, with paired sets of renal POCUS video clips and CT scans performed at the same ED visit. Hydronephrosis was reported in 69.6% of POCUS exams by radiologists and 72.7% of CT scans (p=0.22). The κ for consensus interpretation of POCUS between the radiologists to detect hydronephrosis was 0.77 (0.72 to 0.82), and the weighted κ for grading hydronephrosis was 0.82 (0.72 to 0.90), interpreted as good to very good. Using CT scan findings as the criterion standard, EPs had an overall sensitivity of 81.1% (95% CI: 79.6% to 82.5%), specificity of 59.4% (95% CI: 56.4% to 62.5%), PPV of 84.3% (95% CI: 82.9% to 85.7%), and NPV of 53.8% (95% CI: 50.8% to 56.7%), compared to radiologist sensitivity of 85.0% (95% CI: 82.5% to 87.2%), specificity of 79.7% (95% CI: 75.1% to 83.7%), PPV of 91.8% (95% CI: 89.8% to 93.5%), and NPV of 66.5% (95% CI: 61.8% to 71.0%). Testing for a report of moderate or high-grade hydronephrosis, EP specificity was 94.6% (95% CI: 93.7% to 95.4%), rising to 99.2% (95% CI: 98.9% to 99.5%) for identifying severe hydronephrosis alone.
Conclusion: EP POCUS interpretations were comparable to those of radiologists for identifying moderate to severe hydronephrosis using CT scan results as the criterion standard. Among patients with a moderate or high pre-test probability of ureteric calculi, as calculated by the STONE score, the presence of moderate to severe (+LR 6.3 and −LR 0.69) or severe hydronephrosis (+LR 54.4 and −LR 0.57) was highly diagnostic of stone disease. Low-dose CT is indicated in such patients for evaluation of stone size and location.
Keywords: renal colic, point-of-care, ultrasound, bedside, emergency physician
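All of the performance measures quoted above (sensitivity, specificity, PPV, NPV and the likelihood ratios) derive from a 2×2 contingency table against the criterion standard; a minimal sketch with hypothetical counts, not the study's table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard test-performance measures from a 2x2 table:
    tp/fp/fn/tn = true positive, false positive, false negative, true negative."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return {"sens": sens, "spec": spec, "ppv": ppv, "npv": npv,
            "LR+": sens / (1.0 - spec),   # positive likelihood ratio
            "LR-": (1.0 - sens) / spec}   # negative likelihood ratio

# Hypothetical counts (not the EPHyDRA data)
m = diagnostic_metrics(tp=80, fp=10, fn=20, tn=90)
print(m)
```

Note how a high specificity drives a large LR+, which is why a report of severe hydronephrosis (specificity 99.2%) is so strongly diagnostic in the conclusion above.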
Procedia PDF Downloads 284
157 Bifurcations of the Rotations in the Thermocapillary Flows
Authors: V. Batishchev, V. Getman
Abstract:
We study self-similar fluid flows in Marangoni layers with axial symmetry. Such flows are induced by radial temperature gradients whose distributions along the free boundary obey a power law. The self-similar solutions describe thermocapillary flows both in thin layers and in the case of infinite thickness. We consider both positive and negative temperature gradients. In the former case, cooling of the free boundary near the axis of symmetry gives rise to rotation of the fluid. The rotating flow concentrates itself inside the Marangoni layer, while outside of it the fluid does not revolve. In the latter case we observe no rotating flows at all. In layers of infinite thickness, the separation of the rotating flow creates two zones where the flows are directed oppositely. Both the longitudinal velocity and the temperature have exactly one critical point inside the boundary layer. It is worth noting that the profiles are monotonic in the case of non-swirling flows. We describe the flow outside the boundary layer using a self-similar solution of the Euler equations; this flow is slow and non-swirling. The introduction of an outer flow gives rise to the branching of swirling flows from the non-swirling ones: there is a critical velocity of the outer flow such that a non-swirling flow exists for supercritical velocities and cannot be extended to subcritical velocities. For positive temperature gradients there are two non-swirling flows; for negative temperature gradients the non-swirling flow is unique. We determine the critical velocity of the outer flow at which the branching of the swirling flows happens. In the case of a thin layer confined within free boundaries, we show that cooling of the free boundaries near the axis of symmetry separates the layer into two sub-layers with opposite rotations inside. This makes a sharp contrast with the case of infinite thickness.
We show that such rotation arises provided the thickness of the layer exceeds a critical value. In the case of a thin layer confined within free and rigid boundaries, we construct the branching equation and the asymptotic approximation for the secondary swirling flows near the bifurcation point. It turns out that the bifurcation gives rise to one pair of secondary swirling flows with different directions of swirl.
Keywords: free surface, rotation, fluid flow, bifurcation, boundary layer, Marangoni layer
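The appearance of one pair of secondary flows with opposite swirl at a critical parameter value is the signature of a pitchfork bifurcation; a minimal sketch of its normal form follows (illustrative only, not the authors' branching equation).

```python
def pitchfork_branches(mu):
    """Equilibria of the pitchfork normal form x' = mu*x - x^3: the trivial
    branch x = 0 always exists; for mu > 0 a pair x = +/-sqrt(mu) branches off."""
    branches = [0.0]
    if mu > 0:
        branches += [mu ** 0.5, -(mu ** 0.5)]
    return branches

print(pitchfork_branches(-1.0))   # before the bifurcation: only the trivial branch
print(pitchfork_branches(0.25))   # after: one symmetric pair of new branches
```

Here mu plays the role of the distance of the control parameter (e.g. layer thickness or outer-flow velocity) from its critical value, and the two signs of x correspond to the two directions of swirl.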
Procedia PDF Downloads 344
156 Study of Proton-9,11Li Elastic Scattering at 60~75 MeV/Nucleon
Authors: Arafa A. Alholaisi, Jamal H. Madani, M. A. Alvi
Abstract:
The radial form of the nuclear matter distribution, the charge distribution and the shape of nuclei are essential properties of nuclei and hence attract great attention in several areas of nuclear physics research. More than three decades of experiments have employed leptonic probes (such as muons and electrons) to explore nuclear charge distributions, whereas hadronic probes (for example alpha particles and protons) have been used to investigate nuclear matter distributions. In this paper, p-9,11Li elastic scattering differential cross sections in the energy range 60 to 75 MeV/nucleon have been studied by means of the Coulomb-modified Glauber scattering formalism. Applying the semi-phenomenological Bhagwat-Gambhir-Patil (BGP) nuclear density for the loosely bound, neutron-rich 11Li nucleus, the estimated matter radius is found to be 3.446 fm, which is quite large compared to the known experimental value of 3.12 fm. The results of a microscopic optical model calculation based on the Brueckner-Hartree-Fock (BHF) formalism have also been compared. It should be noted that in most phenomenological density models used to reproduce the p-11Li differential elastic scattering cross section data, the calculated matter radius lies between 2.964 and 3.55 fm. The calculated results with the phenomenological BGP model density and with the nucleon density calculated in the relativistic mean field (RMF) reproduce the p-9Li and p-11Li experimental data quite nicely compared to Gaussian-Gaussian or Gaussian-Oscillator densities at all energies under consideration. In the approach described here, no free or adjustable parameter has been employed to reproduce the elastic scattering data, as against the well-known optical-model-based studies that involve at least four to six adjustable parameters to match the experimental data.
Calculated reaction cross sections σR for p-11Li at these energies are quite large compared to the estimated values reported in earlier works, though so far no experimental studies have been performed to measure them.
Keywords: Bhagwat-Gambhir-Patil density, Coulomb modified Glauber model, halo nucleus, optical limit approximation
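The matter radius discussed above is the root-mean-square radius of the density distribution; a sketch of its numerical evaluation for a spherically symmetric density follows, using a Gaussian test density (not the BGP form) for which the exact answer a√3 is known.

```python
import math

def rms_radius(rho, r_max=20.0, n=4000):
    """<r^2>^(1/2) = [int r^4 rho(r) dr / int r^2 rho(r) dr]^(1/2) for a
    spherically symmetric density, via the trapezoidal rule
    (the step size cancels in the ratio)."""
    dr = r_max / n
    num = den = 0.0
    for i in range(n + 1):
        r = i * dr
        w = 0.5 if i in (0, n) else 1.0
        num += w * r ** 4 * rho(r)
        den += w * r ** 2 * rho(r)
    return math.sqrt(num / den)

# Gaussian test density exp(-r^2 / (2 a^2)) has exact rms radius a * sqrt(3)
a = 2.0
print(rms_radius(lambda r: math.exp(-r * r / (2.0 * a * a))))
```

Substituting a realistic parameterized density (e.g. the BGP form with its halo tail) in place of the Gaussian gives the matter radii of the kind quoted in the abstract.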
Procedia PDF Downloads 162
155 Petrogenesis and Tectonic Implication of the Oligocene Na-Rich Granites from the North Sulawesi Arc, Indonesia
Authors: Xianghong Lu, Yuejun Wang, Chengshi Gan, Xin Qian
Abstract:
The North Sulawesi Arc, located in eastern Indonesia to the south of the Celebes Sea, forms the northern limb of the K-shaped Sulawesi Island and has had a complex tectonic history since the Cenozoic due to the convergence of three plates (the Eurasian, Indo-Australian and Pacific plates). Published rock records provide only imprecise chronology, mostly from K-Ar dating, and sparse geochemical data, which limits understanding of the regional tectonic setting. This study presents detailed zircon U-Pb geochronological, Hf-O isotope and whole-rock geochemical analyses of Na-rich granites from the North Sulawesi Arc. Zircon U-Pb analyses of three representative samples yield weighted mean ages of 30.4 ± 0.4 Ma, 29.5 ± 0.2 Ma, and 27.3 ± 0.4 Ma, respectively, revealing Oligocene magmatism in the North Sulawesi Arc. The samples have high Na₂O and low K₂O contents with high Na₂O/K₂O ratios, and thus belong to low-K tholeiitic Na-rich granites. The Na-rich granites are characterized by high SiO₂ contents (75.05-79.38 wt.%) and low MgO contents (0.07-0.91 wt.%) and show arc-like trace element signatures. They have low (⁸⁷Sr/⁸⁶Sr)i ratios (0.7044-0.7046), high εNd(t) values (from +5.1 to +6.6), high zircon εHf(t) values (from +10.1 to +18.8) and low zircon δ¹⁸O values (3.65-5.02). They show an Indian-Ocean affinity in Pb isotopic composition, with ²⁰⁶Pb/²⁰⁴Pb ratios of 18.16-18.37, ²⁰⁷Pb/²⁰⁴Pb ratios of 15.56-15.62, and ²⁰⁸Pb/²⁰⁴Pb ratios of 38.20-38.66. These geochemical signatures suggest that the Oligocene Na-rich granites of the North Sulawesi Arc formed by partial melting of juvenile oceanic crust metasomatized by sediment-derived fluids in a subduction setting, supporting an intra-oceanic arc origin.
Combined with published studies, the emergence of extensive calc-alkaline felsic arc magmatism can be traced back to the Early Oligocene, subsequent to the Eocene back-arc basalts (BAB) that share similarities with the Celebes Sea basement. Since the opening of the Celebes Sea started in the Eocene (42-47 Ma) and stopped by the Early Oligocene (~32 Ma), the geodynamic mechanism forming the Na-rich granites of the North Sulawesi Arc during the Oligocene might relate to the subduction of the Indian Ocean.
Keywords: North Sulawesi Arc, Oligocene, Na-rich granites, in-situ zircon Hf-O analysis, intra-oceanic origin
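The weighted mean ages reported above are conventionally inverse-variance weighted means of individual spot analyses; a minimal sketch follows (the spot ages are hypothetical, not the study's analyses).

```python
def weighted_mean_age(ages, sigmas):
    """Inverse-variance weighted mean and its 1-sigma uncertainty, as commonly
    quoted for pooled zircon U-Pb spot analyses."""
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * a for w, a in zip(weights, ages)) / wsum
    return mean, (1.0 / wsum) ** 0.5

# Hypothetical spot ages (Ma) with 1-sigma uncertainties
mean, err = weighted_mean_age([30.2, 30.6, 30.4, 30.5], [0.4, 0.4, 0.2, 0.2])
print(round(mean, 2), "+/-", round(err, 2), "Ma")
```

More precise spots (smaller sigma) receive proportionally larger weight, which is why the pooled uncertainty is smaller than that of any single analysis.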
Procedia PDF Downloads 76
154 DNA Nano Wires: A Charge Transfer Approach
Authors: S. Behnia, S. Fathizadeh, A. Akhshani
Abstract:
In recent decades, DNA has attracted increasing interest for potential technological applications not directly related to its coding for functional proteins, i.e., the expression of genetic information. One of the most interesting applications of DNA is the construction of nanostructures of high complexity and the design of functional nanostructures in nanoelectronic devices, nanosensors and nanocircuits. In this field, DNA is of fundamental interest to the development of DNA-based molecular technologies, as it possesses ideal structural and molecular recognition properties for use in self-assembling nanodevices with a definite molecular architecture. Also, the robust, one-dimensional, flexible structure of DNA can be used to design electronic devices, serving as a wire, transistor switch, or rectifier depending on its electronic properties. Numerous studies have been carried out in order to understand the mechanism of charge transport along DNA sequences. In this regard, the conductivity properties of the DNA molecule can be investigated in a simple but chemically specific approach intimately related to the Su-Schrieffer-Heeger (SSH) model. In the SSH model, the dependence of the non-diagonal matrix elements on intersite displacements is considered; in this approach, the coupling between the charge and the lattice deformation is along the helix. This tight-binding linear nanoscale chain model was originally established to describe conductivity phenomena in doped polyacetylene. It is based on the assumption of a classical harmonic interaction between sites, which is linearly coupled to a tight-binding Hamiltonian. In this work, the Hamiltonian and the corresponding equations of motion are nonlinear and exhibit high sensitivity to initial conditions. We have therefore moved toward nonlinear dynamics and phase space analysis. Nonlinear dynamics and chaos theory, regardless of any approximation, could open new horizons for understanding the conductivity mechanism in DNA.
For a detailed study, we have investigated the current flowing in DNA and the characteristic I-V diagram. As a result, it is shown that there are (quasi-)ohmic regions in the I-V diagram. On the other hand, regions with negative differential resistance (NDR) are also detectable in the diagram.
Keywords: DNA conductivity, Landauer resistance, negative differential resistance, chaos theory, mean Lyapunov exponent
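The static band structure of the SSH chain mentioned above can be sketched directly from its dispersion relation (this ignores the electron-lattice coupling dynamics the abstract actually studies; the hopping amplitudes t1, t2 are illustrative values, not the paper's parameters):

```python
import math

def ssh_bands(t1, t2, k):
    """Two energy bands of the SSH chain at wavenumber k (lattice constant 1):
    E(k) = +/- sqrt(t1^2 + t2^2 + 2*t1*t2*cos(k))."""
    e = math.sqrt(t1 ** 2 + t2 ** 2 + 2 * t1 * t2 * math.cos(k))
    return -e, e

def band_gap(t1, t2):
    # The bands approach each other most closely at the zone edge k = pi,
    # where |E| = |t1 - t2|, so the full gap is 2*|t1 - t2|.
    return 2 * abs(t1 - t2)
```

For equal hoppings the gap closes (the metallic, undimerized chain); any alternation t1 != t2 opens a gap, which is the dimerization effect the SSH picture builds on.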
Procedia PDF Downloads 425
153 A Study on the Measurement of Spatial Mismatch and the Influencing Factors of “Job-Housing” in Affordable Housing from the Perspective of Commuting
Authors: Daijun Chen
Abstract:
Affordable housing is subsidized by the government to meet the housing demand of low- and middle-income urban residents in the process of urbanization and to alleviate the housing inequality caused by market-based housing reforms. It is a recognized fact that the living conditions of the beneficiaries have improved as subsidized housing has been constructed. However, affordable housing is mostly located in the suburbs, where the surrounding urban functions and infrastructure are incomplete, resulting in a spatial mismatch of "jobs-housing" in affordable housing. The main reason for this problem is that residents of affordable housing are more sensitive to the spatial location of their residence, yet their ability to select and control that location is relatively weak, which leads to higher commuting costs; their real cost of living has not been effectively reduced. In this regard, 92 subsidized housing communities in Nanjing, China, are selected as the research sample in this paper. The residents of the affordable housing and their commuting spatio-temporal behavior characteristics are identified based on LBS (location-based service) data. Based on spatial mismatch theory, spatial mismatch indicators such as commuting distance and commuting time are established to measure the degree of spatial mismatch of subsidized housing in different districts of Nanjing. Furthermore, a geographically weighted regression model is used to analyze the influencing factors of the spatial mismatch of affordable housing in terms of the provision of employment opportunities, traffic accessibility, and supporting service facilities, using spatial, functional, and other multi-source spatio-temporal big data. The results show that the spatial mismatch of affordable housing in Nanjing generally presents a "concentric circle" pattern, decreasing from the central urban area to the periphery. 
The factors affecting the spatial mismatch of affordable housing differ across spatial zones. High mismatch is mainly explained by the number of enterprises within 1 km of the affordable housing district and the shortest distance to a subway station, while low spatial mismatch is associated with the diversity of services and facilities. Based on this, a spatial optimization strategy for different levels of spatial mismatch in subsidized housing is proposed, and feasible suggestions for the future site selection of subsidized housing are also provided. The study hopes to avoid or mitigate the impact of "spatial mismatch," promote the "spatial adaptation" of "jobs-housing," and truly improve the overall welfare of affordable housing residents.
Keywords: affordable housing, spatial mismatch, commuting characteristics, spatial adaptation, welfare benefits
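At the core of the geographically weighted regression used in the abstract above is a separate distance-weighted least-squares fit at each regression point, so coefficients can vary over space. A minimal single-predictor sketch with a Gaussian kernel follows; the function names, the kernel choice, and the bandwidth are illustrative assumptions, not the study's specification:

```python
import math

def gaussian_weight(d, bandwidth):
    # Distance-decay kernel: observations near the regression point count more.
    return math.exp(-(d / bandwidth) ** 2)

def local_fit(point, xs, ys, locs, bandwidth):
    """Weighted least-squares fit y = a + b*x at one regression point,
    with weights decaying by distance from `point` (one predictor only)."""
    w = [gaussian_weight(math.dist(point, loc), bandwidth) for loc in locs]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, xs)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sw
    b = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, xs, ys)) \
        / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, xs))
    a = ybar - b * xbar
    return a, b
```

Repeating `local_fit` at every community location yields a map of coefficients, which is what lets GWR show different dominant factors in different zones.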
Procedia PDF Downloads 108
152 Alphabet Recognition Using Pixel Probability Distribution
Authors: Vaidehi Murarka, Sneha Mehta, Dishant Upadhyay
Abstract:
Our project topic is “Alphabet Recognition using Pixel Probability Distribution”. The project uses techniques of image processing and machine learning in computer vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten, or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files. Alphabet-recognition-based OCR applications are sometimes used in signature recognition, which is used in banks and other high-security buildings. One popular mobile application reads a visiting card and directly stores it to the contacts. OCRs are also known to be used in radar systems for reading the license plates of speeding vehicles, among other things. Our project has been implemented using Visual Studio and OpenCV (Open Source Computer Vision). The algorithm is based on neural networks (machine learning). The project was implemented in three modules: (1) Training: This module performs database generation. The database was generated using two methods: (a) Run-time generation, in which the database is generated at compilation time using the inbuilt fonts of the OpenCV library; human intervention is not necessary for generating this database. (b) Contour detection, in which a ‘jpeg’ template containing different fonts of an alphabet is converted to a weighted matrix using specialized functions (contour detection and blob detection) of OpenCV. The main advantage of this type of database generation is that the algorithm becomes self-learning and the final database requires little memory to store (119 kB precisely). (2) Preprocessing: The input image is pre-processed using image processing concepts such as adaptive thresholding, binarizing, and dilating, and is made ready for segmentation. Segmentation includes the extraction of lines, words, and letters from the processed text image. 
(3) Testing and prediction: The extracted letters are classified and predicted using the neural network algorithm. The algorithm recognizes an alphabet based on certain mathematical parameters calculated using the database and the weight matrix of the segmented image.
Keywords: contour-detection, neural networks, pre-processing, recognition coefficient, runtime-template generation, segmentation, weight matrix
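The weight-matrix scoring idea behind step (3) can be sketched without OpenCV: each letter's matrix stores, per pixel, how likely that pixel is to be "on" for that letter, and a segmented glyph is scored against every matrix. The binary 3x3 templates below are stand-ins for the learned probability matrices, and all names are illustrative:

```python
def recognition_score(sample, weight_matrix):
    """Score a binarized glyph (0/1 grid) against one letter's weight matrix:
    each 'on' pixel contributes that cell's weight for the letter."""
    return sum(weight_matrix[r][c]
               for r, row in enumerate(sample)
               for c, px in enumerate(row) if px)

def classify(sample, templates):
    # Predict the letter whose weight matrix gives the highest score.
    return max(templates, key=lambda letter: recognition_score(sample, templates[letter]))
```

In the real pipeline the score feeds a neural network rather than a plain argmax, but the per-pixel probability accumulation is the same idea.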
Procedia PDF Downloads 389
151 Effects of Warning Label on Cigarette Package on Consumer Behavior of Smokers in Batangas City Philippines
Authors: Irene H. Maralit
Abstract:
Warning labels have been found to inform smokers about the health hazards of smoking, encourage smokers to quit, and prevent nonsmokers from starting to smoke. Warning labels on tobacco products are an ideal way of communicating with smokers. Since the intervention is delivered at the time of smoking, nearly all smokers are exposed to warning labels, and pack-a-day smokers could be exposed to the warnings more than 7,000 times per year. Given this reach and frequency of exposure, the proponents wanted to determine the effect of warning labels on smoking behavior. The study aims to identify the profile of smokers associated with the behavioral variables that best describe users’ perception. The behavioral variables are AVOID, THINK RISK, and FORGO. This research study also aims to determine whether there is a significant relationship between the effect of warning labels on cigarette packages and consumer behavior when respondents are grouped according to profile variables. The researcher used quota sampling to gather representative data through purposive means and to obtain an accurate representation of the data needed in the study. Furthermore, the data were gathered through a self-constructed questionnaire. The statistical methods used were frequency count, chi-square, multiple regression, weighted mean, and ANOVA to determine the scale and percentages of the three variables. After the analysis of the data, results show that most of the respondents belong to the age range 22–28 years old (25.3%); the majority are male (134 respondents, 89.3%) and single (79 respondents, 52.7%); most are high school graduates (59 respondents, 39.3%); with regard to occupation, skilled workers have the highest frequency (37 respondents, 24.7%); and the majority of the respondents’ income falls within the range of Php 5,001–Php 10,000 (50.7%). 
With regard to the number of sticks consumed per day, the 6–10 range had the highest frequency at 33.3%. The respondents’ THINK RISK factor got the highest composite mean, 2.79, with a verbal interpretation of agree. It is followed by FORGO with a 2.78 composite mean and a verbal interpretation of agree, and by the AVOID variable with a composite mean of 2.77, also interpreted as agree. In terms of the significant relationship between the effects of cigarette labels and consumer behavior when grouped according to profile variables, sex and occupation were found to be significant.
Keywords: consumer behavior, smokers, warning labels, think risk, avoid, forgo
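The composite means and their verbal interpretations reported above come from a frequency-weighted mean over the Likert responses plus interpretation cut-offs. A minimal sketch follows; the 4-point cut-offs are illustrative assumptions, not the study's published scale:

```python
def weighted_mean(freqs):
    """freqs maps a Likert score (e.g. 1..4) to the number of respondents
    who chose it; returns the composite (frequency-weighted) mean."""
    n = sum(freqs.values())
    return sum(score * count for score, count in freqs.items()) / n

def interpret(mean_score):
    # Illustrative cut-offs for a 4-point scale, not the study's actual ones.
    for lower, label in ((3.25, "strongly agree"), (2.50, "agree"),
                         (1.75, "disagree")):
        if mean_score >= lower:
            return label
    return "strongly disagree"
```

Under these assumed cut-offs a composite mean of 2.79 indeed maps to "agree", matching the pattern of the reported THINK RISK result.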
Procedia PDF Downloads 218
150 Effects of Macroprudential Policies on Bank Lending and Risks
Authors: Stefanie Behncke
Abstract:
This paper analyses the effects of different macroprudential policy measures that have recently been implemented in Switzerland. Among them are the activation and subsequent increase of the countercyclical capital buffer (CCB) and a tightening of loan-to-value (LTV) requirements. These measures were introduced to limit systemic risks in the Swiss mortgage and real estate markets. They were meant to affect mortgage growth, mortgage risks, and banks’ capital buffers. Evaluation of their quantitative effects provides insights for Swiss policymakers when reassessing their policy. It is also informative for policymakers in other countries who plan to introduce macroprudential instruments. We estimate the effects of the different macroprudential measures with a differences-in-differences estimator. Banks differ with respect to the relative importance of mortgages in their portfolio, their riskiness, and their capital buffers. Thus, some banks were more affected than others by the CCB, while others were more affected by the LTV requirements. Our analysis is made possible by an unusually informative bank panel data set. It combines data on newly issued mortgage loans and quantitative risk indicators such as LTV and loan-to-income (LTI) ratios with supervisory information on banks’ capital and liquidity situation and balance sheets. Our results suggest that the LTV cap of 90% was the most effective measure: the proportion of new mortgages with a high LTV ratio was significantly reduced. This result applies not only at the 90% LTV threshold but also at other threshold values (e.g. 80%, 75%), suggesting that the entire upper part of the LTV distribution was affected. Other outcomes, such as the LTI distribution and the growth rates of mortgages and other credits, were not significantly affected. Regarding the activation and the increase of the CCB, we do not find any significant effects: neither LTV/LTI risk parameters nor mortgage and other credit growth rates were significantly reduced. 
This result may reflect that the size of the CCB (1% of relevant residential real estate risk-weighted assets at activation, 2% after the increase) was not high enough to trigger a distinct reaction between the banks most likely to be affected by the CCB and those serving as controls. Still, it might have been effective in increasing the resilience of the overall banking system. From a policy perspective, these results suggest that targeted macroprudential policy measures can contribute to financial stability. In line with findings by others, caps on LTV reduced risk taking in Switzerland. To fully assess the effectiveness of the CCB, further experience is needed.
Keywords: banks, financial stability, macroprudential policy, mortgages
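The differences-in-differences logic used in the study above can be sketched in its canonical 2x2 form: the change in the treated group's outcome minus the change in the control group's outcome nets out the common trend. The function and data layout below are illustrative, not the paper's estimator (which additionally conditions on bank covariates):

```python
from statistics import mean

def did_estimate(outcomes):
    """Canonical 2x2 difference-in-differences.
    `outcomes` maps (group, period) to a list of observed outcomes,
    with group in {'treated', 'control'} and period in {'pre', 'post'}."""
    d_treated = mean(outcomes[("treated", "post")]) - mean(outcomes[("treated", "pre")])
    d_control = mean(outcomes[("control", "post")]) - mean(outcomes[("control", "pre")])
    return d_treated - d_control
```

Here "treated" would be banks strongly exposed to a measure (e.g. mortgage-heavy banks under the CCB) and "control" the lightly exposed banks.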
Procedia PDF Downloads 362
149 Mathematical Study of CO₂ Dispersion in Carbonated Water Injection Enhanced Oil Recovery Using Non-Equilibrium 2D Simulator
Authors: Ahmed Abdulrahman, Jalal Foroozesh
Abstract:
CO₂-based enhanced oil recovery (EOR) techniques have gained massive attention from major oil firms since they address the industry's two main concerns: the contribution of CO₂ to the greenhouse effect and declining oil production. Carbonated water injection (CWI) is a promising EOR technique that promotes safe and economic CO₂ storage; moreover, it mitigates the pitfalls of direct CO₂ injection, which include low sweep efficiency, early CO₂ breakthrough, and the risk of CO₂ leakage in fractured formations. One of the main challenges that hinder the wide adoption of this EOR technique is the complexity of accurately modeling the kinetics of CO₂ mass transfer. The mechanisms of CO₂ mass transfer during CWI include the slow and gradual cross-phase diffusion of CO₂ from carbonated water (CW) to the oil phase and CO₂ dispersion (within-phase diffusion and mechanical mixing), which affects the oil's physical properties and the spatial spreading of CO₂ inside the reservoir. A 2D non-equilibrium compositional simulator has been developed using a fully implicit finite difference approximation. A material balance term (k) was added to the governing equation to account for the slow cross-phase diffusion of CO₂ from CW to the oil within the grid cell. Also, longitudinal and transverse dispersion coefficients have been added to account for the CO₂ spatial distribution inside the oil phase. The CO₂-oil diffusion coefficient was calculated using the Sigmund correlation, while a scale-dependent dispersivity was used to calculate CO₂ mechanical mixing. It was found that the CO₂-oil diffusion mechanism has a minor impact on oil recovery, but it tends to increase the amount of CO₂ stored inside the formation and slightly alters the residual oil properties. On the other hand, the mechanical mixing mechanism has a huge impact on CO₂ spatial spreading (and hence on accurate prediction of CO₂ production), and the noticeable change in oil physical properties tends to increase the recovery factor. 
A sensitivity analysis has been done to investigate the effect of formation heterogeneity (porosity, permeability) and injection rate. It was found that formation heterogeneity tends to increase the CO₂ dispersion coefficients, and that a low injection rate should be implemented during CWI.
Keywords: CO₂ mass transfer, carbonated water injection, CO₂ dispersion, CO₂ diffusion, cross-phase CO₂ diffusion, within-phase CO₂ diffusion, CO₂ mechanical mixing, non-equilibrium simulation
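The slow cross-phase transfer governed by the term (k) in the abstract above behaves, within one grid cell, like a first-order relaxation of the oil-phase CO₂ concentration toward equilibrium with the carbonated water. A minimal explicit sketch follows (illustrative names and values; the actual simulator is fully implicit and 2D):

```python
def cross_phase_transfer(c_oil, c_eq, k, dt, steps):
    """Explicit first-order relaxation dC/dt = k*(C_eq - C): the oil-phase
    CO2 concentration creeps toward equilibrium with the carbonated water."""
    history = [c_oil]
    for _ in range(steps):
        c_oil += k * (c_eq - c_oil) * dt   # forward-Euler update
        history.append(c_oil)
    return history
```

The curve rises monotonically and saturates near the equilibrium value, which is the "slow and gradual" uptake that distinguishes the non-equilibrium model from an instantaneous-equilibrium one.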
Procedia PDF Downloads 176
148 The Neuroscience Dimension of Juvenile Law Effectuates a Comprehensive Treatment of Youth in the Criminal System
Authors: Khushboo Shah
Abstract:
Categorical bans on the death penalty and life-without-parole sentences for juvenile offenders in a growing number of countries have established a new era in juvenile jurisprudence. This has been brought about by the integration, over the last ten years, of growing knowledge in cognitive neuroscience and an appreciation of the inherent differences between adults and adolescents. This evolving understanding of being a child in the criminal system can be aptly reflected through policies that incorporate the mitigating traits of youth. First, the presentation will delineate the relevant structures in cognitive neuroscience, focusing in particular on the prefrontal cortex, the amygdala, and the basal ganglia. These key anatomical structures in the brain are linked to three mitigating adolescent traits (an underdeveloped sense of responsibility, an increased vulnerability to negative influences, and transitory personality traits) that establish why juveniles have a lessened culpability. The discussion will delve into how an underdeveloped prefrontal cortex results in the heightened emotional angst and high-energy, risky behavior characteristic of adolescence, and how the amygdala, the emotional center of the brain, governs emotional expression and explains why teens are susceptible to negative influences. Based on this greater understanding, policies must adequately reflect adolescent physiology and psychology in the criminal system. However, it is important to ensure that these views are appropriately weighted when considering the jurisprudence for the treatment of children in the law. To strike this balance appropriately, policies must incorporate the distinctive traits of youth in sentencing and legal considerations, yet refrain from the potential fallacy of absolving a juvenile offender of guilt and culpability. 
Accordingly, three policies will demonstrate how these results can be achieved: (1) eliminate housing of juvenile offenders in the adult prison system, (2) mandate fitness hearings for all transfers of juveniles to adult criminal court, and (3) use the post-disposition review as a type of rehabilitation method for juvenile offenders. Ultimately, this interdisciplinary approach of science and law allows for a better understanding of adolescent psychological and social functioning and can effectuate better legal outcomes for juveniles tried as adults.
Keywords: criminal law, juvenile justice, interdisciplinary, neuroscience
Procedia PDF Downloads 327
147 The Relationship between Risk and Capital: Evidence from Indian Commercial Banks
Authors: Seba Mohanty, Jitendra Mahakud
Abstract:
The capital ratio is one of the major indicators of the stability of commercial banks. Given its pervasive importance, over the years regulators and policy makers have focused on the maintenance of a particular level of capital ratio to minimize solvency and liquidation risk. In this context, it is very important to identify the relationship between capital and risk and to find out the factors which determine the capital ratios of commercial banks. The study examines the relationship between capital and risk of the commercial banks operating in India. Other bank-specific variables like bank size, deposits, profitability, non-performing assets, bank liquidity, net interest margin, loan loss reserves, deposit variability, and regulatory pressure are also considered in the analysis. The period of study is 1997-2015, i.e. the post-liberalization period. To identify the impact of the financial crisis and the implementation of Basel II on the capital ratio, we have divided the whole period into two sub-periods, i.e. 1997-2008 and 2008-2015. This study considers all three types of commercial banks, i.e. public sector, private sector, and foreign banks, which have continuous data for the whole period. The main sources of data are the Prowess database maintained by the Centre for Monitoring Indian Economy (CMIE) and Reserve Bank of India publications. We use a simultaneous equation model, and more specifically the Two-Stage Least Squares method, to find out the relationship between capital and risk. From the econometric analysis, we find that capital and risk affect each other simultaneously, and this is consistent across the time periods and across the types of banks. Moreover, regulation has a positive significant impact on the ratio of capital to risk-weighted assets, but no significant impact on banks' risk-taking behaviour. 
Our empirical findings also suggest that size has a negative impact on capital and risk, indicating that larger banks increase their capital less than other banks, consistent with the too-big-to-fail hypothesis. This study contributes to the existing body of literature by establishing a strong relationship between capital and risk in an emerging economy, where the banking sector plays a major role in financial development. Further, this study may be considered a primary study for identifying the macroeconomic factors affecting risk and capital in India.
Keywords: capital, commercial bank, risk, simultaneous equation model
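The Two-Stage Least Squares idea used above can be sketched for the simplest case of one endogenous regressor and one instrument: stage 1 regresses the endogenous variable on the instrument, stage 2 regresses the outcome on the fitted values. All names and data are illustrative; the paper's system has many equations and covariates:

```python
from statistics import mean

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x (single regressor)."""
    xb, yb = mean(x), mean(y)
    return sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) \
        / sum((xi - xb) ** 2 for xi in x)

def tsls_slope(z, x, y):
    """Two-stage least squares with one instrument z for the endogenous x:
    stage 1 fits x on z, stage 2 fits y on the fitted values of x."""
    b1 = ols_slope(z, x)
    zb, xb = mean(z), mean(x)
    x_hat = [xb + b1 * (zi - zb) for zi in z]   # stage-1 fitted values
    return ols_slope(x_hat, y)                  # stage-2 slope
```

In this one-instrument case the estimator reduces to cov(z, y) / cov(z, x), which is why exogenous variation in z identifies the causal slope even when x is endogenous.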
Procedia PDF Downloads 327
146 A Concept Study to Assist Non-Profit Organizations to Better Target Developing Countries
Authors: Malek Makki
Abstract:
The main purpose of this research study is to assist non-profit organizations (NPOs) in better segmenting the group of least developed countries and in optimally targeting the neediest areas, so that the aid provided makes a positive and lasting difference. We applied international marketing and strategy approaches to segment a sub-group of candidates among the group of 151 countries identified by the UN G77 list and, furthermore, to point out the priority areas. We use reliable and well-known economic, geographic, demographic, and behavioral criteria. These criteria can be objectively estimated and updated so that follow-ups can be performed to measure the outcomes of any program. We selected 12 socio-economic criteria that complement each other: GDP per capita, GDP growth, industry value added, exports per capita, fragile state index, corruption perception index, environmental protection index, ease of doing business index, global competitiveness index, Internet use, public spending on education, and employment rate. A weight was attributed to each variable to highlight the relative importance of each criterion within a country. Care was taken to collect the most recent available data from trusted, well-known international organizations (IMF, WB, WEF, and WTO). Construct equivalence was checked so that the same variables could be compared across countries. The combination of all these weighted criteria provides a global index that represents the level of development of each country. An absolute index that combines wars and risks was introduced to exclude or include a country on the basis of conflicts and state collapse. The final step, applied to the included countries, consists of a benchmarking method to select the segment of countries and the percentile of each criterion. The results of this study allowed us to exclude 16 countries on grounds of risk and security. We also excluded four countries because they lack reliable and complete data. 
The other countries were classified by percentile through their global index, and we identified the neediest countries and the areas where aid is most required, helping any NPO prioritize its areas of implementation. This new concept is based on defined, actionable, accessible, and accurate variables with which NPOs can implement their programs, and it can be extended to for-profit companies performing their corporate social responsibility activities.
Keywords: developing countries, international marketing, non-profit organization, segmentation
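The weighted combination of criteria described above can be sketched as a composite index: normalise each criterion across countries, then sum with the chosen weights. The min-max normalisation and the example weights below are assumptions for illustration, not the study's exact construction:

```python
def composite_index(indicators, weights):
    """Weighted development index: min-max normalise each criterion across
    countries, then combine with the given weights.
    indicators: {country: {criterion: value}}; weights: {criterion: weight}."""
    names = list(weights)
    lo = {n: min(c[n] for c in indicators.values()) for n in names}
    hi = {n: max(c[n] for c in indicators.values()) for n in names}

    def norm(v, n):
        # Guard against a criterion that is constant across countries.
        return (v - lo[n]) / (hi[n] - lo[n]) if hi[n] > lo[n] else 0.0

    return {country: sum(weights[n] * norm(vals[n], n) for n in names)
            for country, vals in indicators.items()}
```

Ranking countries by this index, then cutting at percentiles, reproduces the benchmarking step the abstract describes.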
Procedia PDF Downloads 302
145 Beyond the “Breakdown” of Karman Vortex Street
Authors: Ajith Kumar S., Sankaran Namboothiri, Sankrish J., SarathKumar S., S. Anil Lal
Abstract:
A numerical analysis of flow over a heated circular cylinder is done in this paper. The governing equations, i.e. the Navier-Stokes and energy equations within the Boussinesq approximation, along with the continuity equation, are solved using a hybrid FEM-FVM technique. The density gradient created by heating the cylinder induces a buoyancy force opposite to the direction of the acceleration due to gravity, g. In the present work, the flow direction and the direction of the buoyancy force are the same (vertical flow configuration), so that the buoyancy force accelerates the mean flow past the cylinder. The relative dominance of the buoyancy force over the inertia force is characterized by the Richardson number (Ri), which is one of the parameters that govern the flow dynamics and heat transfer in this analysis. It is well known that above a certain value of the Reynolds number, Re (the ratio of inertia forces to viscous forces), unsteady von Karman vortices can be seen shedding behind the cylinder. The shedding wake patterns can be seriously altered by heating or cooling the cylinder. The non-dimensional shedding frequency, called the Strouhal number, is found to increase as Ri increases. The aerodynamic force coefficients CL and CD are observed to change their values. In the present vertical configuration of flow over the cylinder, as Ri increases, the shedding frequency increases and then suddenly drops to zero at a critical value of the Richardson number. The unsteady vortices turn into steady standing recirculation bubbles behind the cylinder beyond this critical Richardson number. This phenomenon is well known in the literature as the "breakdown of the Karman vortex street". It is interesting to see the flow structures on further increase in the Richardson number. On further heating of the cylinder surface, the size of the recirculation bubble decreases without losing its symmetry about the horizontal axis passing through the center of the cylinder. 
The separation angle is found to decrease with Ri. Finally, we observed a second critical Richardson number, after which the flow remains attached to the cylinder surface without any wake behind it. The flow structures are then symmetrical not only about the horizontal axis but also about the vertical axis passing through the center of the cylinder. At this stage, there is a "single plume" emanating from the rear stagnation point of the cylinder. We also observed that the transition to the plume is a strong function of the Richardson number.
Keywords: drag reduction, flow over circular cylinder, flow control, mixed convection flow, vortex shedding, vortex breakdown
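The dimensionless groups named in the abstract above have standard definitions in mixed-convection studies of a heated cylinder; writing them out for orientation (with $D$ the cylinder diameter, $U_\infty$ the free-stream velocity, $\nu$ the kinematic viscosity, $\beta$ the thermal expansion coefficient, $T_s$ and $T_\infty$ the surface and ambient temperatures, and $f$ the shedding frequency; the usual convention $Ri = Gr/Re^2$ is assumed here, the paper may define them slightly differently):

```latex
\[
Re = \frac{U_\infty D}{\nu},
\qquad
Ri = \frac{Gr}{Re^2} = \frac{g\,\beta\,(T_s - T_\infty)\,D}{U_\infty^{2}},
\qquad
St = \frac{f\,D}{U_\infty}
\]
```

With these definitions, "increasing Ri at fixed Re" corresponds to heating the cylinder surface, which is exactly the parameter sweep the abstract describes.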
Procedia PDF Downloads 404
144 Level of Understanding of the Catholic Doctrines in Relation to the Way of Life of Ignatian Graduates
Authors: Maria Wendy Mendoza-Solomo
Abstract:
The study assessed the level of understanding of Catholic doctrines in relation to the way of life of Ignatian graduates of Ateneo de Naga University (ADNU). It was conducted to find out whether ADNU is successful in leading its students to a deeper moral understanding of the world centered on Jesus Christ through its curriculum, academic programs, activities, and practices. The study further evaluated whether the graduates live out their Catholic commitment to Christ in their current way of life. It also determined the factors that affected their level of understanding of Catholic doctrines and their current way of life. Descriptive, qualitative, evaluative, and correlational analyses determined the level of understanding of the Catholic doctrines and the current way of life of 390 graduates. The level of understanding was also correlated to moral life and worship, and the factors that affected the graduates' level of understanding and their current way of life were measured. A researcher-made instrument was distributed to the respondents either in the traditional way or through an online survey in order to reach graduates across the globe. Major findings were: (1) The weighted mean of the graduates' level of understanding of Catholic doctrines was 4.63. (2) Along moral life, it was 4.07, and along worship, 3.83. (3) The Catholic doctrines and moral life had a Pearson r value of 0.79; the doctrines and worship, 0.87; and worship and moral life, 0.89. (4) The understanding of the doctrines was affected most by the teacher factor, with a mean of 4.09; moral life and worship were affected most by the teacher and technological factors, both ranked 1.5 (4.04). (5) Along Catholic doctrines, the teacher factor had an r value of 0.90, and the environmental factor, -0.40. Along moral life, the teacher factor had an r value of -0.30; technological, -0.92; socio-economic, -0.93; political, -0.83; and environmental, -0.90. 
Along worship, the teacher factor had a Pearson r value of 0.36; technological and socio-economic, -0.78; political, -0.73; and environmental, -0.72. Major conclusions were: (1) Graduates had a very high level of understanding of the Catholic doctrines as summarized in the Creed, which is grounded in the Sacred Scriptures. (2) They live out their Catholic commitment to Christ by obeying the Commandments very extensively but need more participation in religious and parish activities. They show overwhelming spirituality and religiosity in terms of receiving the sacraments and sacramental practices, except for reading the Bible and reflecting on its passages. (3) The graduates' level of understanding of the Catholic doctrines had a very strong correlation with their current way of life. (4) Teacher, socio-economic, technological, environmental, and political factors significantly affected their understanding of the Catholic doctrines and their current way of life. (5) The teacher factor had a very strong relationship with the doctrines; technological and political, weak; environmental, moderate; and socio-economic, very weak. The teacher factor had a weak relationship with moral life, while the other factors had very strong relationships with moral life and strong relationships with worship.
Keywords: Catholic doctrines, Ignatian graduates, relationship, way of life
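The Pearson r values reported above come from the standard product-moment formula; a minimal sketch (variable names illustrative):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples:
    r = cov(x, y) / (sd(x) * sd(y))."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    cov = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
    sx = math.sqrt(sum((xi - xb) ** 2 for xi in x))
    sy = math.sqrt(sum((yi - yb) ** 2 for yi in y))
    return cov / (sx * sy)
```

Values near +1 or -1 indicate very strong relationships (as with the 0.89 worship/moral-life figure), while values near 0 indicate weak ones.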
Procedia PDF Downloads 355
143 Nonlinear Modelling of Sloshing Waves and Solitary Waves in Shallow Basins
Authors: Mohammad R. Jalali, Mohammad M. Jalali
Abstract:
The earliest theories of sloshing waves and solitary waves, based on potential theory idealisations and irrotational flow, have been extended to be applicable to more realistic domains. To this end, computational fluid dynamics (CFD) methods are widely used. Three-dimensional CFD methods, such as Navier-Stokes solvers with volume-of-fluid treatment of the free surface and Navier-Stokes solvers with mappings of the free surface, inherently impose high computational expense; therefore, considerable effort has gone into developing depth-averaged approaches. Examples of such approaches include the Green–Naghdi (GN) equations. In the Cartesian system, the GN velocity profile depends on the horizontal directions, x and y; the effect of the vertical direction (z) is also taken into consideration by applying a weighting function in the approximation. GN theory considers the effect of vertical acceleration and the consequent non-hydrostatic pressure. Moreover, in GN theory, the flow is rotational. The present study illustrates the application of the GN equations to the propagation of sloshing waves and solitary waves. For this purpose, the GN equations solver is verified against the benchmark tests of Gaussian hump sloshing and solitary wave propagation in shallow basins. Analysis of the free surface sloshing of even harmonic components of an initial Gaussian hump demonstrates that the GN model gives predictions in satisfactory agreement with the linear analytical solutions. Discrepancies between the GN predictions and the linear analytical solutions arise from wave nonlinearities due to the wave amplitude itself and from wave-wave interactions. Numerically predicted solitary wave propagation indicates that the GN model produces simulations in good agreement with the analytical solution of linearised wave theory. 
Comparison between the GN model numerical prediction and the result from perturbation analysis confirms that the nonlinear interaction between a solitary wave and a solid wall is satisfactorily modelled. Moreover, solitary wave propagation at an angle to the x-axis and the interaction of solitary waves with each other are simulated to validate the developed model.
Keywords: Green–Naghdi equations, nonlinearity, numerical prediction, sloshing waves, solitary waves
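For orientation, one commonly quoted one-dimensional form of the GN (Serre) equations over a horizontal bed, with $h$ the water depth, $u$ the depth-averaged horizontal velocity, and $g$ gravity, is the following; this standard form is given only as a reference point, and the paper's own two-dimensional solver may use a different variant:

```latex
\begin{aligned}
h_t + (hu)_x &= 0,\\
u_t + u\,u_x + g\,h_x &= \frac{1}{3h}\,\partial_x\!\left[h^{3}\left(u_{xt} + u\,u_{xx} - u_x^{2}\right)\right].
\end{aligned}
```

The bracketed term on the right is the non-hydrostatic (vertical-acceleration) correction that distinguishes GN theory from the classical shallow-water equations and lets it propagate solitary waves without excessive steepening.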
Procedia PDF Downloads 285
142 Application of Shore Protective Structures in Optimum Land Using of Defense Sites Located in Coastal Cities
Authors: Mir Ahmad Lashteh Neshaei, Hamed Afsoos Biria, Ata Ghabraei, Mir Abdolhamid Mehrdad
Abstract:
Awareness of effective land use issues in coastal areas, including the protection of natural ecosystems and the coastal environment in the face of increasing human presence along the coast, is of great importance. Numerous valuable structures and heritages are located in defence sites and waterfront areas. Marine structures such as groins, sea walls, and detached breakwaters are constructed on the coast to improve coastal stability against bed erosion under changing wave and climate patterns. Marine mechanisms and their interaction with shore protection structures need to be studied intensively. Groins are one of the most prominent structures used in shore protection to create a safe environment for the coastal area by defending the land against progressive coastal erosion. The main structural function of a groin is to control the longshore current and littoral sediment transport. This structure can be submerged and provide the necessary beach protection without negative environmental impact. However, for submerged structures adopted for beach protection, the shoreline response is not well understood at present. Nowadays, modelling and computer simulation are used to assess beach morphology in the vicinity of marine structures in order to reduce their environmental impact. The objective of this study is to predict the beach morphology in the vicinity of submerged groins and to compare it with that of non-submerged groins, with focus on a part of the coast located in Dahane sar Sefidrood, Guilan province, Iran, where serious coastal erosion has occurred recently. The simulations were obtained using a one-line model, which can serve as a first approximation of shoreline prediction in the vicinity of groins. The results of the proposed model are compared with field measurements to determine the shape of the coast. 
Finally, the results of the present study show that submerged groins can control beach erosion efficiently without causing severe environmental impact on the coast. The findings of this study can be employed in the optimum design of defence sites in coastal cities to improve their efficiency in terms of re-using heritage lands.
Keywords: submerged structures, groin, shore protective structures, coastal cities
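The one-line model mentioned above reduces shoreline evolution to a diffusion-type equation (the Pelnard-Considère approximation), with a groin represented as a boundary that blocks longshore transport. The following is a minimal, hypothetical sketch of an explicit finite-difference solver; the diffusivity, incident wave angle, grid, and boundary treatment are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

def one_line_model(ny=100, dx=50.0, dt=3600.0, t_end=30 * 24 * 3600.0,
                   eps=0.05, tan_alpha=0.1):
    """Explicit FD solution of dy/dt = eps * d2y/dx2 for shoreline
    position y(x, t), with a groin at x = 0 blocking longshore
    transport (illustrative parameters only)."""
    # stability of the explicit scheme requires eps*dt/dx^2 <= 0.5
    r = eps * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable"
    y = np.zeros(ny)                      # initially straight shoreline
    for _ in range(int(t_end / dt)):
        y_new = y.copy()
        y_new[1:-1] = y[1:-1] + r * (y[2:] - 2 * y[1:-1] + y[:-2])
        # groin at x = 0: transport blocked -> dy/dx = tan(incident angle)
        y_new[0] = y_new[1] + dx * tan_alpha
        y_new[-1] = 0.0                   # far field: undisturbed shoreline
        y = y_new
    return y
```

Run with the defaults, the shoreline accretes against the groin and decays monotonically toward the undisturbed far field, the qualitative behaviour a one-line model is meant to capture.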
Procedia PDF Downloads 316
141 The Reality of Food Scarcity in Madhya Pradesh: Is It a Glimpse or Not?
Authors: Kalyan Sundar Som, Ghanshyam Prasad Jhariya
Abstract:
Population growth is a pervasive phenomenon throughout the world. Human survival depends on many daily needs, and food is one of them. Population factors play a decisive role in the human endeavour to obtain food. Nutrition and health status are integral parts of human development and the progress of a society; neglecting any one of these components may lead to a deterioration in the quality of life. Food is also intimately related to economic growth and social progress, as well as to political stability and peace. Food security refers to the availability of food and access to it, and can be observed from the global down to the local level. Food scarcity has emerged as a matter of great concern all over the world due to uncontrolled and unregulated population growth. This study therefore tries to determine the deficit or surplus of food availability relative to the total population in the study area. It also ascertains the population pressure, the demand and supply of foodstuffs, and the demarcation of food-insecure areas. The database of the study includes government-published data on agricultural production, yield, and cropped area for 2005-06 to 2011-12, available from the Commissioner of Land Records, Madhya Pradesh, Gwalior, together with population data from the Census of India. The measurement of food-secure or food-insecure regions is based on the net food available in terms of caloric value minus the consumption by the weighted total population. This approach has been adopted because the direct estimate of production and consumption is the only reliable way to ascertain food security in a unit area and to compare one area with another (Noor Mohammad, Dec. 2002). In 2005-06, 57.78 percent of districts had insufficient food relative to their population. Five years later, only 22 percent of districts were in deficit in terms of food availability, with Burhanpur the most deficient (56 percent).
Meanwhile, 20 percent of districts were high-surplus districts in the state, with Harda and Hoshangabad being very high surplus districts (5 times and 3.95 times the requirement, respectively) in terms of food availability (2011). This drastic change (agricultural transformation) happened due to good government intervention in the agricultural sector.
Keywords: agriculture transformation, caloric value method, deficit or surplus region, population pressure
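The caloric-value balance described above comes down to a simple calculation: net calories produced in a district compared against the calories consumed by its weighted population. The per-capita requirement and district figures below are purely illustrative assumptions, not the study's data.

```python
# Classify districts as food-deficit or food-surplus by comparing net
# caloric production with the requirement of the weighted population.
# All figures below are illustrative, not the study's data.

KCAL_PER_PERSON_PER_DAY = 2200.0   # assumed average requirement

def food_balance(net_kcal_per_year, weighted_population):
    """Ratio of available to required calories: > 1 surplus, < 1 deficit."""
    required = weighted_population * KCAL_PER_PERSON_PER_DAY * 365.0
    return net_kcal_per_year / required

# Hypothetical districts: (name, net kcal/year, weighted population)
districts = [
    ("District A", 9.0e11, 1_000_000),
    ("District B", 3.5e11, 1_000_000),
]
for name, kcal, pop in districts:
    ratio = food_balance(kcal, pop)
    status = "surplus" if ratio >= 1.0 else "deficit"
    print(f"{name}: ratio={ratio:.2f} ({status})")
```

A ratio of, say, 5.0 would correspond to the paper's "5 times" surplus figure for Harda.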
Procedia PDF Downloads 439
140 Development of Numerical Method for Mass Transfer across the Moving Membrane with Selective Permeability: Approximation of the Membrane Shape by Level Set Method for Numerical Integral
Authors: Suguru Miyauchi, Toshiyuki Hayase
Abstract:
Biological membranes have selective permeability, and capsules or cells enclosed by a membrane deform under osmotic flow. This mass transport phenomenon is observed everywhere in a living body. To understand mass transfer in a body, it is necessary to consider the mass transfer across the membrane as well as the deformation of the membrane by a flow. To our knowledge, no numerical method for mass transfer across a moving membrane has been established, due to the difficulty of treating the mass flux permeating through a moving membrane with selective permeability. Existing methods for mass transfer across a membrane use an approximate delta function to communicate quantities on the interface. These methods can reproduce the permeation of a solute, but not non-permeation, and their computational accuracy decreases as the permeability coefficient of the membrane decreases. This study aims to develop a numerical method capable of treating three-dimensional problems of mass transfer across a moving flexible membrane. One of the authors previously developed a high-accuracy numerical method based on the finite element method. It captures the discontinuity on the membrane sharply by accounting for the jumps in concentration and concentration gradient in the finite element discretization. The formulation takes membrane movement into account, and both permeable and non-permeable membranes can be treated. However, the numerical integration requires searching for the intersection points of the membrane with the fluid element boundaries and splitting the fluid elements into sub-elements, which is a cumbersome operation for a three-dimensional problem. In this paper, we propose an improved method that avoids the search and split operations, and confirm its effectiveness.
The membrane shape is treated implicitly by introducing a level set function. To construct the level set function, the membrane shape within one fluid element is expressed by the shape functions of the finite element method. Numerical experiments showed that third-order shape functions reproduce the membrane shapes appropriately. By using a sufficient number of sampling points in the numerical integration, the same level of accuracy was achieved as with the previous method based on search and split operations. The effectiveness of the method was confirmed by solving several model problems.
Keywords: finite element method, level set method, mass transfer, membrane permeability
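The idea of replacing element splitting with level-set-weighted sampling can be illustrated on a toy problem: integrating over the part of a square element cut off by an interface, using many sample points and a sharp Heaviside of the level set instead of splitting the element into sub-elements. The level set, integrand, and sampling rule below are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def integrate_one_side(f, phi, n=200):
    """Integrate f over the part of the unit square where the level
    set phi < 0, using a dense midpoint sampling rule weighted by a
    Heaviside of phi -- no search or element splitting required."""
    h = 1.0 / n
    xs = (np.arange(n) + 0.5) * h          # midpoints in each direction
    X, Y = np.meshgrid(xs, xs)
    inside = phi(X, Y) < 0.0               # sharp Heaviside weight
    return np.sum(f(X, Y) * inside) * h * h

# Interface: vertical line x = 0.3 (phi < 0 is the left sub-domain).
phi = lambda x, y: x - 0.3
f = lambda x, y: np.ones_like(x)

area = integrate_one_side(f, phi)          # exact sub-domain area is 0.3
```

The accuracy of the sub-domain integral is controlled purely by the number of sampling points, which mirrors the trade-off reported in the abstract.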
Procedia PDF Downloads 250
139 Discovering Event Outliers for Drug as Commercial Products
Authors: Arunas Burinskas, Aurelija Burinskiene
Abstract:
On average, ten percent of drugs (commercial products) are not available in pharmacies due to shortage. A shortage event disrupts sales and requires a recovery period that is too long. A critical issue is that pharmacies do not record potential sales transactions during the shortage and recovery periods. The authors therefore suggest estimating outliers during these periods. To shorten the recovery period, the authors suggest predicting average sales per sales day, which helps protect the data from being biased downwards or upwards. The authors use an outlier visualization method across different drugs and apply the Grubbs test for significance evaluation. The researched sample is 100 drugs over a one-month time frame. The authors found that products with high demand variability had outliers. Among the analyzed drugs, which are commercial products: i) high demand variability drugs have a one-week shortage period, and the probability of facing a shortage is 69.23%; ii) mid demand variability drugs have a three-day shortage period, and the likelihood of falling into deficit is 34.62%. To avoid shortage events and minimize the recovery period, real data must be set up. Even though some outlier detection methods exist for drug data cleaning, they have not been used to minimize the recovery period once a shortage has occurred. The authors use Grubbs' test, a real-life data-cleaning method, for outlier adjustment, applied in this paper with a confidence level of 99%. In practice, Grubbs' test has been used to detect outliers in cancer drug data and reported positive results. The test detects outliers that exceed the boundaries of a normal distribution; the result is a probability that indicates the core data of actual sales.
The outlier test represents the difference between the sample mean and the most extreme data point in units of the standard deviation. The test detects one outlier at a time, with different probabilities, from a data set with an assumed normal distribution. Based on approximation data, the authors constructed a framework for scaling potential sales and estimating outliers with Grubbs' test. The suggested framework is applicable during the shortage event and recovery periods. It has practical value and could be used to minimize the recovery period required after a shortage event occurs.
Keywords: drugs, Grubbs' test, outlier, shortage event
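A single iteration of the two-sided Grubbs test described above can be sketched as follows. The sales series is hypothetical; the critical-value formula is the standard one based on the Student t distribution, evaluated here at the paper's 99% confidence level (alpha = 0.01) via SciPy.

```python
import numpy as np
from scipy import stats

def grubbs_test(x, alpha=0.01):
    """Two-sided Grubbs test for a single outlier.

    Returns (is_outlier, index) for the most extreme point."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    idx = int(np.argmax(np.abs(x - mean)))       # most extreme point
    g = abs(x[idx] - mean) / sd                  # Grubbs statistic
    # critical value from the t distribution at alpha/(2n), n-2 dof
    t = stats.t.ppf(1.0 - alpha / (2.0 * n), n - 2)
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
    return g > g_crit, idx

# Hypothetical daily sales with one shortage-period zero (index 7).
sales = [52, 48, 50, 49, 51, 53, 47, 0, 50, 52, 49, 48, 51, 50]
is_outlier, day = grubbs_test(sales)
```

On this series the zero-sales day is flagged as the outlier; repeating the test after removing it would detect further outliers one at a time, as the abstract describes.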
Procedia PDF Downloads 134
138 A Geo DataBase to Investigate the Maximum Distance Error in Quality of Life Studies
Authors: Paolino Di Felice
Abstract:
The background and significance of this study come from papers in the literature that measured the impact of public services (e.g., hospitals, schools) on the satisfaction of citizens' needs (one of the dimensions of QOL studies) by calculating the distance between the place where citizens live and the location of the services on the territory. Those studies assume that a citizen's dwelling coincides with the centroid of the polygon that bounds the administrative district, within the city, they belong to. Such an assumption "introduces a maximum measurement error equal to the greatest distance between the centroid and the border of the administrative district." The case study reported here investigates the implications of adopting such an approach at geographical scales larger than the urban one, namely at the three nesting levels of the Italian administrative units: the (20) regions, the (110) provinces, and the 8,094 municipalities. To carry out this study, two decisions are needed: a) how to store the huge amount of (spatial and descriptive) input data, and b) how to process it. The latter involves: b.1) designing algorithms to investigate the geometry of the boundaries of the Italian administrative units; b.2) coding them in a programming language; b.3) executing them; and, eventually, b.4) archiving the results on permanent storage. The IT solution we implemented is centered around a (PostgreSQL/PostGIS) Geo DataBase structured in terms of three tables that fit the nesting hierarchy of the Italian administrative units: municipality(id, name, provinceId, istatCode, regionId, geometry); province(id, name, regionId, geometry); region(id, name, geometry). The adoption of DBMS technology allows us to implement steps a) and b) easily.
In particular, step b) is simplified dramatically by calling spatial operators and spatial built-in User Defined Functions within SQL queries against the Geo DB. The major findings of our experiments can be summarized as follows. The approximation that, on average, descends from assimilating the residence of the citizens to the centroid of the reference administrative unit is of a few kilometers (4.9 km) at the municipality level, while it becomes conspicuous at the other two levels (28.9 km and 36.1 km, respectively). Therefore, studies such as those mentioned above can be extended up to the municipal level without affecting the correctness of the interpretation of the results, but not further. The IT framework implemented to carry out the experiments can be replicated for studies referring to the territory of other countries all over the world.
Keywords: quality of life, distance measurement error, Italian administrative units, spatial database
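The "maximum measurement error" of the centroid assumption can be illustrated with a small self-contained sketch: computing a polygon's area-weighted centroid and the greatest distance from it to the boundary vertices. The rectangle below is a toy polygon, not Italian administrative data; in the authors' setting the same quantity would likely come from PostGIS operators such as ST_Centroid and ST_MaxDistance.

```python
import math

def centroid(poly):
    """Area-weighted centroid of a simple polygon given as a list of
    (x, y) vertices (shoelace formula)."""
    a = cx = cy = 0.0
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6.0 * a), cy / (6.0 * a)

def max_centroid_distance(poly):
    """Greatest distance from the centroid to the boundary: the maximum
    measurement error of the centroid-as-dwelling assumption.  For a
    polygonal boundary this maximum is attained at a vertex."""
    cx, cy = centroid(poly)
    return max(math.hypot(x - cx, y - cy) for x, y in poly)

# Toy 'district': a 6 km x 8 km rectangle; the farthest border point
# is a corner, 5 km from the centroid.
district = [(0.0, 0.0), (6.0, 0.0), (6.0, 8.0), (0.0, 8.0)]
print(max_centroid_distance(district))   # -> 5.0
```

Running this per administrative unit and averaging is, in essence, the experiment whose results (4.9 km, 28.9 km, 36.1 km) are reported above.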
Procedia PDF Downloads 371
137 Comparison of Finite Difference Schemes for Numerical Study of Ripa Model
Authors: Sidrah Ahmed
Abstract:
River and lake flows are modeled mathematically by the shallow water equations, which are depth-averaged Reynolds-Averaged Navier-Stokes equations under the Boussinesq approximation. Temperature stratification dynamics influence water quality and mixing characteristics, mainly through atmospheric conditions including air temperature, wind velocity, and radiative forcing. Experimental observations are commonly taken along vertical scales and are not sufficient to estimate the small turbulence effects that temperature variations induce in shallow flows. Wind shear stress over the water surface also influences flow patterns, heat fluxes, and the thermodynamics of water bodies. Hence it is crucial to couple temperature gradients with the shallow water model to estimate atmospheric effects on flow patterns. The Ripa system was introduced to study ocean currents as a variant of the shallow water equations with the addition of temperature variations within the flow. The Ripa model is a hyperbolic system of partial differential equations because all the eigenvalues of the system's Jacobian matrix are real and distinct; the time steps of a numerical scheme are estimated from these eigenvalues. The solution of the Riemann problem for the Ripa model is composed of shocks, contact discontinuities, and rarefaction waves. Solving the Ripa model with Riemann initial data using central schemes is difficult due to the eigenstructure of the system. This work presents a comparison of four finite difference schemes for the numerical solution of the Riemann problem for the Ripa model: the Lax-Friedrichs scheme, the Lax-Wendroff scheme, the MacCormack scheme, and a higher-order finite difference scheme with the WENO method. The numerical flux functions in both dimensions are approximated according to these methods, and temporal accuracy is achieved by employing the TVD Runge-Kutta method. Numerical tests are presented to examine the accuracy and robustness of the applied methods.
It is revealed that the Lax-Friedrichs scheme produces results with oscillations, while the Lax-Wendroff and higher-order difference schemes produce considerably better results.
Keywords: finite difference schemes, Riemann problem, shallow water equations, temperature gradients
Procedia PDF Downloads 203