Search results for: conditional proportional reversed hazard rate model

16305 A Case Study on Machine Learning-Based Project Performance Forecasting for an Urban Road Reconstruction Project

Authors: Soheila Sadeghi

Abstract:

In construction projects, predicting project performance metrics accurately is essential for effective management and successful delivery. However, conventional methods often depend on fixed baseline plans, disregarding the evolving nature of project progress and external influences. To address this issue, we introduce a distinct approach based on machine learning to forecast key performance indicators, such as cost variance and earned value, for each Work Breakdown Structure (WBS) category within an urban road reconstruction project. Our proposed model leverages time series forecasting techniques, namely Autoregressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM) networks, to predict future performance by analyzing historical data and project progress. Additionally, the model incorporates external factors, including weather patterns and resource availability, as features to improve forecast accuracy. By harnessing the predictive capabilities of machine learning, our performance forecasting model enables project managers to proactively identify potential deviations from the baseline plan and take timely corrective measures. To validate the effectiveness of the proposed approach, we conduct a case study on an urban road reconstruction project, comparing the model's predictions with actual project performance data. The outcomes of this research contribute to the advancement of project management practices in the construction industry by providing a data-driven solution for enhancing project performance monitoring and control.
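As a rough illustration of the time-series forecasting step described in this abstract, the sketch below fits a simple autoregressive baseline to a cost-variance series by least squares; the monthly figures and the lag order are hypothetical stand-ins, not the paper's data, and the paper's actual ARIMA/LSTM models are more elaborate.

```python
import numpy as np

def ar_forecast(series, p=2):
    """One-step-ahead forecast from an AR(p) model fit by least squares."""
    y = np.asarray(series, dtype=float)
    # Build the lagged design matrix: each row holds the p previous values.
    X = np.column_stack([y[p - k - 1 : len(y) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])            # intercept term
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    last = np.concatenate(([1.0], y[-1 : -p - 1 : -1]))  # [1, y_t, y_{t-1}, ...]
    return float(last @ coef)

# Hypothetical monthly cost-variance series for one WBS category (in $1000s).
cv = [5.0, 4.2, 3.9, 3.1, 2.8, 2.2, 1.9, 1.3]
print(round(ar_forecast(cv, p=2), 2))
```

In a full pipeline this one-step forecast would be recomputed each reporting period and compared against the earned-value baseline to flag deviations early.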

Keywords: project performance forecasting, machine learning, time series forecasting, cost variance, schedule variance, earned value management

16304 Development of a Novel Nanobiosystem for the Selective Nanophotothermolysis of Methicillin-Resistant Staphylococcus aureus Using Anti-MRSA Antibody Functionalized Gold Nanoparticles

Authors: Lucian Mocan, Cristian Matea, Flaviu A. Tabaran, Teodora Mocan, Cornel Iancu

Abstract:

Introduction: Due to antibiotic resistance, systemic infections caused by methicillin-resistant Staphylococcus aureus (MRSA) are a leading cause of millions of deaths each year. Development of new active biomolecules that are highly effective and refractory to antibiotic resistance may open new avenues in the field of antimicrobial therapy. In this research, we have focused on the development of a novel nanobiosystem with high affinity for the MRSA microorganism to mediate its selective laser thermal ablation. Materials and Methods: Gold nanoparticles (15 nm in diameter) linked to a specific antibody against the MRSA surface were selectively delivered (at various concentrations and incubation times) and internalized into the MRSA microorganisms; following this treatment, the multidrug-resistant bacteria were irradiated using a 2 W, 808 nm laser. Results and Discussion: The post-irradiation necrotic rate ranged from 51.2% (for 1 mg/L) to 87.3% (for 50 mg/L) at 60 seconds (p < 0.001), while at 30 minutes the necrotic rate increased from 64.3% (1 mg/L) to 92.1% (50 mg/L), p < 0.001. Significantly lower apoptotic rates were obtained in irradiated MRSA treated with GNPs only (control) for 60 seconds and 30 minutes at concentrations ranging from 1 mg/L to 50 mg/L. We show here that the optimal laser-mediated necrotic effect on MRSA after incubation with anti-MRSA-Ab was obtained at a concentration of 50 mg/L. Conclusion: In the presented research, we obtained a very efficacious pulsed-laser treatment of individual MRSA agents with minimal effects on the surrounding medium, providing highly localized destruction of the MRSA microorganism only.

Keywords: MRSA, photothermolysis, antibiotic resistance, gold nanoparticles

16303 Study of Transport in Electronic Devices with the Stochastic Monte Carlo Method: Modeling and Simulation with a Submicron Gate (Lg = 0.5 µm)

Authors: N. Massoum, B. Bouazza

Abstract:

In this paper, we have developed a numerical simulation model to describe the electrical properties of a GaInP MESFET with a submicron gate (Lg = 0.5 µm). This model takes into account the three-dimensional (3D) distribution of the charge in the short channel and the mobility law as a function of the electric field. Simulation software based on the stochastic Monte Carlo method has been established. The results are discussed and compared with those of experiment. The results suggest that, at very small gate lengths (smaller than 40 nm), short-channel tunneling explains the degradation of transistor performance, which had previously been enhanced by velocity overshoot.
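The ensemble Monte Carlo idea behind such simulators can be illustrated with a heavily simplified toy model: electrons free-fly under a uniform field for exponentially distributed times, then scatter. The relaxation time, the field value and the velocity-reset scattering rule below are illustrative assumptions, not the paper's GaInP transport model.

```python
import random

def drift_velocity(field, n_electrons=2000, n_flights=50,
                   tau=1e-13, mass=9.11e-31, charge=1.6e-19, seed=1):
    """Toy ensemble Monte Carlo: mean drift velocity under a uniform field.

    Electrons free-fly for exponentially distributed times (mean tau),
    then scatter; for brevity the velocity is reset to zero at each event.
    """
    rng = random.Random(seed)
    total_v, total_t = 0.0, 0.0
    accel = charge * field / mass
    for _ in range(n_electrons):
        for _ in range(n_flights):
            t = rng.expovariate(1.0 / tau)     # free-flight duration
            total_v += accel * t * t / 2.0     # time-integrated velocity
            total_t += t
    return total_v / total_t                   # time-averaged drift velocity

print(f"{drift_velocity(1e5):.3e} m/s")       # field of 1e5 V/m
```

A real device simulator would track momentum and energy through band-structure-dependent scattering mechanisms rather than resetting the velocity, but the sampling loop has the same shape.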

Keywords: Monte Carlo simulation, transient electron transport, MESFET device, simulation software

16302 Computer-Aided Diagnosis System Based on Multiple Quantitative Magnetic Resonance Imaging Features in the Classification of Brain Tumor

Authors: Chih Jou Hsiao, Chung Ming Lo, Li Chun Hsieh

Abstract:

Brain tumor is not a cancer with a high incidence rate, but its high mortality rate and poor prognosis still make it a major concern. On clinical examination, the grading of brain tumors depends on pathological features. However, histopathological analysis has weaknesses that can cause misgrading. For example, interpretations can vary in the absence of a well-established definition. Furthermore, the heterogeneity of malignant tumors makes it challenging to extract representative tissue during surgical biopsy. With the development of magnetic resonance imaging (MRI), tumor grading can be accomplished by a noninvasive procedure. To further improve diagnostic accuracy, this study proposed a computer-aided diagnosis (CAD) system based on MRI features to provide suggestions for tumor grading. Gliomas are the most common type of malignant brain tumor (about 70%). This study collected 34 glioblastomas (GBMs) and 73 lower-grade gliomas (LGGs) from The Cancer Imaging Archive. After defining the regions of interest in the MRI images, multiple quantitative morphological features, such as region perimeter, region area, compactness, the mean and standard deviation of the normalized radial length, and moment features, were extracted from the tumors for classification. In the results, two of the five morphological features and three of the four image moment features achieved p values < 0.001, and the remaining moment feature had a p value < 0.05. The CAD system using the combination of all features achieved an accuracy of 83.18% in classifying the gliomas into LGG and GBM, with a sensitivity of 70.59% and a specificity of 89.04%. The proposed system can serve as a second reader in clinical examinations for radiologists.
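A minimal sketch of the morphological features named above (area, perimeter, compactness, normalized radial length), computed from a hypothetical binary ROI mask; the exact feature definitions used by the authors may differ.

```python
import numpy as np

def shape_features(mask):
    """Morphological features from a binary tumor mask (a hypothetical ROI).

    Returns area, perimeter (boundary pixel count), compactness, and the
    mean/std of the normalized radial length (NRL) of boundary pixels.
    """
    mask = np.asarray(mask, dtype=bool)
    area = int(mask.sum())
    # Boundary: foreground pixels with at least one 4-neighbor outside.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = mask & ~interior
    ys, xs = np.nonzero(boundary)
    perimeter = len(ys)
    cy, cx = np.nonzero(mask)[0].mean(), np.nonzero(mask)[1].mean()
    radial = np.hypot(ys - cy, xs - cx)
    nrl = radial / radial.max()                 # normalized radial length
    compactness = perimeter ** 2 / area         # lower = more circular
    return area, perimeter, compactness, float(nrl.mean()), float(nrl.std())

roi = np.zeros((20, 20), dtype=bool)
roi[5:15, 5:15] = True                          # a hypothetical square "tumor"
print(shape_features(roi))
```

In a CAD pipeline, features like these would be fed to a classifier alongside the moment features to separate LGG from GBM cases.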

Keywords: brain tumor, computer-aided diagnosis, gliomas, magnetic resonance imaging

16301 Social Entrepreneurship from an Islamic Perspective: Identifying the Research Gap

Authors: Mohd Adib Abd Muin, Shuhairimi Abdullah, Azizan Bahari

Abstract:

Problem: The research problem is the lack of a model of social entrepreneurship that focuses on the Islamic perspective. Objective: The objective of this paper is to analyse existing models of social entrepreneurship and to identify the research gap concerning the Islamic perspective. Research Methodology: The research method used in this study is a literature review and a comparative analysis of six existing models of social entrepreneurship. Finding: Six existing models of social entrepreneurship were analysed, and the analysis shows that the existing models do not emphasize the Islamic perspective.

Keywords: social entrepreneurship, Islamic perspective, research gap, business management

16300 Development of Site-Specific Colonic Drug Delivery System (Nanoparticles) of Chitosan Coated with pH Sensitive Polymer for the Management of Colonic Inflammation

Authors: Pooja Mongia Raj, Rakesh Raj, Alpana Ram

Abstract:

Background: The use of multiparticulate drug delivery systems in preference to single-unit dosage forms for colon targeting dates back to 1985, when Hardy and co-workers showed that multiparticulate systems enabled the drug to reach the colon quickly and be retained in the ascending colon for a relatively long period of time. Methods: A site-specific colonic drug delivery system (nanoparticles) of 5-ASA was prepared and coated with a pH-sensitive polymer. Chitosan nanoparticles (CTNP) bearing 5-aminosalicylic acid (5-ASA) were prepared by the ionotropic gelation method. The nanoparticulate dosage form consists of a hydrophobic core enteric-coated with the pH-dependent polymer Eudragit S-100 by the solvent evaporation method, for the effective delivery of the drug to the colon for treatment of ulcerative colitis. Results: The mean diameters of the CTNP and ECTNP formulations were 159 and 661 nm, respectively. Optimum polydispersity index values of 0.249 [count rate 251.2 kcps] and 0.170 [count rate 173.9 kcps] were obtained for the two formulations, respectively. Conclusion: CTNP and Eudragit-coated chitosan nanoparticles (ECTNP) were characterized for shape and surface morphology by scanning electron microscopy (SEM) and appeared spherical in shape. The in vitro drug release, investigated using a USP dissolution test apparatus in different simulated GIT fluids, showed promising release. In vivo experiments are proceeding further for fruitful results.

Keywords: colon targeting, nanoparticles, polymer, 5-amino salicylic acid, Eudragit

16299 Soil Liquefaction Hazard Evaluation for Infrastructure in the New Bejaia Quai, Algeria

Authors: Mohamed Khiatine, Amal Medjnoun, Ramdane Bahar

Abstract:

Northern Algeria is a highly seismic zone, as evidenced by its historical seismicity. During the past two decades, it has experienced several moderate to strong earthquakes. Therefore, geotechnical engineering problems that involve dynamic loading of soils and soil-structure interaction require, in the presence of saturated loose sand formations, liquefaction studies. Bejaia city, located north-east of Algiers, Algeria, is part of an alluvial plain which covers an area of approximately 750 hectares. According to the Algerian seismic code, it is classified as a moderate seismicity zone. In the past, this area did not experience urban development because of the various hazards identified by hydraulic and geotechnical studies conducted in the region. The low bearing capacity of the soil, its high compressibility, and the risks of liquefaction and flooding are among these hazards and constrain urbanization. In this area, several structures founded on shallow foundations have suffered damage. Hence, the soils need treatment to reduce the risk. Many field and laboratory investigations, including core drilling, pressuremeter tests, standard penetration tests (SPT), cone penetration tests (CPT) and geophysical downhole tests, were performed at different locations in the area. The major part of the area consists of silty fine sand, sometimes heterogeneous, that has not yet reached a sufficient degree of consolidation. The groundwater depth varies between 1.5 and 4 m. These investigations show that the liquefaction phenomenon is one of the critical problems for geotechnical engineers and one of the obstacles found in the design phase of projects. This paper presents an analysis to evaluate the liquefaction potential using empirical methods based on the Standard Penetration Test (SPT), the Cone Penetration Test (CPT) and shear wave velocity, together with numerical analysis. These liquefaction assessment procedures indicate that liquefaction can occur to considerable depths in the silty sand of the harbor zone of Bejaia.
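As a sketch of the SPT/CPT-style simplified procedure mentioned above, the snippet below evaluates the Seed-Idriss cyclic stress ratio and a liquefaction factor of safety; all soil values are hypothetical example numbers, not the paper's site data.

```python
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """Simplified Seed-Idriss cyclic stress ratio (CSR) at a given depth.

    a_max_g: peak ground acceleration as a fraction of g.
    sigma_v, sigma_v_eff: total and effective vertical stress (kPa).
    Uses the common linear depth-reduction factor rd for z <= 9.15 m.
    """
    rd = 1.0 - 0.00765 * depth_m          # stress reduction coefficient
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

def safety_factor(crr, csr):
    """FS < 1 indicates liquefaction is predicted to trigger."""
    return crr / csr

# Hypothetical layer at 5 m depth in saturated silty sand.
csr = cyclic_stress_ratio(a_max_g=0.25, sigma_v=90.0, sigma_v_eff=55.0,
                          depth_m=5.0)
print(round(csr, 3), safety_factor(0.20, csr) < 1.0)
```

The cyclic resistance ratio (CRR) term would in practice come from SPT blow counts, CPT tip resistance or shear wave velocity correlations, which is where the field data enter.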

Keywords: earthquake, modeling, liquefaction potential, laboratory investigations

16298 Level of Application of Integrated Talent Management According to the IBM Institute for Business Value: A Case Study of Palestinian Governmental Agencies in the Gaza Strip

Authors: Iyad A. A. Abusahloub

Abstract:

This research aimed to measure the level of perception and application of Integrated Talent Management according to IBM standards by the upper and middle management categories in Palestinian government institutions in Gaza, using a descriptive-analytical method. A questionnaire based on the standards of the IBM Institute for Business Value was used, to which the researcher added a second section measuring the perception of integrated talent management; the sample was 248 managers. The SPSS package was used for statistical analysis. The results showed that government institutions in Gaza apply Integrated Talent Management according to IBM standards to a medium degree that did not exceed 59.8%; that there is weakness in the perception of integrated talent management, at a level of 53.6%; and that there is a strong correlation between Integrated Talent Management and the perception of integrated talent management, amounting to 92.9%. Moreover, 88.9% of the variation in the perception of integrated talent management is explained by the dimensions (motivate and develop, deploy and manage, connect and enable, and transform and sustain), and 11.1% by other factors. Conclusion: This study concluded that the integrated talent management model presented by IBM, with its six dimensions, is an effective model for building awareness and understanding of talent management, particularly as it relies on at least four basic dimensions out of the six: 1) stimulating and developing talent; 2) organizing and managing talent; 3) connecting with talent and empowering it; 4) succession and sustainability of talent. Therefore, this study recommends the adoption of the integrated talent management model provided by IBM to any organization across the world, regardless of its specialization or size, to reach talent sustainability.
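The reported correlation (92.9%) can be illustrated with a minimal Pearson-correlation computation of the kind SPSS performs; the questionnaire scores below are invented for illustration and are not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical questionnaire scores: talent-management practice vs. perception.
practice   = [3.1, 2.8, 3.5, 2.2, 4.0, 3.3, 2.9, 3.8]
perception = [2.9, 2.5, 3.4, 2.0, 3.7, 3.0, 2.8, 3.6]
r = pearson_r(practice, perception)
# In a simple one-predictor regression, the explained variance R^2 equals r^2;
# the study's 88.9% figure comes from a regression on four dimensions.
print(round(r, 3), round(r * r, 3))
```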

Keywords: HR, talent, talent management, IBM

16297 Reinforcing the Nagoya Protocol through a Coherent Global Intellectual Property Framework: Effective Protection for Traditional Knowledge Associated with Genetic Resources in Biodiverse African States

Authors: Oluwatobiloba Moody

Abstract:

On October 12, 2014, the Nagoya Protocol, negotiated by Parties to the Convention on Biological Diversity (CBD), entered into force. The Protocol was negotiated to implement the third objective of the CBD which relates to the fair and equitable sharing of benefits arising from the utilization of genetic resources (GRs). The Protocol aims to ‘protect’ GRs and traditional knowledge (TK) associated with GRs from ‘biopiracy’, through the establishment of a binding international regime on access and benefit sharing (ABS). In reflecting on the question of ‘effectiveness’ in the Protocol’s implementation, this paper argues that the underlying problem of ‘biopiracy’, which the Protocol seeks to address, is one which goes beyond the ABS regime. It rather thrives due to indispensable factors emanating from the global intellectual property (IP) regime. It contends that biopiracy therefore constitutes an international problem of ‘borders’ as much as of ‘regimes’ and, therefore, while the implementation of the Protocol may effectively address the ‘trans-border’ issues which have hitherto troubled African provider countries in their establishment of regulatory mechanisms, it remains unable to address the ‘trans-regime’ issues related to the eradication of biopiracy, especially those issues which involve the IP regime. This is due to the glaring incoherence in the Nagoya Protocol’s implementation and the existing global IP system. In arriving at conclusions, the paper examines the ongoing related discussions within the IP regime, specifically those within the WIPO Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore (IGC) and the WTO TRIPS Council. It concludes that the Protocol’s effectiveness in protecting TK associated with GRs is conditional on the attainment of outcomes, within the ongoing negotiations of the IP regime, which could be implemented in a coherent manner with the Nagoya Protocol. 
It proposes specific ways to achieve this coherence. Three main methodological steps have been incorporated in the paper's development. First, a review of data accumulated over a two-year period arising from the coordination of six important negotiating sessions of the WIPO IGC. In this respect, the research benefits from reflections on the political, institutional and substantive nuances which have coloured the IP negotiations and which provide both the context and subtext to emerging texts. Second, a desktop review of the history, nature and significance of the Nagoya Protocol, using relevant primary and secondary literature from international and national sources. Third, a comparative analysis of selected biopiracy cases is undertaken for the purpose of establishing the inseparability of the IP regime and the ABS regime in the conceptualization and development of solutions to biopiracy. A comparative analysis of selected African regulatory mechanisms for the protection of TK (Kenya, South Africa, Ethiopia and the ARIPO Swakopmund Protocol) is also undertaken.

Keywords: biopiracy, intellectual property, Nagoya protocol, traditional knowledge

16296 Deep Reinforcement Learning Model Using Parameterised Quantum Circuits

Authors: Lokes Parvatha Kumaran S., Sakthi Jay Mahenthar C., Sathyaprakash P., Jayakumar V., Shobanadevi A.

Abstract:

With the evolution of technology, the need to solve complex computational problems like machine learning and deep learning has shot up. But even the most powerful classical supercomputers find it difficult to execute these tasks. With the recent development of quantum computing, researchers and tech giants strive for new quantum circuits for machine learning tasks, as present works on Quantum Machine Learning (QML) promise lower memory consumption and fewer model parameters. But it is strenuous to simulate classical deep learning models on existing quantum computing platforms due to the inflexibility of deep quantum circuits. As a consequence, it is essential to design viable quantum algorithms for QML on noisy intermediate-scale quantum (NISQ) devices. The proposed work aims to explore Variational Quantum Circuits (VQC) for deep reinforcement learning by remodeling the experience replay and target network into a representation of VQC. In addition, to reduce the number of model parameters, quantum information encoding schemes are used to achieve better results than classical neural networks. VQCs are employed to approximate the deep Q-value function for decision-making and policy-selection reinforcement learning with experience replay and the target network.
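A toy illustration of a variational quantum circuit producing Q-values, simulated classically on a single qubit with angle encoding; the one-qubit circuit, the one-RY-parameter-per-action scheme and the Z-expectation readout are drastic simplifying assumptions, far smaller than any practical VQC agent.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate as a 2x2 real matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def vqc_q_values(state_angle, params):
    """Q-value per action from a one-qubit variational circuit.

    The state is angle-encoded by RY(state_angle); each action has its own
    trainable RY parameter, and Q is the Z expectation of the final state.
    """
    qs = []
    for theta in params:
        psi = ry(theta) @ ry(state_angle) @ np.array([1.0, 0.0])  # |0> input
        z_exp = psi[0] ** 2 - psi[1] ** 2   # <psi|Z|psi> for real amplitudes
        qs.append(float(z_exp))
    return qs

q = vqc_q_values(state_angle=0.3, params=[0.1, 2.0])
print([round(v, 3) for v in q], int(np.argmax(q)))  # greedy action choice
```

A deep-Q agent would wrap this evaluation in the usual experience-replay loop, with a second parameter set acting as the target network.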

Keywords: quantum computing, quantum machine learning, variational quantum circuit, deep reinforcement learning, quantum information encoding scheme

16295 Systematic Study of Structure Property Relationship in Highly Crosslinked Elastomers

Authors: Natarajan Ramasamy, Gurulingamurthy Haralur, Ramesh Nivarthu, Nikhil Kumar Singha

Abstract:

Elastomers are polymeric materials with varied backbone architectures, ranging from linear to dendrimeric structures, and a wide variety of monomeric repeat units. These elastomers are strongly viscous and only weakly elastic when not crosslinked. When crosslinked, depending on the extent of crosslinking, their properties can range from highly flexible to highly stiff. Lightly crosslinked systems are well studied and reported. Understanding the nature of highly crosslinked rubber in terms of chemical structure and architecture is critical for a variety of applications. One of the critical parameters is the crosslink density. In the current work, we have studied the highly crosslinked state of linear, lightly branched and star-shaped branched elastomers and determined the crosslink density using different models. Changes in hardness, shift in Tg, changes in modulus and swelling behavior were measured experimentally as functions of the extent of curing. These properties were analyzed using various models to determine the crosslink density. We used hardness measurements to examine cure time, and the relationship between hardness and the extent of curing was determined. It is well known that micromechanical transitions like Tg and storage modulus are related to the extent of crosslinking. The Tg of the elastomer in different crosslinked states was determined by DMA, and based on the plateau modulus the crosslink density was estimated using Nielsen's model. Usually, for lightly crosslinked systems, the crosslink density is estimated from the equilibrium swelling ratio in a solvent using the Flory-Rehner model. For highly crosslinked systems, the Flory-Rehner model is not valid because of the short chain lengths between crosslinks. So models based on the assumption of the polymer as a non-Gaussian chain, such as 1) the Helmis-Heinrich-Straube (HHS) model, 2) the model of Gloria M. Gusler and Yoram Cohen, and 3) the model of Barbara D. Barr-Howell and Nikolaos A. Peppas, are used for estimating the crosslink density. In this work, correction factors to the existing models are determined, and based upon them the structure-property relationship of highly crosslinked elastomers is studied.
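For the lightly crosslinked (Gaussian-chain) case mentioned above, the Flory-Rehner swelling estimate can be sketched as follows; the polymer volume fraction, interaction parameter and solvent molar volume are hypothetical example values, not the study's measurements.

```python
import math

def flory_rehner_crosslink_density(v2, chi, v1):
    """Crosslink density (mol of network chains per cm^3) from swelling.

    v2:  polymer volume fraction in the swollen gel
    chi: polymer-solvent interaction parameter
    v1:  molar volume of the solvent (cm^3/mol)
    Valid only for lightly crosslinked (Gaussian-chain) networks, as noted
    in the abstract; non-Gaussian models replace it at high crosslinking.
    """
    numerator = -(math.log(1.0 - v2) + v2 + chi * v2 ** 2)
    denominator = v1 * (v2 ** (1.0 / 3.0) - v2 / 2.0)
    return numerator / denominator

# Hypothetical values: a rubber swollen in toluene (V1 ~ 106.3 cm^3/mol).
n = flory_rehner_crosslink_density(v2=0.25, chi=0.39, v1=106.3)
print(f"{n:.2e} mol/cm^3")
```

For highly crosslinked samples, v2 grows large and the Gaussian assumption behind this formula breaks down, which is exactly why the abstract turns to the HHS and related non-Gaussian models.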

Keywords: dynamic mechanical analysis, glass transition temperature, parts per hundred grams of rubber, crosslink density, number of networks per unit volume of elastomer

16294 Composite Electrospun Aligned PLGA/Curcumin/Heparin Nanofibrous Membranes for Wound Dressing Application

Authors: Jyh-Ping Chen, Yu-Tin Lai

Abstract:

Wound healing is a complicated process involving overlapping hemostasis, inflammation, proliferation, and maturation phases. Ideal wound dressings can replace native skin functions in full-thickness skin wounds through a faster healing rate and by reducing scar formation. Poly(lactic-co-glycolic acid) (PLGA) is a U.S. FDA-approved biodegradable polymer well suited as a wound dressing material. Several in vitro and in vivo studies have demonstrated the effectiveness of curcumin in decreasing the release of inflammatory cytokines, inhibiting enzymes associated with inflammation, and scavenging the free radicals that are a major cause of inflammation during wound healing. Heparin has binding affinities to various growth factors. With the unique and beneficial features offered by these molecules toward the complex process of wound healing, we postulate that a composite wound dressing constructed from PLGA, curcumin and heparin would be a good candidate to accelerate scarless wound healing. In this work, we use electrospinning to prepare curcumin-loaded aligned PLGA nanofibrous membranes (PC NFMs). PC NFMs were further subjected to oxygen plasma modification and surface-grafted with heparin through carbodiimide-mediated covalent bond formation to prepare curcumin-loaded PLGA-g-heparin (PCH) NFMs. The nanofibrous membranes could act as three-dimensional scaffolds to attract fibroblast migration, reduce inflammation, and increase the concentrations of wound-healing-related growth factors at wound sites. From scanning electron microscopy analysis, the nanofibers in each NFM have diameters ranging from 456 to 479 nm and alignment angles within ±0.5°. The NFMs show high tensile strength and good water absorptivity and provide suitable pore sizes for nutrient/waste transport. Exposure of human dermal fibroblasts to the extraction medium of PC or PCH NFMs showed significant protective effects against hydrogen peroxide compared to PLGA NFMs. In vitro wound healing assays also showed that the extraction medium of PCH NFMs induced significantly better migration of fibroblasts than that of PC NFMs, which in turn was better than that of PLGA NFMs. The in vivo healing efficiency of the NFMs was further evaluated in a full-thickness excisional wound healing model in diabetic rats. After 14 days, PCH NFMs exhibited an 86% wound closure rate, significantly different from the other groups (79% for PC and 73% for PLGA NFMs). Real-time PCR analysis indicated that PC and PCH NFMs down-regulated anti-oxidative enzymes such as glutathione peroxidase (GPx) and superoxide dismutase (SOD), well-known markers of cellular responses to oxidative stress. From histology, the wound area treated with PCH NFMs showed more vascular lumen formation in immunohistochemistry of α-smooth muscle actin. The wound site also had more collagen type III (65.8%) expression and less collagen type I (3.5%) expression, indicating scarless wound healing. From Western blot analysis, PCH NFMs showed good affinity toward growth factors, with increased concentrations of transforming growth factor-β (TGF-β) and fibroblast growth factor-2 (FGF-2) at the wound site to accelerate wound healing. From these results, we suggest PCH NFM as a promising candidate for wound dressing applications.

Keywords: curcumin, heparin, nanofibrous membrane, poly(lactic-co-glycolic acid) (PLGA), wound dressing

16293 Numerical Studies on Thrust Vectoring Using Shock-Induced Self Impinging Secondary Jets

Authors: S. Vignesh, N. Vishnu, S. Vigneshwaran, M. Vishnu Anand, Dinesh Kumar Babu, V. R. Sanal Kumar

Abstract:

The study of the primary flow velocity and its mixing with self-impinging secondary jet flow is important from both the fundamental research and the application point of view. Real industrial configurations are more complex than the simple shear layers present in idealized numerical thrust-vectoring models due to the presence of combustion, swirl and confinement. Predicting the flow features of self-impinging secondary jets in a supersonic primary flow is complex owing to the large number of parameters involved. Earlier studies have highlighted several key features of self-impinging jets, but an extensive characterization of the jet interaction between a supersonic flow and self-impinging secondary sonic jets is still an active research topic. In this paper, numerical studies have been carried out using a validated two-dimensional standard k-omega turbulence model for the design optimization of a thrust vector control (TVC) system using shock-induced self-impinging secondary sonic jets in non-reacting flows. Efforts were made to examine the flow features of the TVC system with various secondary jets at different divergent locations and jet impinging angles, with the same inlet jet pressure and mass flow ratio. The results of the parametric studies reveal that, in addition to the primary-to-secondary mass flow ratio, the characteristics of the self-impinging secondary jets have a bearing on efficient thrust vectoring. We conclude that self-impinging secondary jet nozzles are better than a single jet nozzle with the same secondary mass flow rate, owing to the fact that fixing the self-impinging secondary jet nozzles at a proper jet angle could facilitate better thrust vectoring for any supersonic aerospace vehicle.

Keywords: fluidic thrust vectoring, rocket steering, supersonic to sonic jet interaction, TVC in aerospace vehicles

16292 Heat Transfer Analysis of a Multiphase Oxygen Reactor Heated by a Helical Tube in the Cu-Cl Cycle of a Hydrogen Production

Authors: Mohammed W. Abdulrahman

Abstract:

In the thermochemical water splitting process of the Cu-Cl cycle, oxygen gas is produced by an endothermic thermolysis process at a temperature of 530 °C. The oxygen production reactor is a three-phase reactor involving cuprous chloride molten salt, copper oxychloride solid reactant and oxygen gas. To achieve optimal performance, the oxygen reactor requires accurate control of heat transfer to the molten salt and the decomposing solid particles within the thermolysis reactor. In this paper, a scale-up analysis of an oxygen reactor heated by an internal helical tube is performed from the perspective of heat transfer. A heat balance of the oxygen reactor is investigated to analyze the size of the reactor that provides the required heat input for different rates of hydrogen production. It is found that the helical tube wall and the service side constitute the largest thermal resistances of the oxygen reactor system. In the analysis of this paper, the Cu-Cl cycle is assumed to be heated by two types of nuclear reactor, the HTGR and the CANDU SCWR. It is concluded that using the CANDU SCWR requires a heat transfer rate 3-4 times higher than that with the HTGR. The effect of the reactor aspect ratio is also studied; it is found that increasing the aspect ratio decreases the number of reactors, and that the rate of this decrease declines as the aspect ratio increases. Comparisons between the results of this study and previous results of material balances in the oxygen reactor show that the size of the oxygen reactor is dominated by the heat balance rather than the material balance.
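The series-of-thermal-resistances heat balance described above can be sketched as follows; the tube geometry, heat transfer coefficients and driving temperature difference are hypothetical illustrative values, not the paper's data.

```python
import math

def series_thermal_resistance(h_in, h_out, r_in, r_out, k_wall, length):
    """Overall thermal resistance (K/W) of a heated tube: convection on the
    service (inner) side, conduction through the tube wall, and convection
    to the molten salt outside, summed in series.
    """
    a_in = 2.0 * math.pi * r_in * length
    a_out = 2.0 * math.pi * r_out * length
    r_conv_in = 1.0 / (h_in * a_in)                               # service side
    r_wall = math.log(r_out / r_in) / (2.0 * math.pi * k_wall * length)
    r_conv_out = 1.0 / (h_out * a_out)                            # molten salt side
    return r_conv_in + r_wall + r_conv_out

# Hypothetical helical-tube values (SI units).
R = series_thermal_resistance(h_in=300.0, h_out=2000.0,
                              r_in=0.012, r_out=0.015, k_wall=16.0,
                              length=10.0)
heat_rate = 120.0 / R        # W, for a hypothetical 120 K temperature difference
print(f"R = {R:.4f} K/W, Q = {heat_rate:.0f} W")
```

With these example numbers the inner (service-side) convection term dominates the total, consistent with the abstract's observation that the tube wall and service side are the largest resistances.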

Keywords: heat transfer, Cu-Cl cycle, hydrogen production, oxygen, clean energy

16291 Fate of Sustainability and Land Use Array in Urbanized Cities

Authors: Muhammad Yahaya Ubale

Abstract:

A substantial rate of urbanization, together with economic growth, presents both challenges and prospects for sustainability. The objectives of the paper are: to ascertain the fate of sustainability in urbanized cities; and to identify the challenges of the land use array in urbanized cities. The methodology engaged in this paper relied on secondary data, making effective use of articles, conference proceedings, seminar papers and other literature. The paper established that, while one thinks globally, it is necessary to act locally if sustainability is to be achieved at all, and that the speed and scale of urbanization must be matched by environmental and economic deliberations. It also identified a platform that allows a city to work together as an ideal conglomerate, engaging all city departments as a source of services, and engaging residents, businesses, and contractors. It further revealed that a city should act as a leader and partner within its urban region, engaging senior government officials, utilities, rural settlements, private sector stakeholders, NGOs, and academia. Cities should integrate infrastructure system design and management to enhance the efficiency of resource flows in an urban area. They should also coordinate spatial development: integrating urban forms and urban flows, and combining land use, urban design, urban density, and other spatial attributes with infrastructural development. Finally, by 2050, urbanized cities alone could be consuming 140 billion tons of minerals, ores, fossil fuels and biomass annually (three times the current rate of consumption); sustainability can be accomplished through land use control, limited access to finite resources, facilities, utilities and services, as well as property rights and user charges.

Keywords: sustainability, land use array, urbanized cities, fate of sustainability and perseverance

16290 The Prototype of Solar Energy Utilization for Finding Sustainable Conditions in the Future: A Solar Community with 4000 Dwellers in 960 Families, Equal to 480 Solar Dwelling Houses and 32 Mansion Buildings (480 Dwellers)

Authors: Kunihisa Kakumoto

Abstract:

This technical paper presents a prototype of solar energy utilization for finding sustainable conditions. The model has been simulated under the climate conditions of Japan. At the beginning of the study, a solar model house was built on site, and the relevant data were collected in this model house for several years. On the basis of these collected data, the concept of the solar community was developed. To find sustainable conditions, the following quantities were calculated: the amount of solar energy generated and its reduction of carbon dioxide; the reduction of carbon dioxide by green planting; the amount of carbon dioxide emitted by normal daily life in the solar community; the amount of water necessary for daily life in the solar community; and the amount of water supplied by rainfall on site. All of these values were taken into consideration, and the relations between the calculated results are expressed as inequalities. This solar community and its analysis of sustainable conditions can serve as one prototype for a feasibility study of our life in the future.
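The sustainability condition the abstract expresses as inequalities can be sketched as a simple balance check. The function and all figures below are illustrative placeholders, not data from the study:

```python
def sustainable(co2_offset_pv, co2_offset_green, co2_daily_life,
                rain_supply, water_demand):
    """The community is 'sustainable' in this simplified sense when
    emissions avoided by solar generation plus emissions absorbed by
    green planting cover the emissions of daily life, and on-site
    rainfall covers the water demand."""
    carbon_ok = co2_offset_pv + co2_offset_green >= co2_daily_life
    water_ok = rain_supply >= water_demand
    return carbon_ok and water_ok

# Hypothetical annual figures (tons CO2, m^3 water):
balanced = sustainable(100, 30, 120, 500, 400)
```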

Keywords: carbon dioxide, green planting, smart city, solar community, sustainable condition, water activity

Procedia PDF Downloads 278
16289 The Survival of Bifidobacterium longum in Frozen Yoghurt Ice Cream and Its Properties Affected by Prebiotics (Galacto-Oligosaccharides and Fructo-Oligosaccharides) and Fat Content

Authors: S. Thaiudom, W. Toommuangpak

Abstract:

Yoghurt ice cream (YIC) containing prebiotics and probiotics is increasingly recognized among health-conscious consumers: it not only benefits consumers' health, but its taste and freshness also make it easy to accept. However, the survival of probiotics, especially Bifidobacterium longum, which is found in the human gastrointestinal tract and is beneficial to the human gut, still needs to be studied under the severe conditions of whipping and freezing in the ice cream process. Low-fat and full-fat yoghurt ice cream containing 2 and 10% (w/w) fat (LYIC and FYIC), respectively, was produced by mixing 20% yoghurt containing B. longum into a milk ice cream mix. Fructo-oligosaccharides (FOS) or galacto-oligosaccharides (GOS) at 0, 1, and 2% (w/w) were separately used as prebiotics in order to improve the survival of B. longum. Survival of the bacteria as a function of storage time and the properties of the ice cream were investigated. The results showed that prebiotics, especially FOS, could improve the viable count of B. longum: the higher the concentration of prebiotic used, the greater the survival of B. longum. These prebiotics could prolong the survival of B. longum for up to 60 days, with the surviving number still at the recommended level (10⁶ cfu per gram). Fat content and prebiotics did not significantly affect the total acidity or the overrun of the samples, but an increase in fat content significantly increased the fat particle size, probably because partial coalescence occurred in FYIC rather than in LYIC. However, the addition of GOS or FOS could reduce the fat particle size, especially in FYIC. GOS appeared to reduce the hardness of YIC more than FOS did. High fat content (10%) lowered the melting rate of YIC significantly more than 2% fat did, as the three-dimensional networks of partial fat coalescence theoretically occur more in FYIC than in LYIC. However, FOS appeared to retard the melting rate of the ice cream better than GOS. In conclusion, GOS and FOS in YIC with different fat contents can enhance the survival of B. longum and affect the physical and chemical properties of such yoghurt ice cream.

Keywords: Bifidobacterium longum, prebiotic, survival, yoghurt ice cream

Procedia PDF Downloads 152
16288 Grammar as a Logic of Labeling: A Computer Model

Authors: Jacques Lamarche, Juhani Dickinson

Abstract:

This paper introduces a computational model of a Grammar as Logic of Labeling (GLL), in which the lexical primitives of morphosyntax are phonological matrixes, the forms of words, understood as labels that apply to realities (or targets) assumed to lie outside of grammar altogether. The hypothesis is that even though a lexical label relates to its target arbitrarily, the label in a complex (constituent) label is part of a labeling pattern which, depending on its value (i.e., N, V, Adj, etc.), imposes language-specific restrictions on what it targets outside of grammar (in the world/semantics or in cognitive knowledge). Lexical forms categorized as nouns, verbs, adjectives, etc., are effectively targets of labeling patterns in use. The paper illustrates GLL through a computer model of basic patterns in English NPs. A constituent label is a binary object that encodes: i) alignment of input forms, so that labels occurring at different points in time are understood as applying at once; ii) endocentric structuring: every grammatical constituent has a head label that determines the target of the constituent and a limiter label (the non-head) that restricts this target. The N and A values are restricted to limiter labels, the two differing in terms of alignment with a head. Consider the head-initial DP 'the dog': the label 'dog' gets an N value because it is a limiter evenly aligned with the head 'the', restricting the application of the DP. Adapting a traditional analysis of 'the' to GLL (apply the label to something familiar), the DP targets and identifies one reality familiar to the participants by applying to it the label 'dog' (singular). Consider next the DP 'the large dog': 'large dog' is nominal by even alignment with 'the', as before, and since 'dog' is the head of the (head-final) 'large dog', it is also nominal.
The label 'large', however, is adjectival by narrow alignment with the head 'dog': it does not target the head but targets a property of what 'dog' applies to (a property or value of an attribute). In other words, the internal composition of constituents determines whether a form targets a property or a reality: 'large' and 'dog' happen to be valid targets to realize this constituent. In the presentation, the computer model of the analysis derives the eight possible sequences of grammatical values with three labels after the determiner (the x y z): 1) D [ N [ N N ]]; 2) D [ A [ N N ]]; 3) D [ N [ A N ]]; 4) D [ A [ A N ]]; 5) D [[ N N ] N ]; 6) D [[ A N ] N ]; 7) D [[ N A ] N ]; 8) D [[ Adv A ] N ]. This approach suggests that a computer model of these grammatical patterns could be used to construct ontologies/knowledge from speakers' judgments about the validity of lexical meanings in grammatical patterns.

Keywords: syntactic theory, computational linguistics, logic and grammar, semantics, knowledge and grammar

Procedia PDF Downloads 30
16287 Non-Targeted Adversarial Object Detection Attack: Fast Gradient Sign Method

Authors: Bandar Alahmadi, Manohar Mareboyana, Lethia Jackson

Abstract:

Today, many applications use computer vision models, such as face recognition, image classification, and object detection, and the accuracy of these models is very important for the performance of these applications. One challenge facing computer vision models is the adversarial example attack. In computer vision, an adversarial example is an image that is intentionally designed to cause a machine learning model to misclassify it. One well-known method used to attack Convolutional Neural Networks (CNNs) is the Fast Gradient Sign Method (FGSM). The goal of this method is to find a perturbation that can fool the CNN, using the gradient of the CNN's cost function. In this paper, we introduce a novel model that attacks a Region-based Convolutional Neural Network (R-CNN) using FGSM. We first extract the regions detected by the R-CNN and resize these regions to the size of regular images. We then find the best perturbation of the regions that can fool the CNN using FGSM. Next, we add the resulting perturbation to the attacked region to obtain a new region image that looks similar to the original image to human eyes. Finally, we place the regions back in the original image and test the R-CNN with the attacked images. Our model was able to drop the accuracy of the R-CNN when tested on the Pascal VOC 2012 dataset.
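The core FGSM step can be sketched on a toy logistic classifier standing in for a CNN, with the input gradient computed analytically; the weights and epsilon below are made-up values for illustration, not the paper's setup:

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step: move x by eps in the direction of the sign of the
    gradient of the cross-entropy loss with respect to the input."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted probability of class 1
    grad_x = (p - y) * w                    # d loss / d x for logistic loss
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy example: a clean input confidently classified as class 1 ...
w, b = np.array([4.0, -4.0]), 0.0
x = np.array([0.9, 0.1])
# ... is pushed across the decision boundary by the perturbation.
x_adv = fgsm_perturb(x, 1, w, b, eps=0.5)
```

In the paper's setting the same sign-of-gradient step is applied to the resized region images rather than to a feature vector.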

Keywords: adversarial examples, attack, computer vision, image processing

Procedia PDF Downloads 186
16286 Proposal of a Model Supporting Decision-Making on Information Security Risk Treatment

Authors: Ritsuko Kawasaki, Takeshi Hiromatsu

Abstract:

Management is required to understand all information security risks within an organization and to decide which risks should be treated, at what level, and at what cost. However, such decision-making is not usually easy, because various measures for risk treatment must be selected at suitable application levels, and some measures may have objectives that conflict with each other, which also makes the selection difficult. Therefore, this paper provides a model that supports the selection of measures by applying multi-objective analysis to find an optimal solution. Additionally, a list of measures is provided to make the selection easier and more effective, without any measures being overlooked.
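A minimal sketch of the multi-objective selection idea: score subsets of measures under a cost budget with a weighted objective. The measure names, costs, and weights are hypothetical; the paper's actual measure list and analysis method are not reproduced here:

```python
from itertools import combinations

# Hypothetical measures: (name, cost, risk_reduction, usability_penalty)
measures = [
    ("patching",   3, 5, 1),
    ("mfa",        2, 4, 2),
    ("encryption", 4, 6, 1),
    ("training",   1, 2, 0),
]

def best_selection(budget, w_risk=1.0, w_usability=0.5):
    """Brute-force the weighted multi-objective score over all subsets
    of measures whose total cost fits the budget."""
    best, best_score = (), float("-inf")
    for r in range(len(measures) + 1):
        for combo in combinations(measures, r):
            cost = sum(m[1] for m in combo)
            if cost > budget:
                continue
            score = (w_risk * sum(m[2] for m in combo)
                     - w_usability * sum(m[3] for m in combo))
            if score > best_score:
                best, best_score = combo, score
    return [m[0] for m in best], best_score
```

A real instance would replace the brute-force loop with a proper multi-objective optimizer, but the trade-off between conflicting objectives is the same.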

Keywords: information security risk treatment, selection of risk measures, risk acceptance, multi-objective optimization

Procedia PDF Downloads 370
16285 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem

Authors: Nan Xu

Abstract:

In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with days off, training, and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the rostering objective consists of two major components: the first is to minimize the number of unassigned pairings, and the second is to ensure fairness to crew members. There are two measures of fairness to crew members: the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly-hours are as close to the expected average as possible. Deviations from the expected average are penalized in the objective function; since several small deviations are preferred to one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set partitioning problem, and exactly one roster is picked for each crew member such that the pairings are covered. The restricted linear master problem (RLMP) is considered. The subproblem tries to find columns with negative reduced costs and adds them to the RLMP for the next iteration; when no column with negative reduced cost can be found or a stopping criterion is met, the procedure ends. The subproblem generates feasible crew rosters for each crew member.
A separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource-constrained shortest path problem in the graph, solved with a labeling algorithm. Since the penalization is quadratic, a method to handle this non-additive shortest path problem with a labeling algorithm is proposed, and the corresponding domination condition is defined. The major contributions of our model are: 1) a method to deal with the non-additive shortest path problem; 2) an operation that allows relaxing some soft rules, which can improve the coverage rate; 3) multi-thread techniques to improve the efficiency of the algorithm when generating lines of work for crew members. In summary, a column generation based algorithm for the airline cabin crew rostering problem is proposed. The objective is to assign a personalized roster to each crew member that minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm proposed in this paper has been put into production at a major airline in China, and numerical experiments show that it performs well.
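The quadratic fairness penalty described above can be sketched as follows; the weights and fly-hour figures are illustrative, not from the airline's production system:

```python
def roster_cost(assigned_hours, expected_avg, unassigned,
                w_unassigned=100.0, w_fair=1.0):
    """Objective sketch: unassigned pairings are penalized linearly,
    fairness deviations quadratically, so several small deviations
    cost less than one large deviation of the same total size."""
    fairness = sum((h - expected_avg) ** 2 for h in assigned_hours)
    return w_unassigned * unassigned + w_fair * fairness

# Two crews deviating by 2 hours each is cheaper than one crew
# deviating by 4 hours -- the point of the quadratic term, and the
# reason the subproblem's shortest path cost is non-additive.
small = roster_cost([78, 82], expected_avg=80, unassigned=0)   # 2^2 + 2^2
large = roster_cost([80, 84], expected_avg=80, unassigned=0)   # 0^2 + 4^2
```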

Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC

Procedia PDF Downloads 142
16284 Financial Inclusion for Inclusive Growth in an Emerging Economy

Authors: Godwin Chigozie Okpara, William Chimee Nwaoha

Abstract:

The paper sets out to show how a financial inclusion index can be calculated and investigates the impact of inclusive finance on inclusive growth in an emerging economy. In light of these objectives, the chi-wins method was used to calculate indexes of financial inclusion, while co-integration and an error correction model were used to evaluate the impact of financial inclusion on inclusive growth. The analysis revealed that financial inclusion, while having a long-run relationship with GDP growth, is an insignificant function of the growth of the economy. The speed of adjustment is correctly signed and significant. On the basis of these results, the researchers call for tireless efforts by government and the banking sector to promote financial inclusion in developing countries.
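Since the chi-wins method itself is not spelled out in the abstract, the sketch below instead illustrates the general idea of a multidimensional financial inclusion index using a standard distance-to-ideal formulation; the dimension names and values are hypothetical:

```python
import math

def inclusion_index(dimensions):
    """Distance-based index: 1 minus the normalized Euclidean distance
    of the dimension scores (each in [0, 1]) from the ideal point (1, ..., 1).
    Returns 1 for full inclusion, 0 for complete exclusion."""
    d = list(dimensions.values())
    dist = math.sqrt(sum((1 - x) ** 2 for x in d))
    return 1.0 - dist / math.sqrt(len(d))

# Hypothetical country with partial penetration, availability, and usage:
index = inclusion_index({"penetration": 0.6, "availability": 0.4, "usage": 0.5})
```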

Keywords: chi-wins index, co-integration, error correction model, financial inclusion

Procedia PDF Downloads 644
16283 Intelligent Diagnostic System of the Onboard Measuring Devices

Authors: Kyaw Zin Htut

Abstract:

This article describes the synthesis of an efficient intelligent diagnostic system for onboard aircraft measuring devices. The technology of the diagnostic system is developed based on the error models of the gyro instruments used to measure the parameters of the aircraft. The synthesis of the intelligent diagnostic system is considered through the example problem of assessing and forecasting the errors of onboard gyroscope devices. The result is a system that detects faults in the aircraft measuring devices and analyzes the measuring equipment to improve the efficiency of its work.
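One simple instance of assessing and forecasting gyro errors, assuming a linear drift model fitted by least squares; the paper's actual error models are not specified in the abstract, so this is only a sketch of the forecasting step:

```python
import numpy as np

def fit_drift(t, readings):
    """Fit reading = bias + rate * t by least squares and return a
    forecaster for the expected error at a future time."""
    A = np.vstack([np.ones_like(t), t]).T
    (bias, rate), *_ = np.linalg.lstsq(A, readings, rcond=None)
    return lambda t_future: bias + rate * t_future

# Synthetic drift data: 0.5 deg initial bias, 0.1 deg/h drift rate.
t = np.array([0.0, 1.0, 2.0, 3.0])
forecast = fit_drift(t, 0.5 + 0.1 * t)
```

A fault could then be flagged when a new reading deviates from the forecast by more than an instrument-specific threshold.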

Keywords: diagnostic, dynamic system, errors of gyro instruments, model errors, assessment, prognosis

Procedia PDF Downloads 391
16282 A Study of Social Media Users’ Switching Behavior

Authors: Chiao-Chen Chang, Yang-Chieh Chin

Abstract:

Social media has changed the way network communities cluster: the location of the community has moved from the original virtual space to an intertwined network, so communication between people has shifted from face-to-face communication to a social media-based communication model. However, social media users with an established engagement may intend to switch to another service provider because of the emergence of new forms of social media. For example, some Facebook or Twitter users switched to Instagram in 2014 because of message or image overload, seeking simpler and more instant social media as their main social networking tool. This study explores the impact of system feature overload, information overload, social monitoring concerns, problematic use, and privacy concerns as antecedents of social media fatigue, dissatisfaction, and alternative attractiveness, which in turn influence social media switching. The study uses an online questionnaire survey to collect the sample data and then performs confirmatory factor analysis, path analysis, model fit analysis, and mediation analysis with a structural equation model (SEM). The research findings demonstrate significant effects on multiple paths. Based on these findings, the study puts forward theoretical and practical implications.

Keywords: social media, switching, social media fatigue, alternative attractiveness

Procedia PDF Downloads 135
16281 Novel Adaptive Radial Basis Function Neural Networks Based Approach for Short-Term Load Forecasting of Jordanian Power Grid

Authors: Eyad Almaita

Abstract:

In this paper, a novel adaptive Radial Basis Function Neural Network (RBFNN) algorithm is used to forecast the hour-by-hour electrical load demand in Jordan. A small and effective RBFNN model forecasts the hourly total load demand based on a small number of features: the load in the previous day, the load in the same day of the previous week, the temperature in the same hour, the hour number, the day number, and the day type. The proposed adaptive RBFNN model enhances the reliability of the conventional RBFNN after the network is embedded in the system. This is achieved by introducing an adaptive algorithm that allows the weights of the RBFNN to change after the training process is completed, which eliminates the need to retrain the RBFNN model. The data used in this paper are real data measured by the National Electrical Power Co. (Jordan). The data for the period Jan. 2012-April 2013 are used to train the RBFNN models, and the data for the period May 2013-Sep. 2013 are used to validate the models' effectiveness.
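A minimal sketch of the adaptive idea: after training, only the linear output weights are updated online each time a new measured load arrives, here with a simple LMS step (the paper's exact adaptive rule is not given in the abstract; the centers, width, and learning rate below are illustrative):

```python
import numpy as np

def rbf_features(x, centers, width):
    # Gaussian basis-function activations for a single input vector
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * width ** 2))

def adaptive_update(w, x, target, centers, width, lr=0.5):
    """LMS step on the output weights only, so the deployed network keeps
    tracking the load without a full retraining pass."""
    phi = rbf_features(x, centers, width)
    error = target - phi @ w
    return w + lr * error * phi, error

# Drive the output weights toward a new load level at a fixed input.
centers, width = np.array([[0.0], [1.0]]), 0.5
w = np.zeros(2)
for _ in range(50):
    w, err = adaptive_update(w, np.array([0.0]), 1.0, centers, width)
```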

Keywords: load forecasting, adaptive neural network, radial basis function, short-term, electricity consumption

Procedia PDF Downloads 337
16280 Study the Relationship amongst Digital Finance, Renewable Energy, and Economic Development of Least Developed Countries

Authors: Fatima Sohail, Faizan Iftikhar

Abstract:

This paper studies the relationship between digital finance, renewable energy, and the economic development of Pakistan and other least developed countries from 2000 to 2022. It uses panel analysis and the Arellano-Bond generalized method of moments approach. The findings show that under the growth model, renewable energy (RE) has a strong and favorable link with fixed broadband (FB) and mobile (MD) subscriptions. However, FB and MD have a strong but negative association with the uptake of renewable energy in the average and simple models. This paper provides valuable insights for policymakers and investors in the digital economy.

Keywords: digital finance, renewable energy, economic development, mobile subscription, fixed broadband

Procedia PDF Downloads 28
16279 River Stage-Discharge Forecasting Based on Multiple-Gauge Strategy Using EEMD-DWT-LSSVM Approach

Authors: Farhad Alizadeh, Alireza Faregh Gharamaleki, Mojtaba Jalilzadeh, Houshang Gholami, Ali Akhoundzadeh

Abstract:

This study presents a hybrid pre-processing approach along with a conceptual model to enhance the accuracy of river discharge prediction. To achieve this goal, the Ensemble Empirical Mode Decomposition (EEMD) algorithm, the Discrete Wavelet Transform (DWT), and Mutual Information (MI) were employed as a hybrid pre-processing approach coupled with a Least Squares Support Vector Machine (LSSVM). A conceptual strategy, namely a multi-station model, was developed to forecast the Souris River discharge more accurately. This strategy is capable of covering the uncertainties and complexities of river discharge modeling. DWT and EEMD were coupled, and feature selection was performed on the decomposed sub-series using MI for use in the multi-station model. In the proposed feature selection method, some unhelpful sub-series were omitted to achieve better performance. The results confirmed the efficiency of the proposed DWT-EEMD-MI approach in improving the accuracy of multi-station modeling strategies.
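The MI-based screening of decomposed sub-series can be sketched with a simple histogram estimator; the sub-series below are synthetic stand-ins for EEMD/DWT outputs, not the study's data:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of MI between a candidate sub-series and the target."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def select_subseries(subseries, target, k=3, bins=8):
    """Keep the k decomposed sub-series most informative about discharge,
    omitting the unhelpful ones."""
    scored = sorted(subseries.items(),
                    key=lambda kv: mutual_information(kv[1], target, bins),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# Synthetic stand-ins: one informative sub-series, one pure noise.
rng = np.random.default_rng(0)
target = rng.normal(size=500)
subseries = {"d1": target + 0.1 * rng.normal(size=500),
             "d2": rng.normal(size=500)}
selected = select_subseries(subseries, target, k=1)
```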

Keywords: river stage-discharge process, LSSVM, discrete wavelet transform, ensemble empirical mode decomposition, multi-station modeling

Procedia PDF Downloads 171
16278 A Summary-Based Text Classification Model for Graph Attention Networks

Authors: Shuo Liu

Abstract:

In Chinese text classification tasks, redundant words and phrases can interfere with the extraction and analysis of text information, decreasing the accuracy of the classification model. The aim is to reduce irrelevant elements, extract and utilize text content information more efficiently, and improve the accuracy of text classification models. In this paper, a summary of each text in the corpus is first extracted using the TextRank algorithm; the words in the summary are used as nodes to construct a text graph, and a graph attention network (GAT) then completes the task of classifying the text. Testing on a Chinese dataset from the web, the classification accuracy improved over the direct method of generating graph structures from the full text.
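The TextRank extraction step can be sketched as PageRank-style power iteration over a similarity graph; the tiny matrix below is illustrative (in the paper, the nodes would come from the Chinese texts):

```python
import numpy as np

def textrank(similarity, d=0.85, iters=50):
    """Power iteration for TextRank scores over a similarity matrix."""
    n = similarity.shape[0]
    W = similarity.astype(float).copy()
    np.fill_diagonal(W, 0.0)          # no self-links
    col = W.sum(axis=0)
    col[col == 0] = 1.0               # avoid division by zero for isolated nodes
    P = W / col                       # column-normalized transition matrix
    scores = np.ones(n) / n
    for _ in range(iters):
        scores = (1 - d) / n + d * P @ scores
    return scores

# Node 0 is similar to both others; it should rank highest and be kept
# for the summary, while the peripheral nodes tie.
sim = np.array([[0.0, 1.0, 1.0],
                [1.0, 0.0, 0.0],
                [1.0, 0.0, 0.0]])
scores = textrank(sim)
```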

Keywords: Chinese natural language processing, text classification, abstract extraction, graph attention network

Procedia PDF Downloads 90
16277 Supplier Relationship Management Model for SMEs' E-Commerce Transaction Broker Case Study: Hotel Rooms Provider

Authors: Veronica S. Moertini, Niko Ibrahim, Verliyantina

Abstract:

As market intermediary firms, e-commerce transaction brokers need to collaborate strongly with suppliers in order to develop the brands sought by customers. Developing a suitable electronic Supplier Relationship Management (e-SRM) system is the solution to this need. In this paper, we propose our concept of e-SRM for transaction brokers owned by small and medium enterprises (SMEs), which includes an integrated e-SRM and e-CRM architecture and the e-SRM applications with their functions. We then discuss the customization and implementation of the proposed e-SRM model in a specific transaction broker selling hotel rooms, KlikHotel.com, which is owned by an SME. The implementation of e-SRM in KlikHotel.com has successfully boosted the number of suppliers (hotel members) and hotel room sales.

Keywords: e-CRM, e-SRM, SME, transaction broker

Procedia PDF Downloads 489
16276 Sphere in Cube Grid Approach to Modelling of Shale Gas Production Using Non-Linear Flow Mechanisms

Authors: Dhruvit S. Berawala, Jann R. Ursin, Obrad Slijepcevic

Abstract:

Shale gas is one of the most rapidly growing forms of natural gas. Unconventional natural gas deposits are difficult to characterize overall, but in general they are lower in resource concentration and dispersed over large areas. Moreover, gas is densely packed into the matrix through adsorption, which accounts for a large volume of gas reserves. Gas production from tight shale deposits is made possible by extensive and deep well fracturing, which contacts large fractions of the formation. Conventional reservoir modelling and production forecasting methods, which rely on fluid-flow processes dominated by viscous forces, have proved very pessimistic and inaccurate. This paper presents a new approach to forecasting shale gas production through detailed modeling of gas desorption, diffusion, and non-linear flow mechanisms, in combination with a statistical representation of these processes. The model represents the porous medium as a cube, in which free gas is present, containing a sphere (SiC: Sphere in Cube model) where gas is adsorbed onto the kerogen or organic matter. The sphere is considered to consist of many layers of adsorbed gas in an onion-like structure. As pressure declines, gas desorbs first from the outermost layer of the sphere, decreasing its molecular concentration. The newly available surface area and the change in concentration trigger the diffusion of gas from the kerogen. The process continues until all of the gas present internally diffuses out of the kerogen, adsorbs onto the available surface area, and then desorbs into the nanopores and micro-fractures in the cube. Each SiC idealizes a gas pathway and is characterized by the sphere diameter and the length of the cube: the diameter allows modeling of gas storage, diffusion, and desorption, while the cube length accounts for the flow pathway in nanopores and micro-fractures.
Many of these representative but general cells of the reservoir are put together and linked to a well or hydraulic fracture. The paper quantitatively describes these processes and clarifies the geological conditions under which successful shale gas production could be expected. A numerical model has been derived and implemented in FORTRAN to develop a simulator for shale gas production, treating the spheres as a source term in each of the grid blocks. By applying SiC to field data, we demonstrate that the model provides an effective way to quickly assess gas production rates from shale formations. We also examine the effect of the model's input properties on gas production.
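The desorption part of the gas balance can be illustrated with a Langmuir isotherm, a standard way to model adsorbed shale gas; the paper's layered-sphere kinetics are not reproduced here, and V_L, P_L, and the pressures below are made-up values:

```python
def adsorbed_volume(p, v_l, p_l):
    """Langmuir isotherm: gas volume still adsorbed at pressure p, where
    v_l is the Langmuir volume and p_l the Langmuir pressure."""
    return v_l * p / (p_l + p)

def desorbed_on_decline(p_init, p_now, v_l, p_l):
    """Gas released from the kerogen 'sphere' as reservoir pressure falls,
    becoming a source term for the free gas in the cube."""
    return adsorbed_volume(p_init, v_l, p_l) - adsorbed_volume(p_now, v_l, p_l)

# Illustrative decline from 3000 psi to 1000 psi:
released = desorbed_on_decline(3000.0, 1000.0, v_l=100.0, p_l=500.0)
```

In the SiC model, this released volume would be fed into each grid block's source term before the diffusion and non-linear flow steps.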

Keywords: adsorption, diffusion, non-linear flow, shale gas production

Procedia PDF Downloads 158