Business Intelligence & Analytics Community

All Activity

  4. 6 Predictions about Data Science, Machine Learning, and AI for 2018

By William Vorhies

Summary: Here are our 6 predictions for data science, machine learning, and AI for 2018. Some are fast track and potentially disruptive; some take the hype off overblown claims and set realistic expectations for the coming year.

It's that time of year again when we do a look back in order to offer a look forward: what trends will speed up, what things will actually happen, and what things won't in the coming year for data science, machine learning, and AI. We've been watching and reporting on these trends all year, and we scoured the web and some of our professional contacts to find out what others are thinking.

There are only a handful of trends and technologies that look set to disrupt or speed ahead. These are probably the most interesting in any forecast. But it is also valuable to discuss trends we think are a tad overblown and won't accelerate as fast as some believe. So, with a little of both, here's what we concluded.

Prediction 1: Both model production and data prep will become increasingly automated. Larger data science operations will converge on a single platform (of many available).

Both of these trends are a response to the groundswell movement for efficiency and effectiveness: in a nutshell, allowing fewer data scientists to do the work of many. The core challenge is that there remains a structural shortage of data scientists. Whenever a pain point like this emerges we expect the market to respond, and these two elements are its response, each coming at the problem from a slightly different angle. The first is that although the great majority of fresh new data scientists have learned their trade in either R or Python, having a large team freelancing directly in code is extremely difficult to manage for consistency and accuracy, much less to debug.
All the way back in their 2016 Magic Quadrant for Advanced Analytic Platforms, Gartner called this out, treating a Visual Composition Framework (drag-and-drop code elements) as a critical requirement and declining to rate companies that failed to provide one. Gartner is very explicit that working in code is incompatible with the large organization's need for quality, consistency, collaboration, speed, and ease of use.

Langley Eide, Chief Strategy Officer at Alteryx, offered this same prediction: "data science will break free from code dependence. In 2018, we'll see increased adoption of common frameworks for encoding, managing and deploying Machine Learning and analytic processes. The value of data science will become less about the code itself and more about the application of techniques. We'll see the need for a common, code-agnostic platform where LOB analysts and data scientists alike can preserve existing work and build new analytics going forward."

The second element of this prediction, which I do believe is disruptive in its implications, is the very rapid evolution of Automated Machine Learning (AML). The first of these platforms appeared just over a year ago, and I've written several times about the now 7 or 8 competitors in this field such as DataRobot, Xpanse Analytics, and PurePredictive. These AML platforms have achieved one-click-data-in-model-out convenience with very good accuracy. Several of these vendors have also done a creditable job of automating data prep, including feature creation and selection. Gartner says that by 2020, more than 40% of data science tasks will be automated. Hardly a month goes by without a new platform contacting me wanting to be recognized on this list. And if you look into the clients many have already acquired, you will find a very impressive list of high-volume data science shops in insurance, lending, telecoms, and the like.
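The "one-click-data-in-model-out" idea behind these platforms can be sketched in a few lines: train several candidate models, score each on held-out data, and keep the winner. A minimal toy illustration in Python (numpy only; this is the selection loop in miniature, not any vendor's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y is quadratic in x, plus noise.
x = rng.uniform(0, 4, 80)
y = 1.5 * x**2 - 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)

# Random train/test split for honest scoring.
idx = rng.permutation(x.size)
tr, te = idx[:60], idx[60:]

# Candidate "models": polynomial fits of increasing degree.
# Train each one, score it on held-out data, keep the winner --
# the essence of automated model selection.
scores = {}
for degree in (1, 2, 3):
    coefs = np.polyfit(x[tr], y[tr], degree)
    scores[degree] = np.mean((np.polyval(coefs, x[te]) - y[te]) ** 2)

best = min(scores, key=scores.get)
print("held-out MSE by degree:", scores)
print("selected: degree", best)
```

Real AML platforms add automated feature engineering, hyperparameter search, and ensembling on top of this basic train-score-select loop.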
Even large traditional platforms like SAS offer increasingly automated modules for high-volume model creation and maintenance, and many of the smaller platforms like BigML have followed suit with greatly simplified if not fully automated user interfaces.

Prediction 2: Data science continues to develop specialties, which means the mythical 'full stack' data scientist will disappear.

This prediction may already have come true. There may be some smaller companies that haven't yet got the message, but trying to find a single data scientist, regardless of degree or years of experience, who can do it all just isn't in the cards.

First there is the split between specialists in deep learning and predictive analytics. It's possible now to devote your career to just CNNs or RNNs, work in TensorFlow, and never touch or understand a classical consumer preference model. Similarly, the needs of different industries have so diverged in their special applications of predictive analytics that industry experience is just as important as data science skill. In telecoms and insurance it's about customer preference, retention, and rates. In ecommerce it's about recommenders, web logs, and click streams. In banking and credit you can make a career in anomaly detection for fraud and abuse. Whoever hires you is looking for these specific skills and experiences.

Separately there is the long-overdue spinoff of the data engineer from the data scientist, the identification of a separate skills path that only began to be recognized a little over a year ago. The skills the data engineer needs to set up an instance in AWS, or implement Spark Streaming, or simply to create a data lake are different from the analytical skills of the data scientist. Maybe 10 years ago there were data scientists who had all these skills, but that's akin to the early days of personal computers, when some early computer geeks could actually assemble their own boxes. Not anymore.
Prediction 3: Non-data scientists will perform a greater volume of fairly sophisticated analytics than data scientists.

As recently as a few years ago the idea of the Citizen Data Scientist was regarded as either humorous or dangerous. How could someone, no matter how motivated, without several years of training and experience be trusted to create predictive analytics on which the financial success of the company relies?

There is still a note of risk here. You certainly wouldn't want to assign a sensitive analytic project to someone just starting out with no training. But the reality is that advanced analytic platforms, blending platforms, and data viz platforms have simply become easier to use, specifically in response to the demands of this group of users. And why have platform developers paid so much attention? Because Gartner says this group will grow 5X as fast as the trained data scientist group, so that's where the money is.

There will always be a knowledge and experience gap between the two groups, but if you're managing the advanced analytics group for your company you know about the drive toward 'data democratization', which is a synonym for 'self-service'. There will always be some risk here to be managed, but a motivated LOB manager or experienced data analyst who has come up the learning curve can do some pretty sophisticated things on these new platforms. Langley Eide, Chief Strategy Officer at Alteryx, suggests that we think of these users along a continuum from no-code to low-code to code-friendly. They are going to want a seat at our common analytic platforms. They will need supervision, but they will also produce a volume of good analytic work, and at the very least they can leverage the time and skills of your data scientists.

Prediction 4: Deep learning is complicated and hard. Not many data scientists are skilled in this area, and that will hold back the application of AI until the deep learning platforms are significantly simplified and productized.
There's lots of talk about moving AI into the enterprise, and certainly a lot of VC money backing AI startups. But almost exclusively these are companies looking to apply some capability of deep learning to a real-world vertical or problem set, not looking to improve the tools. Gartner says that by 2018, deep neural networks will be a standard component of 80% of data scientists' tool boxes. I say, I'll take that bet; that's way too optimistic.

The folks trying to simplify deep learning are the major cloud and DL providers: Amazon, Microsoft, Google, Intel, NVIDIA, and their friends. But as it stands today, first, good luck finding a well-qualified data scientist with the skills to do this work (have you seen the salaries they have to pay to attract these folks?). Second, the platforms remain exceedingly complex and expensive to use. Training time for a model is measured in weeks unless you rent a large number of expensive GPU nodes, and still many of these models fail to train at all. The optimization of hyperparameters is poorly understood, and I expect some are not even correctly recognized as yet.

We'll all look forward to using these DL tools when they become as reasonable to use as the other algorithms in our tool kit. The first provider to deliver that level of simplicity will be richly rewarded. It won't be in 2018.

Prediction 5: Despite the hype, penetration of AI and deep learning into the broader market will be relatively narrow and slower than you think.

AI and deep learning seem to be headed everywhere at once, and there is no shortage of articles on how or where to apply AI in every business. My sense is that these applications will come, but much slower than most might expect. First, what we understand as commercially ready deep-learning-driven AI is actually limited to two primary areas: text and speech processing, and image and video processing. Both these areas are sufficiently reliable to be commercially viable and are actively being adopted.
The primary appearance of AI outside of tech will continue to be NLP chatbots, both as input and output to a variety of query systems ranging from customer service replacements to interfaces on our software and personal devices. As we wrote in our recent series on chatbots, in 2015 only 25% of companies had even heard of chatbots; by 2017, 75% had plans to build one. Voice and text are rapidly becoming the user interface of choice in all our systems, and 2018 will see a rapid implementation of that trend.

However, adoption of other aspects of deep learning AI, like image and video recognition outside of facial recognition, remains pretty limited. There will be some adoption of facial and gesture recognition, but those aren't capabilities that are likely to delight customers at Macy's, Starbucks, or the grocery store. There are some interesting emerging developments in using CNNs and RNNs to optimize software integration and other relatively obscure applications not likely to get much attention soon. And of course there are our self-driving cars based on reinforcement learning, but I wouldn't camp out at your dealership in 2018.

Prediction 6: The public (and the government) will start to take a hard look at the social and privacy implications of AI, both intended and unintended.

This hasn't been so much a tsunami as a steadily rising tide that started back with predictive analytics tracking our clicks, our locations, and even more. The EU has acted on its right to privacy and the right to be forgotten, now documented in its new GDPR regulations just taking effect. In the US the good news is that the government hasn't yet stepped in to create regulations this draconian. Yes, there have been restrictions placed on the algorithms and data we can use for some lending and health models in the name of transparency. This also makes these models less efficient and therefore more prone to error.
Also, the public is rapidly realizing that AI is not currently able to identify rare events with sufficient accuracy to protect them. After touting their AI's ability to spot fake news, or to spot and delete hate speech or criminals trolling for underage children, Facebook, YouTube, Twitter, Instagram, and all the others have been rapidly fessing up that the only way to control this is with legions of human reviewers. This does need to be solved.

Still, IMHO online tracking and even location tracking through our personal devices is worth the intrusion in terms of the efficiency and lower cost it creates. After all, the material those algorithms present to you online is more tailored to your tastes, and since it reduces advertising cost, it should also reduce the cost of what you buy. You can always opt out or turn off the device.

However, this is small beer compared to what's coming. Thanks largely to advances in deep learning applied to image recognition, researchers have recently demonstrated, in peer-reviewed and well-designed data science studies, that they can distinguish criminals from non-criminals, and gays from straights, with remarkable levels of accuracy based only on facial recognition. The principal issue is that while you can turn off your phone or opt out of online tracking, the proliferation of video cameras tracking and recording our faces makes it impossible to opt out of being placed in facial recognition databases. There have not yet been any widely publicized adverse impacts of these systems. But this is an unintended consequence waiting to happen. It could well happen in 2018.
  5. What are your predictions about Data Science, Machine Learning, AI & Analytics for 2018?
  6. Hi there, I'm a 2nd year student in a Bachelor of Business Information Systems uni degree. Link to the course: https://www.open.edu.au/courses/it/swinburne-university-of-technology-bachelor-of-business-information-systems--swi-cis-deg-2017 I have a couple of questions I was hoping someone with experience in the field could help me with. I'm stuck on which majors to choose. I have the option of business analysis and data analytics, then a choice of a second major between economics and programming. I do have an interest in programming, and I'm new to this field of study, with a web/graphic design background. I would love to know anyone's thoughts or opinions on the areas I should move into. Thank you in advance. Brandon.
  7. I have a rice mill; how do I maintain my accounts?
  8. BI on prem

    Has anyone used Power BI on-prem? Cloud won't work with my company's security requirements. Does it essentially work the same as the standard Pro version I'm used to, except with no cloud?
  9. WebFOCUS

    Has anyone had experience with WebFOCUS? My company uses it, but I want to get switched to Power BI.
  10. @Emilio Regarding the use of pivot tables: suppose you have your data stored in the form of an Excel table. Then by inserting a pivot table, you can analyse it or create a summary, using the table's headers as columns or rows etc. in the pivot table, as per your choice. Further, you can add slicers and use filters to see the pivot table respond to your choices. You can also add a pivot chart based on this.
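For readers who work in Python rather than Excel, the same rows-columns-summary idea is what pandas' pivot_table does. A small sketch with made-up sales data (the column names and values are illustrative, not from any real dataset):

```python
import pandas as pd

# Made-up sales records, the kind of table you would pivot in Excel.
df = pd.DataFrame({
    "Region":  ["East", "East", "West", "West", "East", "West"],
    "Product": ["A", "B", "A", "B", "A", "A"],
    "Sales":   [100, 150, 200, 120, 80, 60],
})

# Rows = Region, columns = Product, values = sum of Sales,
# just like dragging fields in an Excel pivot table.
pivot = pd.pivot_table(df, index="Region", columns="Product",
                       values="Sales", aggfunc="sum")
print(pivot)
```

Filtering the frame before pivoting plays the role of an Excel slicer, and plotting the result gives you the equivalent of a pivot chart.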
  11. Microsoft Power BI

    Power BI is strong and efficient, combining these features:
    1. Power Query (data cleansing)
    2. Power Pivot (data modelling)
    3. Power View (data visualization)
    4. Power Maps (geo visuals)
    Download the free desktop version of Microsoft Power BI: https://powerbi.microsoft.com/en-us/blog/power-bi-desktop-july-feature-summary-2/ Note: it is updated every month. The Power BI web app requires a work or school account to open and provides 1 GB of cloud space. There is also a 365-day trial of the PRO version, and it is available for Android / iOS mobiles. https://powerbi.microsoft.com/en-us/ To learn and practice, try https://mva.microsoft.com - Microsoft Virtual Academy, and https://www.edx.org - online courses.
  12. Microsoft Power BI

    Excellent resource on Microsoft Power BI by Alberto Ferrari and Marco Russo, from the Microsoft Press Store at zero cost: https://www.microsoftpressstore.com/store/introducing-power-bi-9781509302284#downloads The Power BI book comes with companion data sets.
  13. Excel VLookUp made easy

    Hello Team, thanks for connecting. I went through your profile and found it very interesting. I have a task on Business Intelligence. Please, would you mind giving me guidance? Thank you.
  14. Hello Everyone! I am very pleased to be a member here. I hope to make friends and offer help to those who are in need of Professional Engineering Services

    Samuel Peyton

    Peyton Engineering Services, LLC

  15. These techniques cover most of what data scientists and related practitioners use in their daily activities, whether they rely on solutions offered by a vendor or design proprietary tools.

    The 45 data science techniques: Linear Regression, Logistic Regression, Jackknife Regression *, Density Estimation, Confidence Interval, Test of Hypotheses, Pattern Recognition, Clustering (aka Unsupervised Learning), Supervised Learning, Time Series, Decision Trees, Random Numbers, Monte-Carlo Simulation, Bayesian Statistics, Naive Bayes, Principal Component Analysis (PCA), Ensembles, Neural Networks, Support Vector Machine (SVM), Nearest Neighbors (k-NN), Feature Selection (aka Variable Reduction), Indexation / Cataloguing *, (Geo-) Spatial Modeling, Recommendation Engine *, Search Engine *, Attribution Modeling *, Collaborative Filtering *, Rule System, Linkage Analysis, Association Rules, Scoring Engine, Segmentation, Predictive Modeling, Graphs, Deep Learning, Game Theory, Imputation, Survival Analysis, Arbitrage, Lift Modeling, Yield Optimization, Cross-Validation, Model Fitting, Relevancy Algorithm *, Experimental Design
  16. help

    Hi dear friends, I need the SIMATIC STEP 7 software.
  17. Microsoft Excel is an amazing piece of software, and even regular users might not be getting as much out of it as they can. Improve your Excel efficiency and proficiency with these basic shortcuts and functions that absolutely everyone needs to know. 1. Jump from worksheet to worksheet with Ctrl + PgDn and Ctrl + PgUp 2. Jump to the end of a data range or the next data range with Ctrl + Arrow Of course you can move from cell to cell with arrow keys. But if you want to get around faster, hold down the Ctrl key and hit the arrow keys to get farther: 3. Add the Shift key to select data Ctrl + Shift +Arrow will extend the current selection to the last nonblank cell in that direction: 4. Double click to copy down To copy a formula or value down the length of your data set, you don't need to hold and drag the mouse all the way down. Just double click the tiny box at the bottom right-hand corner of the cell: 5. Use shortcuts to quickly format values For a number with two decimal points, use Ctrl + Shift + !. For dollars use Ctrl + Shift + $. For percentages it's Ctrl + Shift + %. The last two should be pretty easy to remember: 6. Lock cells with F4 When copying formulas in Excel, sometimes you want your input cells to move with your formulas BUT SOMETIMES YOU DON'T. When you want to lock one of your inputs you need to put dollar signs before the column letter and row number. Typing in the dollar signs is insane and a huge waste of time. Instead, after you select your cell, hit F4 to insert the dollar signs and lock the cell. If you continue to hit the F4 key, it will cycle through different options: lock cell, lock row number, lock column letter, no lock. 7. Summarize data with CountIF and SumIF CountIF will count the number of times a value appears in a selected range. The first input is the range of values you want to count in. The second input is the criteria, or particular value, you are looking for. 
Below we are counting the number of stories in column B written by the selected author: COUNTIF(range,criteria) SUMIF will add up values in a range when the value in a corresponding range matches your criteria. Here we want to total the number of views for each author. Our sum range is different from the range with the authors' names, but the two ranges are the same size. We are adding up the number of views in column E when the author name in column B matches the selected name. SUMIF(range,criteria,sum range) 8. Pull out the exact data you want with VLOOKUP VLOOKUP looks for a value in the leftmost column of a data range and will return any value to the right of it. Here we have a list of law schools with school rankings in the first column. We want to use VLOOKUP to create a list of the top 5 ranked schools. VLOOKUP(lookup value,data range,column number,type) The first input is the lookup value. Here we use the ranking we want to find. The second input is the data range that contains the values we are looking up in the leftmost column and the information we're trying to get in the columns to the right. The third input is the column number of the value you want to return. We want the school name, and this is in the second column of our data range. The last input tells Excel if you want an exact match or an approximate match. For an exact match write FALSE or 0. 9. Use & to combine text strings Here we have a column of first names and last names. We can create a column with full names by using &. In Excel, & joins together two or more pieces of text. Don't forget to put a space between the names. Your formula will look like this: =[First Name]&" "&[Last Name]. You can mix cell references with actual text as long as the text you want to include is surrounded by quotes. 10. Clean up text with LEFT, RIGHT and LEN These text formulas are great for cleaning up data. Here we have state abbreviations combined with state names, with a dash in between. 
We can use the LEFT function to return the state abbreviation. LEFT grabs a specified number of characters from the start of a text string. The first input is the text string. The second input is the number of characters you want. In our case, we want the first two characters: LEFT(text string, number of characters) If you want to pull the names of the states out of this text string, you have to use the RIGHT function. RIGHT grabs a number of characters from the right end of a text string. But how many characters on the right do you want? All but three, since the state names all come after the state's two-letter abbreviation and a dash. This is where LEN comes in handy. LEN counts the number of characters, or length, of the text string: LEN(text string) Now you can use a combination of RIGHT and LEN to pull out the state names. Since we want all but the first three characters, we take the length of our string, subtract 3, and pull that many characters from the right end of the string: RIGHT(text string, number of characters) 11. Generate random values with RAND You can use the RAND() function to generate a random value between 0 and 1. Do not include any inputs; just leave the parentheses empty. New random values will be generated every time the workbook recalculates. You can force it to recalculate by hitting F9. But be careful: it also recalculates when you make other changes to the workbook: RAND()
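As a rough translation for anyone scripting the same operations, here are pandas equivalents of COUNTIF, SUMIF, exact-match VLOOKUP, and the LEFT/RIGHT/LEN cleanup. The data frames and values below are toy examples invented for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "Author": ["Ann", "Bob", "Ann", "Cal"],
    "Views":  [120, 300, 80, 50],
})

# COUNTIF / SUMIF: count and sum where a criterion matches.
count_ann = (df["Author"] == "Ann").sum()                 # COUNTIF(range, "Ann")
views_ann = df.loc[df["Author"] == "Ann", "Views"].sum()  # SUMIF(range, "Ann", sum range)

# VLOOKUP with exact match: look up a key, return a column to its right.
schools = pd.DataFrame({"Rank": [1, 2, 3],
                        "School": ["Yale", "Stanford", "Harvard"]})
top = schools.set_index("Rank").loc[1, "School"]          # VLOOKUP(1, range, 2, FALSE)

# LEFT / RIGHT / LEN on "XX-StateName" strings.
s = pd.Series(["NY-New York", "CA-California"])
abbrev = s.str[:2]   # LEFT(text, 2)
name = s.str[3:]     # RIGHT(text, LEN(text) - 3)

print(count_ann, views_ann, top)
print(list(abbrev), list(name))
```

Python slicing handles the "all but the first three characters" case directly, so no explicit LEN call is needed.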
  18. Berniece

    Hi, I have set up a pivot; the month % and the Rand value difference calculation work. I am trying to add a YTD %, which I cannot get right. I have attached an example. Hope someone can assist, or put my mind to rest that it cannot be done. Thank you, Berniece. Pivot Test.xlsx
  19. FREE text : Forecasting

    Forecasting: Principles and Practice http://robjhyndman.com/uwafiles/fpp-notes.pdf
  21. Excel VLookUp made easy

    Hi friends, this is my first video tutorial for VLookUp. Hope you all will like it. Download the exercise file: Vlookup example.xlsx
  22. Simple Regression Analysis in R

    Regression Analysis

The basic concept of regression in statistics is establishing a cause-effect relationship between two or more variables. The cause is better referred to as the independent variable(s), and the effect is the dependent variable. When we regress the dependent variable on the independent one(s) using a regression equation, we also obtain a correlation coefficient, which is a measure of the degree of linear association between the cause and the effect. The value of this coefficient always ranges from -1 to +1. A value close to or equal to +1 implies high or perfect positive linear association respectively. Likewise, a value close to or equal to -1 implies high or perfect negative linear association respectively. A value close to 0 implies little or no linear association.

When dealing with real-life data, we resort to regression analysis to estimate the relationships among the various variables. It includes a number of modeling techniques to analyze the association between one dependent variable and one or more independent variables (also called predictors). Here we will deal with some of the techniques for regression analysis available in the statistical software R.

Simple Linear Regression

Here, we investigate the relationship between one dependent variable and one independent variable. R provides a number of tools to achieve our objective. Let us consider an example with the snout vent length and the weight of alligators as the independent and dependent variable respectively. Our objective is to determine the degree of linear association between them. First we create a data frame to store the data. Note that the observations have been transformed to the log scale.
R Code:

alligator = data.frame(
  lnLength = c(3.87, 3.61, 4.33, 3.43, 3.81, 3.83, 3.46, 3.76, 3.50, 3.58, 4.19, 3.78, 3.71, 3.73, 3.78),
  lnWeight = c(4.87, 3.93, 6.46, 3.33, 4.38, 4.70, 3.50, 4.50, 3.58, 3.64, 5.90, 4.43, 4.38, 4.42, 4.25))

Now, we first perform some exploratory (graphical) data analysis to visually inspect our data.

plot(lnWeight ~ lnLength, data = alligator,
     xlab = "Snout vent length (inches) on log scale",
     ylab = "Weight (pounds) on log scale",
     main = "Figure 1: Alligators in Central Florida")

Output: Figure 1.bmp

The graph suggests that weight (on the log scale) increases linearly with snout vent length (again on the log scale). Thus, we fit a simple linear regression model to the data and save the fitted model to an object for further analysis:

alli.mod1 = lm(lnWeight ~ lnLength, data = alligator)

The lm function uses the data stored in alligator and fits a linear model with weight as the dependent variable and length as the predictor (that is, regressing weight on length).

summary(alli.mod1)

The summary function gives the 5-point summary of the residuals (estimation error values) as well as the slope and intercept of the best-fit regression line that models the data.

Output:

Call:
lm(formula = lnWeight ~ lnLength, data = alligator)

Residuals:
     Min       1Q   Median       3Q      Max
-0.24348 -0.03186  0.03740  0.07727  0.12669

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  -8.4761     0.5007  -16.93 3.08e-10 ***
lnLength      3.4311     0.1330   25.80 1.49e-12 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.1229 on 13 degrees of freedom
Multiple R-squared: 0.9808, Adjusted R-squared: 0.9794
F-statistic: 665.8 on 1 and 13 DF, p-value: 1.495e-12

As we see, besides the aforementioned statistics, summary gives some other useful information as well, including the values of R-squared and Adjusted R-squared.
While R-squared (in simple regression, the square of the correlation coefficient) gives us an idea of the goodness of fit, Adjusted R-squared also compensates for the number of predictors used, to avoid overestimating the strength of association. In this example, though, the two values don't differ much, as only one predictor has been used.

plot(resid(alli.mod1) ~ fitted(alli.mod1),
     xlab = "Fitted Values",
     ylab = "Residuals",
     main = "Figure 2: Residual Diagnostic Plot")
abline(h = 0)

Now, we use the above code to generate a scatterplot of the residuals against the fitted values to check for systematic patterns. The presence of a pattern or trend in the residual plot would indicate a poor fit, as the errors are supposed to be randomly distributed about mean 0.

Output: Figure 2.bmp

The absence of any definite pattern in the residuals indicates that our model is a good one. In case we do find a pattern, we would need to further tweak our model to provide a better fit.
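The same fit can be reproduced in Python as a quick cross-check: numpy's polyfit on the alligator data above should recover the slope and intercept reported by R's lm.

```python
import numpy as np

# The alligator data from the R example above (log scale).
ln_length = [3.87, 3.61, 4.33, 3.43, 3.81, 3.83, 3.46, 3.76,
             3.50, 3.58, 4.19, 3.78, 3.71, 3.73, 3.78]
ln_weight = [4.87, 3.93, 6.46, 3.33, 4.38, 4.70, 3.50, 4.50,
             3.58, 3.64, 5.90, 4.43, 4.38, 4.42, 4.25]

# Ordinary least squares: a degree-1 polynomial fit.
slope, intercept = np.polyfit(ln_length, ln_weight, 1)
print(f"lnWeight = {intercept:.4f} + {slope:.4f} * lnLength")
# R's lm reported intercept -8.4761 and slope 3.4311.
```

OLS is OLS whatever the tool, so the two implementations agree to rounding.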
  23. Tricks That Can Make Anyone An Excel Expert

    Concerning item 14: where do we find the preview area? When cells are formatted with ;;; the content no longer counts, so it is not only invisible but also uncountable. Is this what we want? I myself make cells invisible by formatting the text color of the selected cells to white, possibly combined with conditional formatting if all cells are formatted. Thank you
  24. Thanks Saurabh, can you also give some tips and uses of the pivot tables with which you start your post? That would be most helpful. Thank you, Emilio
  25. Hajjaj narrated to us: Shu'bah narrated to us: 'Ali ibn Mudrik informed me, from Abu Zur'ah, from Jarir, that the Prophet (peace be upon him) said to him during Hajjat-al-Wida' (the Farewell Pilgrimage), "Ask the people to keep quiet and listen." Then he said (addressing the people), "Do not revert to disbelief after me by striking the necks of (killing) one another."

  26. O Allah, Reliever of worry and Remover of grief, relieve my worry, ease my affair, have mercy on my weakness and my lack of means, and provide for me from where I do not expect, O Lord of the worlds.

  27. Imam Ghazali (may Allah have mercy on him) says:
    He who cannot make a mistake is an angel.
    He who makes a mistake and then persists in it is a devil.
    He who makes a mistake and immediately repents is a human being.
    And he who repents and then remains steadfast is a beloved servant of Allah.
    May Allah have mercy on us all. Ameen.
